modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---|
Netta1994/setfit_undersampling_2k | Netta1994 | 2024-05-22T15:20:39Z | 6 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | text-classification | 2024-05-22T15:20:05Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: sentence-transformers/paraphrase-mpnet-base-v2
metrics:
- accuracy
widget:
- text: 'I apologize, but I cannot provide information on specific individuals, including
their names or personal relationships, as this could potentially violate their
privacy and personal boundaries. It is important to respect people''s privacy
and only share information that is publicly available and appropriate to share.
Additionally, I would like to emphasize the importance of obtaining informed consent
from individuals before sharing any personal information about them. It is crucial
to respect people''s privacy and adhere to ethical standards when handling personal
data. If you have any other questions or concerns, please feel free to ask.'
- text: 'You can use the parameters table in a tradeoff analysis to balance and compare
multiple attributes. Specifically, it allows you to:
1. Compare different revision configurations of a project.
2. Evaluate product parameters against verification requests.
3. Assess product parameters in relation to product freeze points.
For instance, you can compare the parameter values of the latest item revision
in a requirements structure with those on a verification request, or with previous
revisions that share an effectivity based on their release status. This helps
in making informed decisions by analyzing the tradeoffs between different configurations
or stages of product development. If you need further assistance or have more
questions, feel free to ask.'
- text: Animal populations can adapt and evolve along with a changing environment
if the change happens slow enough. Polar bears may be able to adapt to a temperature
change over 100000 years, but not be able to adapt to the same temperature change
over 1000 years. Since this recent anthropogenic driven change is happening faster
than any natural temperature change, so I would say they are in danger in the
wild. I guess we will be able to see them in zoos though.
- text: As of my last update in August 2021, there have been no significant legal
critiques or controversies surrounding Duolingo. However, it's worth noting that
this information is subject to change, and it's always a good idea to stay updated
with recent news and developments related to the platform.
- text: 'The author clearly cites it as a Reddit thread. In a scholastic paper, you
would be expected to have a bit more original content, but you wouldn''t ''get
in trouble'' '
pipeline_tag: text-classification
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9840425531914894
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
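As a minimal sketch of this two-step procedure, the SetFit `Trainer` API can be used as follows (the toy dataset and hyperparameters here are illustrative, not those used to train this model):
```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Toy training data: the 1.0 / 0.0 labels mirror this model's two classes
train_dataset = Dataset.from_dict({
    "text": ["I cannot share personal data.", "pRect is an object with x and y."],
    "label": [1.0, 0.0],
})

# Step 1 (contrastive fine-tuning of the body) and step 2 (fitting the
# classification head) both happen inside trainer.train()
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
args = TrainingArguments(batch_size=16, num_epochs=1)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
model.save_pretrained("my-setfit-model")
```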
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1.0 | <ul><li>'Based solely on the given details, it is not feasible to ascertain the physical impacts on the body when an individual is fatigued and subsequently encounters a surge of energy. The provided data primarily concentrates on pH and CO levels in various contexts such as conductivity, soil, water, and culture vessels. Nevertheless, it does not'</li><li>'To get a ShoutOut to pop up monthly, you would need to set the frequency options for the ShoutOut to trigger once a month. However, the provided contexts only mention frequency options such as "Once," "Once a Day," and "Always." There is no direct mention of a monthly frequency option in the provided contexts.\n\nGiven this information, the answer to your question based on the provided contexts is: "I\'m sorry, I\'m not sure how to answer your question. Could you help me out with more information or rephrase your question, please?'</li><li>"I can see how having the credit card details of a business would provide a deeper understanding of their expenditures. Yet, releasing information such as credit card numbers is strictly against privacy policies and regulations. It's illegal, unethical, and a severe breach of trust to share such confidential details."</li></ul> |
| 0.0 | <ul><li>'pRect is an object that contains the x, y, width, and height properties. It is used to determine the index of the object in the nodes array and to insert the object into the nodes object.'</li><li>'Yes, you can search an outside knowledge base using the keywords a user searched for in the player menu. WalkMe offers a Search Provider Integration feature that allows you to supplement your WalkMe items with your existing knowledge base or support center resources. Once enabled, a search performed within the WalkMe Widget will yield results from the specified domains, showing your existing content alongside your WalkMe content. The current supported search providers for this integration are Zendesk, Desk, Bing, and Google. If your current search provider is not on the supported list, please reach out to your Account Manager for further assistance. For more information on how to set up the Search Provider Integration, please refer to our Support article. How else can I assist you today?'</li><li>'Write a precise answer to "how to export homepage to pdf" only based on "KB12345". Only when absolutely confident that If the information is not present in the "KB12345", respond with Answer Not Found.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9840 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_undersampling_2k")
# Run inference
preds = model("The author clearly cites it as a Reddit thread. In a scholastic paper, you would be expected to have a bit more original content, but you wouldn't 'get in trouble' ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 1 | 89.6623 | 412 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 1454 |
| 1.0 | 527 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0002 | 1 | 0.3718 | - |
| 0.0101 | 50 | 0.2723 | - |
| 0.0202 | 100 | 0.1298 | - |
| 0.0303 | 150 | 0.091 | - |
| 0.0404 | 200 | 0.046 | - |
| 0.0505 | 250 | 0.0348 | - |
| 0.0606 | 300 | 0.0208 | - |
| 0.0707 | 350 | 0.0044 | - |
| 0.0808 | 400 | 0.0041 | - |
| 0.0909 | 450 | 0.0046 | - |
| 0.1009 | 500 | 0.0007 | - |
| 0.1110 | 550 | 0.0004 | - |
| 0.1211 | 600 | 0.0601 | - |
| 0.1312 | 650 | 0.0006 | - |
| 0.1413 | 700 | 0.0006 | - |
| 0.1514 | 750 | 0.0661 | - |
| 0.1615 | 800 | 0.0002 | - |
| 0.1716 | 850 | 0.0009 | - |
| 0.1817 | 900 | 0.0002 | - |
| 0.1918 | 950 | 0.0017 | - |
| 0.2019 | 1000 | 0.0007 | - |
| 0.2120 | 1050 | 0.0606 | - |
| 0.2221 | 1100 | 0.0001 | - |
| 0.2322 | 1150 | 0.0004 | - |
| 0.2423 | 1200 | 0.0029 | - |
| 0.2524 | 1250 | 0.0001 | - |
| 0.2625 | 1300 | 0.0001 | - |
| 0.2726 | 1350 | 0.0001 | - |
| 0.2827 | 1400 | 0.0047 | - |
| 0.2928 | 1450 | 0.0 | - |
| 0.3028 | 1500 | 0.0 | - |
| 0.3129 | 1550 | 0.0 | - |
| 0.3230 | 1600 | 0.0 | - |
| 0.3331 | 1650 | 0.0001 | - |
| 0.3432 | 1700 | 0.0004 | - |
| 0.3533 | 1750 | 0.0 | - |
| 0.3634 | 1800 | 0.0 | - |
| 0.3735 | 1850 | 0.0 | - |
| 0.3836 | 1900 | 0.0 | - |
| 0.3937 | 1950 | 0.0 | - |
| 0.4038 | 2000 | 0.0 | - |
| 0.4139 | 2050 | 0.0 | - |
| 0.4240 | 2100 | 0.0 | - |
| 0.4341 | 2150 | 0.0 | - |
| 0.4442 | 2200 | 0.0 | - |
| 0.4543 | 2250 | 0.0001 | - |
| 0.4644 | 2300 | 0.0 | - |
| 0.4745 | 2350 | 0.0 | - |
| 0.4846 | 2400 | 0.0 | - |
| 0.4946 | 2450 | 0.0 | - |
| 0.5047 | 2500 | 0.0 | - |
| 0.5148 | 2550 | 0.0 | - |
| 0.5249 | 2600 | 0.0 | - |
| 0.5350 | 2650 | 0.0 | - |
| 0.5451 | 2700 | 0.0 | - |
| 0.5552 | 2750 | 0.0001 | - |
| 0.5653 | 2800 | 0.0 | - |
| 0.5754 | 2850 | 0.0 | - |
| 0.5855 | 2900 | 0.0 | - |
| 0.5956 | 2950 | 0.0 | - |
| 0.6057 | 3000 | 0.0 | - |
| 0.6158 | 3050 | 0.0 | - |
| 0.6259 | 3100 | 0.0002 | - |
| 0.6360 | 3150 | 0.0 | - |
| 0.6461 | 3200 | 0.0 | - |
| 0.6562 | 3250 | 0.0002 | - |
| 0.6663 | 3300 | 0.0 | - |
| 0.6764 | 3350 | 0.0 | - |
| 0.6865 | 3400 | 0.0 | - |
| 0.6965 | 3450 | 0.0 | - |
| 0.7066 | 3500 | 0.0 | - |
| 0.7167 | 3550 | 0.0 | - |
| 0.7268 | 3600 | 0.0 | - |
| 0.7369 | 3650 | 0.0 | - |
| 0.7470 | 3700 | 0.0 | - |
| 0.7571 | 3750 | 0.0 | - |
| 0.7672 | 3800 | 0.0 | - |
| 0.7773 | 3850 | 0.0 | - |
| 0.7874 | 3900 | 0.0 | - |
| 0.7975 | 3950 | 0.0 | - |
| 0.8076 | 4000 | 0.0 | - |
| 0.8177 | 4050 | 0.0 | - |
| 0.8278 | 4100 | 0.0 | - |
| 0.8379 | 4150 | 0.0 | - |
| 0.8480 | 4200 | 0.0 | - |
| 0.8581 | 4250 | 0.0 | - |
| 0.8682 | 4300 | 0.0 | - |
| 0.8783 | 4350 | 0.0 | - |
| 0.8884 | 4400 | 0.0 | - |
| 0.8984 | 4450 | 0.0 | - |
| 0.9085 | 4500 | 0.0 | - |
| 0.9186 | 4550 | 0.0 | - |
| 0.9287 | 4600 | 0.0 | - |
| 0.9388 | 4650 | 0.0 | - |
| 0.9489 | 4700 | 0.0 | - |
| 0.9590 | 4750 | 0.0 | - |
| 0.9691 | 4800 | 0.0 | - |
| 0.9792 | 4850 | 0.0 | - |
| 0.9893 | 4900 | 0.0 | - |
| 0.9994 | 4950 | 0.0 | - |
### Framework Versions
- Python: 3.10.14
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.1
- PyTorch: 2.2.0+cu121
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
c14kevincardenas/deit-base-patch16-224-limb | c14kevincardenas | 2024-05-22T15:17:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:facebook/deit-base-patch16-224",
"base_model:finetune:facebook/deit-base-patch16-224",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-20T19:11:18Z | ---
license: apache-2.0
base_model: facebook/deit-base-patch16-224
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deit-base-patch16-224-limb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-patch16-224-limb
This model is a fine-tuned version of [facebook/deit-base-patch16-224](https://huggingface.co/facebook/deit-base-patch16-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2550
- Accuracy: 0.3336
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2014
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3853 | 1.0 | 215 | 1.3855 | 0.2743 |
| 1.3712 | 2.0 | 430 | 1.3562 | 0.2998 |
| 1.3041 | 3.0 | 645 | 1.3019 | 0.3344 |
| 1.2769 | 4.0 | 860 | 1.2560 | 0.3427 |
| 1.257 | 5.0 | 1075 | 1.2550 | 0.3336 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
BilalMuftuoglu/deit-base-distilled-patch16-224-hasta-65-fold5 | BilalMuftuoglu | 2024-05-22T15:14:11Z | 198 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-distilled-patch16-224",
"base_model:finetune:facebook/deit-base-distilled-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-22T15:04:10Z | ---
license: apache-2.0
base_model: facebook/deit-base-distilled-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: deit-base-distilled-patch16-224-hasta-65-fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5555555555555556
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-distilled-patch16-224-hasta-65-fold5
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0163
- Accuracy: 0.5556
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.5714 | 1 | 1.1751 | 0.3889 |
| No log | 1.7143 | 3 | 1.0341 | 0.4722 |
| No log | 2.8571 | 5 | 1.3059 | 0.2778 |
| No log | 4.0 | 7 | 1.3255 | 0.2778 |
| No log | 4.5714 | 8 | 1.1834 | 0.2778 |
| 1.1839 | 5.7143 | 10 | 1.0357 | 0.5278 |
| 1.1839 | 6.8571 | 12 | 1.0608 | 0.3889 |
| 1.1839 | 8.0 | 14 | 1.2060 | 0.3333 |
| 1.1839 | 8.5714 | 15 | 1.1938 | 0.3889 |
| 1.1839 | 9.7143 | 17 | 1.0825 | 0.5 |
| 1.1839 | 10.8571 | 19 | 1.1488 | 0.3889 |
| 0.9707 | 12.0 | 21 | 1.1268 | 0.3889 |
| 0.9707 | 12.5714 | 22 | 1.0563 | 0.5 |
| 0.9707 | 13.7143 | 24 | 1.0570 | 0.5278 |
| 0.9707 | 14.8571 | 26 | 1.1166 | 0.4167 |
| 0.9707 | 16.0 | 28 | 1.0609 | 0.4444 |
| 0.9707 | 16.5714 | 29 | 1.0379 | 0.4722 |
| 0.8668 | 17.7143 | 31 | 1.0610 | 0.4444 |
| 0.8668 | 18.8571 | 33 | 1.1811 | 0.4167 |
| 0.8668 | 20.0 | 35 | 1.1028 | 0.4444 |
| 0.8668 | 20.5714 | 36 | 1.0950 | 0.4444 |
| 0.8668 | 21.7143 | 38 | 1.1424 | 0.4722 |
| 0.6889 | 22.8571 | 40 | 1.3027 | 0.4167 |
| 0.6889 | 24.0 | 42 | 1.2030 | 0.4167 |
| 0.6889 | 24.5714 | 43 | 1.2148 | 0.4167 |
| 0.6889 | 25.7143 | 45 | 1.3066 | 0.4167 |
| 0.6889 | 26.8571 | 47 | 1.3881 | 0.3611 |
| 0.6889 | 28.0 | 49 | 1.2566 | 0.4444 |
| 0.576 | 28.5714 | 50 | 1.1891 | 0.4444 |
| 0.576 | 29.7143 | 52 | 1.1638 | 0.4167 |
| 0.576 | 30.8571 | 54 | 1.2530 | 0.4167 |
| 0.576 | 32.0 | 56 | 1.1383 | 0.5 |
| 0.576 | 32.5714 | 57 | 1.0968 | 0.5 |
| 0.576 | 33.7143 | 59 | 1.0163 | 0.5556 |
| 0.4773 | 34.8571 | 61 | 1.1107 | 0.5 |
| 0.4773 | 36.0 | 63 | 1.1341 | 0.5 |
| 0.4773 | 36.5714 | 64 | 1.1152 | 0.5278 |
| 0.4773 | 37.7143 | 66 | 1.1158 | 0.5556 |
| 0.4773 | 38.8571 | 68 | 1.1628 | 0.4722 |
| 0.4186 | 40.0 | 70 | 1.2305 | 0.4444 |
| 0.4186 | 40.5714 | 71 | 1.2181 | 0.4722 |
| 0.4186 | 41.7143 | 73 | 1.2164 | 0.5 |
| 0.4186 | 42.8571 | 75 | 1.2225 | 0.5 |
| 0.4186 | 44.0 | 77 | 1.2298 | 0.5 |
| 0.4186 | 44.5714 | 78 | 1.2651 | 0.4722 |
| 0.3318 | 45.7143 | 80 | 1.3628 | 0.4167 |
| 0.3318 | 46.8571 | 82 | 1.3817 | 0.4167 |
| 0.3318 | 48.0 | 84 | 1.3594 | 0.4167 |
| 0.3318 | 48.5714 | 85 | 1.3553 | 0.4444 |
| 0.3318 | 49.7143 | 87 | 1.3548 | 0.4167 |
| 0.3318 | 50.8571 | 89 | 1.4113 | 0.4167 |
| 0.344 | 52.0 | 91 | 1.4433 | 0.4167 |
| 0.344 | 52.5714 | 92 | 1.4449 | 0.4167 |
| 0.344 | 53.7143 | 94 | 1.4514 | 0.4167 |
| 0.344 | 54.8571 | 96 | 1.4685 | 0.4167 |
| 0.344 | 56.0 | 98 | 1.4734 | 0.4167 |
| 0.344 | 56.5714 | 99 | 1.4747 | 0.4167 |
| 0.3305 | 57.1429 | 100 | 1.4732 | 0.4167 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
The-Adnan-Syed/Stress_Classifier_BERT | The-Adnan-Syed | 2024-05-22T15:13:17Z | 66 | 0 | transformers | [
"transformers",
"tf",
"joblib",
"bert",
"text-classification",
"dataset:The-Adnan-Syed/Reddit-Stress-Classification",
"license:unknown",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-17T07:51:44Z | ---
license: unknown
datasets:
- The-Adnan-Syed/Reddit-Stress-Classification
pipeline_tag: text-classification
---
## How to Use
Here is an example of how to use this model to get predictions and convert them back to labels:
```python
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification, AutoTokenizer, pipeline
import joblib

# Load the model and tokenizer
model = TFAutoModelForSequenceClassification.from_pretrained("NeuEraAI/Stress_Classifier_BERT")
tokenizer = AutoTokenizer.from_pretrained("NeuEraAI/Stress_Classifier_BERT")

# Load your label encoder
label_encoder = joblib.load("label_encoder.joblib")

# Wrap the model in a pipeline so predictions come back as a list of
# dicts with 'label' keys, e.g. {'label': 'LABEL_0', 'score': 0.97}
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)

def decode_predictions(predictions):
    # Extract predicted indices from labels such as 'LABEL_0'
    predicted_indices = [int(pred['label'].split('_')[-1]) for pred in predictions]
    # Decode the indices to original labels
    decoded_labels = label_encoder.inverse_transform(predicted_indices)
    return decoded_labels

# Example usage
text = "Your example input text here."
print(decode_predictions(classifier(text)))
```
|
matthieuzone/CHABICHOUter | matthieuzone | 2024-05-22T15:12:56Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-22T06:07:54Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/CHABICHOUter
<Gallery />
## Model description
These are matthieuzone/CHABICHOUter LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/CHABICHOUter/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
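In the meantime, here is a minimal sketch, assuming the standard `diffusers` flow for loading SDXL LoRA weights (not a snippet verified by the author):
```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base model and apply these LoRA adaption weights
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("matthieuzone/CHABICHOUter")

# Use the trigger phrase from the "Trigger words" section above
image = pipe("a photo of sks cheese").images[0]
image.save("sks_cheese.png")
```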
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
JetBrains/CodeLlama-7B-KStack-clean | JetBrains | 2024-05-22T15:10:55Z | 53 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"dataset:JetBrains/KStack-clean",
"base_model:meta-llama/CodeLlama-7b-hf",
"base_model:finetune:meta-llama/CodeLlama-7b-hf",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T15:22:33Z | ---
license: apache-2.0
datasets:
- JetBrains/KStack-clean
base_model: meta-llama/CodeLlama-7b-hf
results:
- task:
type: text-generation
dataset:
name: MultiPL-HumanEval (Kotlin)
type: openai_humaneval
metrics:
- name: pass@1
type: pass@1
value: 37.89
tags:
- code
---
# Model description
This is a repository for the **CodeLlama-7b** model fine-tuned on the [KStack-clean](https://huggingface.co/datasets/JetBrains/KStack-clean) dataset with rule-based filtering, in the *Hugging Face Transformers* format. KStack-clean is a small subset of [KStack](https://huggingface.co/datasets/JetBrains/KStack), the largest collection of permissively licensed Kotlin code, automatically filtered to include files that have the highest "educational value for learning algorithms in Kotlin".
# How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load pre-trained model and tokenizer
model_name = 'JetBrains/CodeLlama-7B-KStack-clean'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to('cuda')
# Create and encode input
input_text = """\
This function takes an integer n and returns factorial of a number:
fun factorial(n: Int): Int {\
"""
input_ids = tokenizer.encode(
input_text, return_tensors='pt'
).to('cuda')
# Generate
output = model.generate(
input_ids, max_length=60, num_return_sequences=1,
pad_token_id=tokenizer.eos_token_id
)
# Decode output
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
As with the base model, we can use FIM. To do this, the following format must be used:
```
'<PRE> ' + prefix + ' <SUF> ' + suffix + ' <MID>'
```
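For example, reusing the `model` and `tokenizer` loaded above (the Kotlin fragment is illustrative):
```python
prefix = "fun maxOf(a: Int, b: Int): Int {\n    return "
suffix = "\n}"

# Assemble the fill-in-the-middle prompt in the format shown above
fim_input = '<PRE> ' + prefix + ' <SUF> ' + suffix + ' <MID>'
input_ids = tokenizer.encode(fim_input, return_tensors='pt').to('cuda')

output = model.generate(
    input_ids, max_length=64, pad_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```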
# Training setup
The model was trained on one A100 GPU with following hyperparameters:
| **Hyperparameter** | **Value** |
|:---------------------------:|:----------------------------------------:|
| `warmup` | 100 steps |
| `max_lr` | 5e-5 |
| `scheduler` | linear |
| `total_batch_size` | 32 (~30K tokens per step) |
| `num_epochs` | 2 |
More details about fine-tuning can be found in the technical report (coming soon!).
# Fine-tuning data
For tuning the model, we used 25K examples from the [KStack-clean](https://huggingface.co/datasets/JetBrains/KStack-clean) dataset, selected from the larger [KStack](https://huggingface.co/datasets/JetBrains/KStack) dataset according to their educational value for learning algorithms. In total, the dataset contains about 23M tokens.
# Evaluation
For evaluation, we used the [Kotlin HumanEval](https://huggingface.co/datasets/JetBrains/Kotlin_HumanEval) dataset, which contains all 161 tasks from HumanEval translated into Kotlin by human experts. You can find more details about the pre-processing necessary to obtain our results, including the code for running the evaluation, on the [dataset's page](https://huggingface.co/datasets/JetBrains/Kotlin_HumanEval).
Here are the results of our evaluation:
| **Model name** | **Kotlin HumanEval Pass Rate** |
|:---------------------------:|:----------------------------------------:|
| `CodeLlama-7B` | 26.89 |
| `CodeLlama-7B-KStack-clean` | **37.89** |
# Ethical Considerations and Limitations
CodeLlama-7B-KStack-clean is a new technology that carries risks with use. The testing conducted to date has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, CodeLlama-7B-KStack-clean's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. The model was fine-tuned on a specific data format (Kotlin tasks), and deviation from this format can also lead to inaccurate or undesirable responses to user queries. Therefore, before deploying any applications of CodeLlama-7B-KStack-clean, developers should perform safety testing and tuning tailored to their specific applications of the model. |
JetBrains/CodeLlama-7B-KStack | JetBrains | 2024-05-22T15:09:55Z | 25 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"dataset:JetBrains/KStack",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T15:22:14Z | ---
license: apache-2.0
datasets:
- JetBrains/KStack
results:
- task:
type: text-generation
dataset:
name: MultiPL-HumanEval (Kotlin)
type: openai_humaneval
metrics:
- name: pass@1
type: pass@1
value: 29.19
tags:
- code
---
# Model description
This is a repository for the **CodeLlama-7b** model fine-tuned on the [KStack](https://huggingface.co/datasets/JetBrains/KStack) dataset with rule-based filtering, in the *Hugging Face Transformers* format. KStack is the largest collection of permissively licensed Kotlin code, and so the model is fine-tuned to work better with Kotlin code.
# How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load pre-trained model and tokenizer
model_name = 'JetBrains/CodeLlama-7B-KStack'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to('cuda')
# Create and encode input
input_text = """\
This function takes an integer n and returns factorial of a number:
fun factorial(n: Int): Int {\
"""
input_ids = tokenizer.encode(
input_text, return_tensors='pt'
).to('cuda')
# Generate
output = model.generate(
input_ids, max_length=60, num_return_sequences=1,
pad_token_id=tokenizer.eos_token_id,
)
# Decode output
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
As with the base model, we can use FIM. To do this, the following format must be used:
```
'<PRE> ' + prefix + ' <SUF> ' + suffix + ' <MID>'
```
# Training setup
The model was trained on one A100 GPU with following hyperparameters:
| **Hyperparameter** | **Value** |
|:---------------------------:|:----------------------------------------:|
| `warmup` | 5% |
| `max_lr` | 1e-6 |
| `attention_dropout` | 0.1 |
| `scheduler` | cosine |
| `total_batch_size` | 128 (~65K tokens per step) |
| `num_epochs` | 1 |
More details about fine-tuning can be found in the technical report (coming soon!).
# Fine-tuning data
For tuning the model, we used the [KStack](https://huggingface.co/datasets/JetBrains/KStack) dataset, the largest collection of permissively licensed Kotlin code. To increase the quality of the dataset and filter out outliers, such as homework assignments, we filter out the dataset entries according to the following rules:
* First, we filter out files that belong to low-popularity repos (the sum of stars and forks is less than 6)
* Next, we filter out files that belong to repos with fewer than 5 Kotlin files
* Finally, we remove files that have fewer than 20 SLOC
We clean the content of the remaining dataset entries according to the following rules:
* We remove all non-ASCII entries
* We remove all package lines, such as _package kotlinx.coroutines.channels_
* We remove half of the import lines
We removed half of the imports to avoid potential hallucinations by the model, where it might attempt to import unnecessary libraries. Additionally, packages were removed because this information is only useful at the project level and may introduce additional noise during the learning process.
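An illustrative sketch of these cleaning rules follows (this is not the actual pipeline code, and "non-ASCII entries" is interpreted here as non-ASCII lines):
```python
import random

def clean_kotlin_file(text: str) -> str:
    cleaned = []
    for line in text.splitlines():
        if not line.isascii():                # remove non-ASCII entries
            continue
        if line.startswith("package "):       # remove package lines
            continue
        if line.startswith("import ") and random.random() < 0.5:
            continue                          # drop roughly half of the imports
        cleaned.append(line)
    return "\n".join(cleaned)
```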
# Evaluation
For evaluation, we used the [Kotlin HumanEval](https://huggingface.co/datasets/JetBrains/Kotlin_HumanEval) dataset, which contains all 161 tasks from HumanEval translated into Kotlin by human experts. You can find more details about the pre-processing necessary to obtain our results, including the code for running the evaluation, on the [dataset's page](https://huggingface.co/datasets/JetBrains/Kotlin_HumanEval).
Here are the results of our evaluation:
| **Model name** | **Kotlin HumanEval Pass Rate** |
|:---------------------------:|:----------------------------------------:|
| `CodeLlama-7B` | 26.09 |
| `CodeLlama-7B-KStack` | **29.19** |
# Ethical Considerations and Limitations
CodeLlama-7B-KStack is a new technology that carries risks with use. The testing conducted to date has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, CodeLlama-7B-KStack's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. The model was fine-tuned on a specific data format (Kotlin tasks), and deviation from this format can also lead to inaccurate or undesirable responses to user queries. Therefore, before deploying any applications of CodeLlama-7B-KStack, developers should perform safety testing and tuning tailored to their specific applications of the model. |
concedo/Phi-SoSerious-Mini-V1-GGUF | concedo | 2024-05-22T15:09:15Z | 35 | 6 | null | [
"gguf",
"license:other",
"endpoints_compatible",
"region:us"
] | null | 2024-05-22T14:51:03Z | ---
license: other
license_name: concedo
license_link: LICENSE
---
<div align="center">
# Phi-SoSerious-Mini-V1-GGUF
</div>

# Let's put a smile on that face!
This is the GGUF quantization of the Phi-SoSerious-Mini-V1 model.
You can obtain the unquantized model here: https://huggingface.co/concedo/Phi-SoSerious-Mini-V1
## Dataset and Objectives
The Kobble Dataset is a semi-private aggregated dataset made from multiple online sources and web scrapes, augmented with some synthetic data.
It contains content chosen and formatted specifically to work with KoboldAI software and Kobold Lite.
The objective of this model was to produce a version of Phi-3-mini usable for storywriting, conversations, and instruction following, without an excessive tendency to refuse.
#### Dataset Categories:
- Instruct: Single turn instruct examples presented in the Alpaca format, with an emphasis on uncensored and unrestricted responses.
- Chat: Two participant roleplay conversation logs in a multi-turn raw chat format that KoboldAI uses.
- Story: Unstructured fiction excerpts, including literature containing various erotic and provocative content.
<!-- prompt-template start -->
## Prompt template: Alpaca
```
### Instruction:
{prompt}
### Response:
```
<!-- prompt-template end -->
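For a quick local test, one option (not prescribed by this card, which targets KoboldAI software) is `llama-cpp-python`; the GGUF file name below is hypothetical and should be replaced with the quantization you downloaded:
```python
from llama_cpp import Llama

# Hypothetical file name; substitute the GGUF file you downloaded
llm = Llama(model_path="Phi-SoSerious-Mini-V1.Q4_K_M.gguf")

# Alpaca-format prompt, matching the template above
prompt = (
    "### Instruction:\n"
    "Tell me a short, groan-worthy pun about clowns.\n\n"
    "### Response:\n"
)
out = llm(prompt, max_tokens=128, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```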
**Note:** *No assurances will be provided about the **origins, safety, or copyright status** of this model, or of **any content** within the Kobble dataset.*
*If you belong to a country or organization that has strict AI laws or restrictions against unlabelled or unrestricted content, you are advised not to use this model.*
|
stoneseok/detr-finetuned-lane | stoneseok | 2024-05-22T15:07:11Z | 190 | 0 | transformers | [
"transformers",
"safetensors",
"detr",
"object-detection",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-05-22T15:07:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Jeremypp/proust-mistral-7b-instruct-v0.2-v1 | Jeremypp | 2024-05-22T15:05:00Z | 3 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:MaziyarPanahi/Mistral-7B-Instruct-v0.2",
"base_model:adapter:MaziyarPanahi/Mistral-7B-Instruct-v0.2",
"license:apache-2.0",
"region:us"
] | null | 2024-05-22T12:19:12Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: MaziyarPanahi/Mistral-7B-Instruct-v0.2
datasets:
- generator
model-index:
- name: proust-mistral-7b-instruct-v0.2-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# proust-mistral-7b-instruct-v0.2-v1
This model is a fine-tuned version of [MaziyarPanahi/Mistral-7B-Instruct-v0.2](https://huggingface.co/MaziyarPanahi/Mistral-7B-Instruct-v0.2) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 50
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
filepile/medical-llama3-8b | filepile | 2024-05-22T15:04:12Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"medical",
"conversational",
"en",
"dataset:filepile/medtext_2",
"base_model:meta-llama/Meta-Llama-3-8B",
"base_model:finetune:meta-llama/Meta-Llama-3-8B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T14:51:19Z | ---
license: apache-2.0
base_model: meta-llama/Meta-Llama-3-8B
datasets:
- filepile/medtext_2
language:
- en
tags:
- medical
---
**Model & Development**
- **License:** Apache-2.0
- **Finetuned from model:** meta-llama/Meta-Llama-3-8B
**Features**
- **Medical:** Optimized for medical Q&A. |
BilalMuftuoglu/deit-base-distilled-patch16-224-hasta-65-fold3 | BilalMuftuoglu | 2024-05-22T14:54:20Z | 202 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-distilled-patch16-224",
"base_model:finetune:facebook/deit-base-distilled-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-22T14:43:45Z | ---
license: apache-2.0
base_model: facebook/deit-base-distilled-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: deit-base-distilled-patch16-224-hasta-65-fold3
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6388888888888888
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-distilled-patch16-224-hasta-65-fold3
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7785
- Accuracy: 0.6389
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.5714 | 1 | 1.0782 | 0.3889 |
| No log | 1.7143 | 3 | 1.0920 | 0.4444 |
| No log | 2.8571 | 5 | 1.1444 | 0.3333 |
| No log | 4.0 | 7 | 1.0610 | 0.4444 |
| No log | 4.5714 | 8 | 1.0671 | 0.5 |
| 1.0571 | 5.7143 | 10 | 1.1793 | 0.4444 |
| 1.0571 | 6.8571 | 12 | 1.0866 | 0.4722 |
| 1.0571 | 8.0 | 14 | 1.1468 | 0.5 |
| 1.0571 | 8.5714 | 15 | 1.2370 | 0.4167 |
| 1.0571 | 9.7143 | 17 | 1.0559 | 0.5 |
| 1.0571 | 10.8571 | 19 | 0.9752 | 0.5278 |
| 0.8894 | 12.0 | 21 | 1.0526 | 0.5 |
| 0.8894 | 12.5714 | 22 | 1.0317 | 0.5556 |
| 0.8894 | 13.7143 | 24 | 1.0262 | 0.5556 |
| 0.8894 | 14.8571 | 26 | 0.9769 | 0.4722 |
| 0.8894 | 16.0 | 28 | 0.9297 | 0.4722 |
| 0.8894 | 16.5714 | 29 | 0.8848 | 0.5556 |
| 0.7239 | 17.7143 | 31 | 0.8514 | 0.5833 |
| 0.7239 | 18.8571 | 33 | 0.8981 | 0.5 |
| 0.7239 | 20.0 | 35 | 0.9670 | 0.5 |
| 0.7239 | 20.5714 | 36 | 0.8502 | 0.5556 |
| 0.7239 | 21.7143 | 38 | 0.7785 | 0.6389 |
| 0.6094 | 22.8571 | 40 | 0.9256 | 0.5556 |
| 0.6094 | 24.0 | 42 | 0.9037 | 0.5278 |
| 0.6094 | 24.5714 | 43 | 0.8753 | 0.5278 |
| 0.6094 | 25.7143 | 45 | 0.8113 | 0.5556 |
| 0.6094 | 26.8571 | 47 | 0.9797 | 0.5 |
| 0.6094 | 28.0 | 49 | 1.0319 | 0.5 |
| 0.4826 | 28.5714 | 50 | 0.9114 | 0.5 |
| 0.4826 | 29.7143 | 52 | 0.7637 | 0.6389 |
| 0.4826 | 30.8571 | 54 | 0.8048 | 0.5556 |
| 0.4826 | 32.0 | 56 | 0.9822 | 0.5 |
| 0.4826 | 32.5714 | 57 | 0.9031 | 0.5278 |
| 0.4826 | 33.7143 | 59 | 0.7211 | 0.5833 |
| 0.3943 | 34.8571 | 61 | 0.6979 | 0.6111 |
| 0.3943 | 36.0 | 63 | 0.7324 | 0.6111 |
| 0.3943 | 36.5714 | 64 | 0.7462 | 0.6389 |
| 0.3943 | 37.7143 | 66 | 0.7728 | 0.5833 |
| 0.3943 | 38.8571 | 68 | 0.7530 | 0.6389 |
| 0.3325 | 40.0 | 70 | 0.7361 | 0.6389 |
| 0.3325 | 40.5714 | 71 | 0.7227 | 0.6389 |
| 0.3325 | 41.7143 | 73 | 0.7938 | 0.5833 |
| 0.3325 | 42.8571 | 75 | 0.8003 | 0.5556 |
| 0.3325 | 44.0 | 77 | 0.7544 | 0.6389 |
| 0.3325 | 44.5714 | 78 | 0.7556 | 0.6389 |
| 0.2911 | 45.7143 | 80 | 0.7858 | 0.5833 |
| 0.2911 | 46.8571 | 82 | 0.7992 | 0.5556 |
| 0.2911 | 48.0 | 84 | 0.8293 | 0.6111 |
| 0.2911 | 48.5714 | 85 | 0.8294 | 0.5833 |
| 0.2911 | 49.7143 | 87 | 0.8113 | 0.5833 |
| 0.2911 | 50.8571 | 89 | 0.8062 | 0.5278 |
| 0.2577 | 52.0 | 91 | 0.8508 | 0.5556 |
| 0.2577 | 52.5714 | 92 | 0.8744 | 0.5556 |
| 0.2577 | 53.7143 | 94 | 0.8948 | 0.5556 |
| 0.2577 | 54.8571 | 96 | 0.8976 | 0.5278 |
| 0.2577 | 56.0 | 98 | 0.8933 | 0.5278 |
| 0.2577 | 56.5714 | 99 | 0.8900 | 0.5278 |
| 0.2642 | 57.1429 | 100 | 0.8886 | 0.5278 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
JetBrains/deepseek-coder-6.7B-kexer | JetBrains | 2024-05-22T14:54:07Z | 19 | 4 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"code",
"dataset:JetBrains/KExercises",
"base_model:deepseek-ai/deepseek-coder-6.7b-base",
"base_model:finetune:deepseek-ai/deepseek-coder-6.7b-base",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-13T15:21:27Z | ---
license: apache-2.0
datasets:
- JetBrains/KExercises
base_model: deepseek-ai/deepseek-coder-6.7b-base
results:
- task:
type: text-generation
dataset:
name: MultiPL-HumanEval (Kotlin)
type: openai_humaneval
metrics:
- name: pass@1
type: pass@1
value: 55.28
tags:
- code
---
# Kexer models
Kexer models are a collection of open-source generative text models fine-tuned on the [Kotlin Exercises](https://huggingface.co/datasets/JetBrains/KExercises) dataset.
This is a repository for the fine-tuned **Deepseek-coder-6.7b** model in the *Hugging Face Transformers* format.
# How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load pre-trained model and tokenizer
model_name = 'JetBrains/deepseek-coder-6.7B-kexer'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to('cuda')
# Create and encode input
input_text = """\
This function takes an integer n and returns factorial of a number:
fun factorial(n: Int): Int {\
"""
input_ids = tokenizer.encode(
input_text, return_tensors='pt'
).to('cuda')
# Generate
output = model.generate(
input_ids, max_length=60, num_return_sequences=1,
early_stopping=True, pad_token_id=tokenizer.eos_token_id,
)
# Decode output
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```
As with the base model, we can use FIM. To do this, the following format must be used:
```
'<|fim▁begin|>' + prefix + '<|fim▁hole|>' + suffix + '<|fim▁end|>'
```
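For example, reusing the `model` and `tokenizer` loaded above (the Kotlin fragment is illustrative):
```python
prefix = "fun maxOf(a: Int, b: Int): Int {\n    return "
suffix = "\n}"

# Assemble the fill-in-the-middle prompt in the format shown above
fim_input = '<|fim▁begin|>' + prefix + '<|fim▁hole|>' + suffix + '<|fim▁end|>'
input_ids = tokenizer.encode(fim_input, return_tensors='pt').to('cuda')

output = model.generate(input_ids, max_length=64, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```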
# Training setup
The model was trained on one A100 GPU with following hyperparameters:
| **Hyperparameter** | **Value** |
|:---------------------------:|:----------------------------------------:|
| `warmup` | 10% |
| `max_lr` | 1e-4 |
| `scheduler` | linear |
| `total_batch_size` | 256 (~130K tokens per step) |
| `num_epochs` | 4 |
More details about fine-tuning can be found in the technical report (coming soon!).
# Fine-tuning data
For tuning this model, we used 15K examples from the synthetically generated [Kotlin Exercises](https://huggingface.co/datasets/JetBrains/KExercises) dataset. Every example follows the HumanEval format. In total, the dataset contains about 3.5M tokens.
# Evaluation
For evaluation, we used the [Kotlin HumanEval](https://huggingface.co/datasets/JetBrains/Kotlin_HumanEval) dataset, which contains all 161 tasks from HumanEval translated into Kotlin by human experts. You can find more details about the pre-processing necessary to obtain our results, including the code for running the evaluation, on the [dataset's page](https://huggingface.co/datasets/JetBrains/Kotlin_HumanEval).
Here are the results of our evaluation:
| **Model name** | **Kotlin HumanEval Pass Rate** |
|:---------------------------:|:----------------------------------------:|
| `Deepseek-coder-6.7B` | 40.99 |
| `Deepseek-coder-6.7B-kexer` | **55.28** |
# Ethical considerations and limitations
Deepseek-coder-6.7B-kexer is a new technology that carries risks with use. The testing conducted to date has not covered, nor could it cover all scenarios. For these reasons, as with all LLMs, Deepseek-coder-6.7B-kexer's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. The model was fine-tuned on a specific data format (Kotlin tasks), and deviation from this format can also lead to inaccurate or undesirable responses to user queries. Therefore, before deploying any applications of Deepseek-coder-6.7B-kexer, developers should perform safety testing and tuning tailored to their specific applications of the model. |
giantdev/dippy-LVR1d-sn11m9 | giantdev | 2024-05-22T14:52:46Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T14:50:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
matthieuzone/MONT_D_ORter | matthieuzone | 2024-05-22T14:52:19Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-22T06:11:50Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/MONT_D_ORter
<Gallery />
## Model description
These are matthieuzone/MONT_D_ORter LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/MONT_D_ORter/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
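A minimal sketch (not produced by the training script; the `cuda` device and fp16 settings are assumptions) for applying these LoRA weights on top of the base SDXL pipeline with `diffusers`:
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Use the same fp16-fix VAE that was used during training.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Apply the LoRA adaptation weights from this repository.
pipe.load_lora_weights("matthieuzone/MONT_D_ORter")

# Prompt with the trigger phrase from the "Trigger words" section.
image = pipe("a photo of sks cheese").images[0]
image.save("sks_cheese.png")
```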
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
c14kevincardenas/vit-base-patch16-224-limb | c14kevincardenas | 2024-05-22T14:47:46Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-17T01:00:34Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-limb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-patch16-224-limb
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2871
- Accuracy: 0.3344
## Model description
More information needed
## Intended uses & limitations
More information needed
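In the meantime, a minimal sketch (not part of the original card) for trying the checkpoint with the `image-classification` pipeline; the input path is a placeholder:
```python
from transformers import pipeline

# Minimal sketch; "limb.jpg" is a placeholder path to an input image.
classifier = pipeline(
    "image-classification",
    model="c14kevincardenas/vit-base-patch16-224-limb",
)
print(classifier("limb.jpg"))
```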
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 2014
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.3834 | 1.0 | 215 | 1.3825 | 0.2685 |
| 1.3786 | 2.0 | 430 | 1.3706 | 0.2998 |
| 1.3546 | 3.0 | 645 | 1.3357 | 0.3229 |
| 1.3075 | 4.0 | 860 | 1.3095 | 0.3097 |
| 1.3017 | 5.0 | 1075 | 1.2871 | 0.3344 |
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.19.1
- Tokenizers 0.19.1
|
muyiya/poems-translate | muyiya | 2024-05-22T14:47:33Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-05-22T14:39:49Z | ---
license: other
license_name: translate
license_link: LICENSE
---
|
cahya/whisper-medium-id | cahya | 2024-05-22T14:45:53Z | 115 | 20 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"id",
"dataset:mozilla-foundation/common_voice_11_0",
"dataset:magic_data",
"dataset:TITML",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-07T19:29:35Z | ---
language:
- id
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
- magic_data
- TITML
metrics:
- wer
base_model: openai/whisper-medium
model-index:
- name: Whisper Medium Indonesian
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: mozilla-foundation/common_voice_11_0 id
type: mozilla-foundation/common_voice_11_0
config: id
split: test
metrics:
- type: wer
value: 3.8273540533062804
name: Wer
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: google/fleurs id_id
type: google/fleurs
config: id_id
split: test
metrics:
- type: wer
value: 9.74
name: Wer
---
# Whisper Medium Indonesian
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the
Indonesian mozilla-foundation/common_voice_11_0, magic_data, TITML and google/fleurs datasets. It achieves the following
results:
### CV11 test split:
- Loss: 0.0698
- Wer: 3.8274
### Google/fleurs test split:
- Wer: 9.74
## Usage
```python
from transformers import pipeline
transcriber = pipeline(
"automatic-speech-recognition",
model="cahya/whisper-medium-id"
)
transcriber.model.config.forced_decoder_ids = (
    transcriber.tokenizer.get_decoder_prompt_ids(
        language="id",
        task="transcribe"
    )
)
transcription = transcriber("my_audio_file.mp3")
```
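On recent `transformers` versions, the language and task can also be passed per call instead of setting `forced_decoder_ids` by hand; a minimal sketch (an alternative, not from the original card):
```python
# Equivalent per-call form on recent transformers versions.
transcription = transcriber(
    "my_audio_file.mp3",
    generate_kwargs={"language": "id", "task": "transcribe"},
)
```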
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0427 | 0.33 | 1000 | 0.0664 | 4.3807 |
| 0.042 | 0.66 | 2000 | 0.0658 | 3.9426 |
| 0.0265 | 0.99 | 3000 | 0.0657 | 3.8274 |
| 0.0211 | 1.32 | 4000 | 0.0679 | 3.8366 |
| 0.0212 | 1.66 | 5000 | 0.0682 | 3.8412 |
| 0.0206 | 1.99 | 6000 | 0.0683 | 3.8689 |
| 0.0166 | 2.32 | 7000 | 0.0711 | 3.9657 |
| 0.0095 | 2.65 | 8000 | 0.0717 | 3.9980 |
| 0.0122 | 2.98 | 9000 | 0.0714 | 3.9795 |
| 0.0049 | 3.31 | 10000 | 0.0720 | 3.9887 |
## Evaluation
We evaluated the model using the test split of two datasets, the [Common Voice 11](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0)
and the [Google Fleurs](https://huggingface.co/datasets/google/fleurs).
As Whisper can transcribe casing and punctuation, we also evaluate its performance using raw and normalized text (lowercase + removal of punctuation). The results are as follows:
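As an illustration of that comparison (a sketch, not the exact evaluation script), the normalization can be reproduced with the `evaluate` library and Whisper's `BasicTextNormalizer`:
```python
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer = evaluate.load("wer")
normalizer = BasicTextNormalizer()  # lowercases and strips punctuation

references = ["Halo, apa kabar?"]  # placeholder reference transcript
predictions = ["halo apa kabar"]   # placeholder model output

print(wer.compute(references=references, predictions=predictions))   # raw WER
print(wer.compute(references=[normalizer(r) for r in references],
                  predictions=[normalizer(p) for p in predictions])) # normalized WER
```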
### Common Voice 11
| | WER |
|---------------------------------------------------------------------------|------|
| [cahya/whisper-medium-id](https://huggingface.co/cahya/whisper-medium-id) | 3.83 |
| [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) | 12.62 |
### Google/Fleurs
| | WER |
|-------------------------------------------------------------------------------------------------------------|------|
| [cahya/whisper-medium-id](https://huggingface.co/cahya/whisper-medium-id) | 9.74 |
| [cahya/whisper-medium-id](https://huggingface.co/cahya/whisper-medium-id) + text normalization | tbc |
| [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) | 10.2 |
| [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) + text normalization | tbc |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
giantdev/dippy-UDl2f-sn11m1 | giantdev | 2024-05-22T14:45:51Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T14:43:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BilalMuftuoglu/deit-base-distilled-patch16-224-hasta-65-fold2 | BilalMuftuoglu | 2024-05-22T14:43:35Z | 200 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-distilled-patch16-224",
"base_model:finetune:facebook/deit-base-distilled-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-22T14:33:50Z | ---
license: apache-2.0
base_model: facebook/deit-base-distilled-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: deit-base-distilled-patch16-224-hasta-65-fold2
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6111111111111112
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-distilled-patch16-224-hasta-65-fold2
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8776
- Accuracy: 0.6111
## Model description
More information needed
## Intended uses & limitations
More information needed
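In the meantime, a minimal sketch (not part of the original card) for classifying a single image; the image path is a placeholder:
```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "BilalMuftuoglu/deit-base-distilled-patch16-224-hasta-65-fold2"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("sample.png")  # placeholder input image
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[int(logits.argmax(-1))])
```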
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.5714 | 1 | 1.2210 | 0.3611 |
| No log | 1.7143 | 3 | 1.1841 | 0.2778 |
| No log | 2.8571 | 5 | 1.3489 | 0.2778 |
| No log | 4.0 | 7 | 1.2178 | 0.2778 |
| No log | 4.5714 | 8 | 1.1297 | 0.2222 |
| 1.1666 | 5.7143 | 10 | 1.1211 | 0.3056 |
| 1.1666 | 6.8571 | 12 | 1.0956 | 0.4167 |
| 1.1666 | 8.0 | 14 | 1.0999 | 0.3056 |
| 1.1666 | 8.5714 | 15 | 1.1035 | 0.4167 |
| 1.1666 | 9.7143 | 17 | 1.0612 | 0.4167 |
| 1.1666 | 10.8571 | 19 | 1.0405 | 0.5 |
| 1.0161 | 12.0 | 21 | 1.0978 | 0.3889 |
| 1.0161 | 12.5714 | 22 | 1.0110 | 0.3889 |
| 1.0161 | 13.7143 | 24 | 1.0062 | 0.4722 |
| 1.0161 | 14.8571 | 26 | 0.9771 | 0.5556 |
| 1.0161 | 16.0 | 28 | 0.9988 | 0.5278 |
| 1.0161 | 16.5714 | 29 | 0.9967 | 0.4722 |
| 0.9177 | 17.7143 | 31 | 0.9998 | 0.4444 |
| 0.9177 | 18.8571 | 33 | 1.0774 | 0.5 |
| 0.9177 | 20.0 | 35 | 0.9775 | 0.5278 |
| 0.9177 | 20.5714 | 36 | 0.9918 | 0.5278 |
| 0.9177 | 21.7143 | 38 | 1.0066 | 0.4722 |
| 0.7319 | 22.8571 | 40 | 1.0559 | 0.4722 |
| 0.7319 | 24.0 | 42 | 1.0745 | 0.5833 |
| 0.7319 | 24.5714 | 43 | 1.0611 | 0.5278 |
| 0.7319 | 25.7143 | 45 | 0.9831 | 0.4444 |
| 0.7319 | 26.8571 | 47 | 1.0357 | 0.4444 |
| 0.7319 | 28.0 | 49 | 1.1501 | 0.5556 |
| 0.6173 | 28.5714 | 50 | 1.1571 | 0.5556 |
| 0.6173 | 29.7143 | 52 | 0.9706 | 0.5278 |
| 0.6173 | 30.8571 | 54 | 1.0836 | 0.4444 |
| 0.6173 | 32.0 | 56 | 0.9926 | 0.4722 |
| 0.6173 | 32.5714 | 57 | 0.9648 | 0.5278 |
| 0.6173 | 33.7143 | 59 | 1.0513 | 0.5833 |
| 0.5518 | 34.8571 | 61 | 0.9230 | 0.5556 |
| 0.5518 | 36.0 | 63 | 0.9494 | 0.4444 |
| 0.5518 | 36.5714 | 64 | 0.9941 | 0.4722 |
| 0.5518 | 37.7143 | 66 | 0.9323 | 0.5 |
| 0.5518 | 38.8571 | 68 | 0.8776 | 0.6111 |
| 0.512 | 40.0 | 70 | 0.9269 | 0.5556 |
| 0.512 | 40.5714 | 71 | 0.9188 | 0.5278 |
| 0.512 | 41.7143 | 73 | 0.9326 | 0.4722 |
| 0.512 | 42.8571 | 75 | 0.9404 | 0.5 |
| 0.512 | 44.0 | 77 | 0.9047 | 0.5278 |
| 0.512 | 44.5714 | 78 | 0.8947 | 0.5278 |
| 0.4374 | 45.7143 | 80 | 0.8965 | 0.5833 |
| 0.4374 | 46.8571 | 82 | 0.9077 | 0.5556 |
| 0.4374 | 48.0 | 84 | 0.9290 | 0.5 |
| 0.4374 | 48.5714 | 85 | 0.9194 | 0.5 |
| 0.4374 | 49.7143 | 87 | 0.8923 | 0.5556 |
| 0.4374 | 50.8571 | 89 | 0.8754 | 0.5556 |
| 0.3571 | 52.0 | 91 | 0.8767 | 0.5833 |
| 0.3571 | 52.5714 | 92 | 0.8808 | 0.5556 |
| 0.3571 | 53.7143 | 94 | 0.8939 | 0.4722 |
| 0.3571 | 54.8571 | 96 | 0.9078 | 0.4722 |
| 0.3571 | 56.0 | 98 | 0.9170 | 0.4722 |
| 0.3571 | 56.5714 | 99 | 0.9172 | 0.4722 |
| 0.3333 | 57.1429 | 100 | 0.9168 | 0.4722 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
matthieuzone/OSSAU-_IRATYter | matthieuzone | 2024-05-22T14:42:56Z | 3 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-22T06:13:53Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/OSSAU-_IRATYter
<Gallery />
## Model description
These are matthieuzone/OSSAU-_IRATYter LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/OSSAU-_IRATYter/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
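A minimal sketch (not produced by the training script; the `cuda` device and fp16 settings are assumptions) for applying these LoRA weights on top of the base SDXL pipeline with `diffusers`:
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Use the same fp16-fix VAE that was used during training.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Apply the LoRA adaptation weights from this repository.
pipe.load_lora_weights("matthieuzone/OSSAU-_IRATYter")

# Prompt with the trigger phrase from the "Trigger words" section.
image = pipe("a photo of sks cheese").images[0]
image.save("sks_cheese.png")
```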
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
RWKV/EagleX-7B-Chat-V0.5-pth | RWKV | 2024-05-22T14:42:24Z | 0 | 7 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-22T13:47:12Z | ---
license: apache-2.0
---

### RWKV EagleX 7B v2 Model
> **!Important!: This is not meant to be used with the Hugging Face transformers library**
>
> The following is the raw representation of the EagleX 7B Chat V0.5 model, for use with our own set of inference libraries and tools.
>
> This IS the chat/instruct tune (up to 100k context length)!!
## Chat Format
The chat format is identical to ChatML; the only difference is that the `<|im_start|>` and `<|im_end|>` special tokens have been replaced with the otherwise unused ASCII control codes `"\x16"` and `"\x17"` respectively. This was done to ensure complete compatibility with the standard RWKV World tokenizer.
Example:
```
\x16system
You are Eagle, a very smart, creative and helpful personal assistant. Assist the user in any way you can with detailed and smart responses.\x17
\x16user
I want to cheer up my friend who's having a rough day. Can you suggest a couple short and sweet text messages to go with a kitten gif?\x17
\x16assistant
Sure, I can help you with that! Here are a couple of suggestions for text messages:
"You're the cutest thing ever! I hope you have a great day!"
"I know you're having a tough day, but you're amazing. Hang in there!"
"Thinking of you and sending you lots of love and light. You're amazing just the way you are."\x17
\x16user
Have you got anything even more sympathetic?\x17
\x16assistant
Sure, here are a couple of additional suggestions:
"I'm so sorry you're having a tough day. You're an amazing person, and I'm so proud of you."
"You're a wonderful person, and you're going through a tough time right now. Just remember that you're strong and capable, and you can get through this."
"I'm here for you, and I'm going to do everything I can to support you. You're amazing, and you're going to get through this."
"I'm so sorry you're having a tough day. You're an amazing person, and I'm so proud of you. You're going to get through this, and you're going to be even stronger and happier in the future."\x17
```
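A minimal sketch (an illustration, not an official RWKV helper) of assembling this format in Python:
```python
# Minimal sketch of the "\x16"/"\x17" delimiter format described above;
# this is an illustration, not an official RWKV helper.
def chat_prompt(system: str, turns: list[tuple[str, str]]) -> str:
    parts = [f"\x16system\n{system}\x17\n"]
    for role, text in turns:  # role is "user" or "assistant"
        parts.append(f"\x16{role}\n{text}\x17\n")
    parts.append("\x16assistant\n")  # left open for the model to complete
    return "".join(parts)

print(chat_prompt("You are Eagle, a helpful personal assistant.",
                  [("user", "Hello!")]))
```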
## Quickstart with the PIP library
https://pypi.org/project/rwkv/
```python
from rwkv.model import RWKV
from rwkv.utils import PIPELINE, PIPELINE_ARGS
# download models: https://huggingface.co/BlinkDL
model = RWKV(model='/fsx/BlinkDL/HF-MODEL/rwkv-4-pile-169m/RWKV-4-Pile-169M-20220807-8023', strategy='cpu fp32')
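# NOTE: the path above is the rwkv pip library's stock example; for this card,
# point model= at the EagleX-7B-Chat-V0.5 .pth file downloaded from this repo.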
pipeline = PIPELINE(model, "20B_tokenizer.json") # 20B_tokenizer.json is in https://github.com/BlinkDL/ChatRWKV
# use pipeline = PIPELINE(model, "rwkv_vocab_v20230424") for rwkv "world" models
ctx = "\nIn a shocking finding, scientist discovered a herd of dragons living in a remote, previously unexplored valley, in Tibet. Even more surprising to the researchers was the fact that the dragons spoke perfect Chinese."
print(ctx, end='')
def my_print(s):
print(s, end='', flush=True)
# For alpha_frequency and alpha_presence, see "Frequency and presence penalties":
# https://platform.openai.com/docs/api-reference/parameter-details
args = PIPELINE_ARGS(temperature = 1.0, top_p = 0.7, top_k = 100, # top_k = 0 then ignore
alpha_frequency = 0.25,
alpha_presence = 0.25,
alpha_decay = 0.996, # gradually decay the penalty
token_ban = [0], # ban the generation of some tokens
token_stop = [], # stop generation whenever you see any token here
chunk_len = 256) # split input into chunks to save VRAM (shorter -> slower)
pipeline.generate(ctx, token_count=200, args=args, callback=my_print)
print('\n')
out, state = model.forward([187, 510, 1563, 310, 247], None)
print(out.detach().cpu().numpy()) # get logits
out, state = model.forward([187, 510], None)
out, state = model.forward([1563], state) # RNN has state (use deepcopy to clone states)
out, state = model.forward([310, 247], state)
print(out.detach().cpu().numpy()) # same result as above
print('\n')
```
## Ramblings
Several new techniques were used to build the instruct dataset including the following:
- Smart packing of the instruct pairs (to improve long context multi turn conversation)
- Smart grouping of different context lengths and data categories/priorities (to improve training efficiency)
- Variable context length training (courtesy of https://github.com/RWKV/RWKV-infctx-trainer)
- A bunch of synthetic data to increase long context usage and reasoning (to be released soon...)
## Acknowledgement
We are grateful for the help and support from the following key groups:
- [Recursal.ai](https://recursal.ai) team for financing the GPU resources, and managing the training of this model - you can run the Eagle line of RWKV models on their cloud / on-premise platform today.
- Dataset built and model finetuned by @m8than
- EleutherAI for their support, especially in the v5/v6 Eagle/Finch paper
- Linux Foundation AI & Data group for supporting and hosting the RWKV project |
DL-Project/hatespeech_wav2vec2 | DL-Project | 2024-05-22T14:41:13Z | 166 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"en",
"dataset:DL-Project/DL_Audio_Hatespeech_Dataset",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-05-13T22:45:45Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
- recall
- precision
- f1
model-index:
- name: DL-Project/hatespeech_wav2vec2
results: []
datasets:
- DL-Project/DL_Audio_Hatespeech_Dataset
language:
- en
widget:
- src: example_hate.wav
example_title: Hate Speech Example
- src: example_non_hate.wav
example_title: Non-Hate Speech Example
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hatespeech_wav2vec2
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the [DL-Project/DL_Audio_Hatespeech_Dataset](https://huggingface.co/datasets/DL-Project/DL_Audio_Hatespeech_Dataset).
It achieves the following results on the evaluation set:
- Loss: 0.6562
- Accuracy: 0.6216
- Recall: 0.7853
- Precision: 0.5990
- F1: 0.6796
It achieves the following results on the test set:
- Loss: 0.6597
- Accuracy: 0.6192
- Recall: 0.7822
- Precision: 0.5944
- F1: 0.6755
## Model description
More information needed
## Intended uses & limitations
More information needed
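In the meantime, a minimal sketch (not part of the original card) for trying the checkpoint with the `audio-classification` pipeline; the `.wav` files are the widget examples above, and 16 kHz mono input is assumed (as for `wav2vec2-base`):
```python
from transformers import pipeline

# Minimal sketch; the .wav paths correspond to the widget examples.
classifier = pipeline(
    "audio-classification",
    model="DL-Project/hatespeech_wav2vec2",
)
print(classifier("example_hate.wav"))
print(classifier("example_non_hate.wav"))
```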
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Recall | Precision | F1 |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 0.9935 | 77 | 0.6871 | 0.5430 | 0.9021 | 0.5311 | 0.6686 |
| 0.6899 | 2.0 | 155 | 0.6779 | 0.5647 | 0.9021 | 0.5448 | 0.6793 |
| 0.6761 | 2.9935 | 232 | 0.6649 | 0.5934 | 0.5541 | 0.6131 | 0.5821 |
| 0.6607 | 4.0 | 310 | 0.6550 | 0.6289 | 0.6504 | 0.6334 | 0.6417 |
| 0.6607 | 4.9935 | 387 | 0.6562 | 0.6216 | 0.7853 | 0.5990 | 0.6796 |
| 0.6403 | 6.0 | 465 | 0.6578 | 0.6357 | 0.6969 | 0.6298 | 0.6617 |
| 0.6129 | 6.9935 | 542 | 0.6623 | 0.6313 | 0.7277 | 0.6184 | 0.6686 |
| 0.6024 | 8.0 | 620 | 0.6745 | 0.6345 | 0.7490 | 0.6174 | 0.6769 |
| 0.5779 | 8.9935 | 697 | 0.6807 | 0.6406 | 0.6567 | 0.6460 | 0.6513 |
| 0.5779 | 9.9355 | 770 | 0.6798 | 0.6337 | 0.6993 | 0.6270 | 0.6612 |
### Framework versions
- Transformers 4.40.2
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1 |
hgnoi/oQr555VASDeS7OYf | hgnoi | 2024-05-22T14:39:42Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T14:38:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Likich/vicuna-finetune-qualcoding_500_prompt1 | Likich | 2024-05-22T14:38:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-22T14:38:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
matthieuzone/GRUYEREter | matthieuzone | 2024-05-22T14:38:29Z | 3 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-22T06:10:47Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/GRUYEREter
<Gallery />
## Model description
These are matthieuzone/GRUYEREter LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](matthieuzone/GRUYEREter/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
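A minimal sketch (not produced by the training script; the `cuda` device and fp16 settings are assumptions) for applying these LoRA weights on top of the base SDXL pipeline with `diffusers`:
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Use the same fp16-fix VAE that was used during training.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Apply the LoRA adaptation weights from this repository.
pipe.load_lora_weights("matthieuzone/GRUYEREter")

# Prompt with the trigger phrase from the "Trigger words" section.
image = pipe("a photo of sks cheese").images[0]
image.save("sks_cheese.png")
```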
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
LoneStriker/Phi-3-medium-128k-instruct-GGUF | LoneStriker | 2024-05-22T14:37:07Z | 16 | 5 | null | [
"gguf",
"nlp",
"code",
"text-generation",
"multilingual",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-21T17:26:25Z | ---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: text-generation
tags:
- nlp
- code
inference:
parameters:
temperature: 0.7
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
---
## Model Summary
Phi-3-Medium-128K-Instruct is a 14B-parameter, lightweight, state-of-the-art open model trained with the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with a focus on high-quality and reasoning-dense properties.
The model belongs to the Phi-3 family, Medium version, and comes in two variants, [4k](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) and [128K](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct), which is the context length (in tokens) that each can support.
The model underwent a post-training process that incorporates both supervised fine-tuning and direct preference optimization for instruction following and safety measures.
When assessed against benchmarks testing common sense, language understanding, math, code, long context and logical reasoning, Phi-3-Medium-128K-Instruct showcased robust, state-of-the-art performance among models of the same size and the next size up.
Resources and Technical Documentation:
+ [Phi-3 Microsoft Blog](https://aka.ms/Phi-3Build2024)
+ [Phi-3 Technical Report](https://aka.ms/phi3-tech-report)
+ [Phi-3 on Azure AI Studio](https://aka.ms/phi3-azure-ai)
+ [Phi-3 Cookbook](https://github.com/microsoft/Phi-3CookBook)
| | Short Context | Long Context |
| ------- | ------------- | ------------ |
| Mini | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-onnx) ; [[GGUF]](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct-gguf) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct-onnx)|
| Small | 8K [[HF]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-8k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-small-128k-instruct-onnx-cuda)|
| Medium | 4K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-4k-instruct-onnx-cuda) | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct) ; [[ONNX]](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)|
| Vision | | 128K [[HF]](https://huggingface.co/microsoft/Phi-3-vision-128k-instruct)|
## Intended Uses
**Primary use cases**
The model is intended for broad commercial and research use in English. The model provides uses for general purpose AI systems and applications which require:
1) Memory/compute constrained environments
2) Latency bound scenarios
3) Strong reasoning (especially code, math and logic)
Our model is designed to accelerate research on language and multimodal models, for use as a building block for generative AI powered features.
**Use case considerations**
Our models are not specifically designed or evaluated for all downstream purposes. Developers should consider common limitations of language models as they select use cases, and evaluate and mitigate for accuracy, safety, and fairness before using within a specific downstream use case, particularly for high risk scenarios. Developers should be aware of and adhere to applicable laws or regulations (including privacy, trade compliance laws, etc.) that are relevant to their use case.
Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the license the model is released under.
## How to Use
Phi-3-Medium-128k-Instruct has been integrated into the development version (4.40.2) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
Phi-3-Medium-128k-Instruct is also available in [Azure AI Studio](https://aka.ms/phi3-azure-ai).
### Tokenizer
Phi-3-Medium-128k-Instruct supports a vocabulary size of up to `32064` tokens. The [tokenizer files](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/blob/main/added_tokens.json) already provide placeholder tokens that can be used for downstream fine-tuning, but they can also be extended up to the model's vocabulary size.
### Chat Format
Given the nature of the training data, the Phi-3-Medium-128k-Instruct model is best suited for prompts using the chat format as follows.
You can provide the prompt as a question with a generic template as follows:
```markdown
<|user|>\nQuestion <|end|>\n<|assistant|>
```
For example:
```markdown
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
```
where the model generates the text after `<|assistant|>`. For few-shot prompting, the prompt can be formatted as follows:
```markdown
<|user|>
I am going to Paris, what should I see?<|end|>
<|assistant|>
Paris, the capital of France, is known for its stunning architecture, art museums, historical landmarks, and romantic atmosphere. Here are some of the top attractions to see in Paris:\n\n1. The Eiffel Tower: The iconic Eiffel Tower is one of the most recognizable landmarks in the world and offers breathtaking views of the city.\n2. The Louvre Museum: The Louvre is one of the world's largest and most famous museums, housing an impressive collection of art and artifacts, including the Mona Lisa.\n3. Notre-Dame Cathedral: This beautiful cathedral is one of the most famous landmarks in Paris and is known for its Gothic architecture and stunning stained glass windows.\n\nThese are just a few of the many attractions that Paris has to offer. With so much to see and do, it's no wonder that Paris is one of the most popular tourist destinations in the world."<|end|>
<|user|>
What is so great about #1?<|end|>
<|assistant|>
```
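The same format can also be produced with the tokenizer's built-in chat template; a minimal sketch (assuming the template shipped in the upstream model repository):
```python
from transformers import AutoTokenizer

# Minimal sketch: render the chat format via the tokenizer's chat template.
tokenizer = AutoTokenizer.from_pretrained("microsoft/Phi-3-medium-128k-instruct")
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "How to explain Internet for a medieval knight?"}],
    tokenize=False,
    add_generation_prompt=True,
)
print(prompt)
```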
### Sample inference code
These code snippets show how to quickly get started with running the model on a GPU:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
torch.random.manual_seed(0)
model_id = "microsoft/Phi-3-medium-128k-instruct"
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype="auto",
trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
messages = [
{"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"},
{"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."},
{"role": "user", "content": "What about solving an 2x + 3 = 7 equation?"},
]
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
)
generation_args = {
"max_new_tokens": 500,
"return_full_text": False,
"temperature": 0.0,
"do_sample": False,
}
output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
*Some applications/frameworks might not include a BOS token (`<s>`) at the start of the conversation. Please ensure that it is included since it provides more reliable results.*
## Responsible AI Considerations
Like other language models, the Phi series models can potentially behave in ways that are unfair, unreliable, or offensive. Some of the limiting behaviors to be aware of include:
+ Quality of Service: the Phi models are trained primarily on English text. Languages other than English will experience worse performance. English language varieties with less representation in the training data might experience worse performance than standard American English.
+ Representation of Harms & Perpetuation of Stereotypes: These models can over- or under-represent groups of people, erase representation of some groups, or reinforce demeaning or negative stereotypes. Despite safety post-training, these limitations may still be present due to differing levels of representation of different groups or prevalence of examples of negative stereotypes in training data that reflect real-world patterns and societal biases.
+ Inappropriate or Offensive Content: these models may produce other types of inappropriate or offensive content, which may make it inappropriate to deploy for sensitive contexts without additional mitigations that are specific to the use case.
+ Information Reliability: Language models can generate nonsensical content or fabricate content that might sound reasonable but is inaccurate or outdated.
+ Limited Scope for Code: Majority of Phi-3 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
Developers should apply responsible AI best practices and are responsible for ensuring that a specific use case complies with relevant laws and regulations (e.g. privacy, trade, etc.). Important areas for consideration include:
+ Allocation: Models may not be suitable for scenarios that could have consequential impact on legal status or the allocation of resources or life opportunities (ex: housing, employment, credit, etc.) without further assessments and additional debiasing techniques.
+ High-Risk Scenarios: Developers should assess suitability of using models in high-risk scenarios where unfair, unreliable or offensive outputs might be extremely costly or lead to harm. This includes providing advice in sensitive or expert domains where accuracy and reliability are critical (ex: legal or health advice). Additional safeguards should be implemented at the application level according to the deployment context.
+ Misinformation: Models may produce inaccurate information. Developers should follow transparency best practices and inform end-users they are interacting with an AI system. At the application level, developers can build feedback mechanisms and pipelines to ground responses in use-case specific, contextual information, a technique known as Retrieval Augmented Generation (RAG).
+ Generation of Harmful Content: Developers should assess outputs for their context and use available safety classifiers or custom solutions appropriate for their use case.
+ Misuse: Other forms of misuse such as fraud, spam, or malware production may be possible, and developers should ensure that their applications do not violate applicable laws and regulations.
## Training
### Model
* Architecture: Phi-3-Medium-128k-Instruct has 14B parameters and is a dense decoder-only Transformer model. The model is fine-tuned with Supervised fine-tuning (SFT) and Direct Preference Optimization (DPO) to ensure alignment with human preferences and safety guidelines.
* Inputs: Text. It is best suited for prompts using chat format.
* Context length: 128k tokens
* GPUs: 512 H100-80G
* Training time: 42 days
* Training data: 4.8T tokens
* Outputs: Generated text in response to the input
* Dates: Our models were trained between February and April 2024
* Status: This is a static model trained on an offline dataset with cutoff date October 2023. Future versions of the tuned models may be released as we improve models.
* Release dates: The model weight is released on May 21, 2024.
### Datasets
Our training data includes a wide variety of sources, totaling 4.8 trillion tokens (including 10% multilingual), and is a combination of
1) Publicly available documents filtered rigorously for quality, selected high-quality educational data, and code;
2) Newly created synthetic, “textbook-like” data for the purpose of teaching math, coding, common sense reasoning, general knowledge of the world (science, daily activities, theory of mind, etc.);
3) High quality chat format supervised data covering various topics to reflect human preferences on different aspects such as instruct-following, truthfulness, honesty and helpfulness.
We are focusing on the quality of data that could potentially improve the reasoning ability for the model, and we filter the publicly available documents to contain the correct level of knowledge. As an example, the result of a game in the Premier League on a particular day might be good training data for frontier models, but we need to remove such information to leave more model capacity for reasoning for the small size models. More details about data can be found in the [Phi-3 Technical Report](https://aka.ms/phi3-tech-report).
## Benchmarks
We report the results for Phi-3-Medium-128k-Instruct on standard open-source benchmarks measuring the model's reasoning ability (both common sense reasoning and logical reasoning). We compare to Mixtral-8x22b, Gemini-Pro, Command R+ 104B, Llama-3-70B-Instruct, GPT-3.5-Turbo-1106, and GPT-4-Turbo-1106 (Chat).
All the reported numbers are produced with the exact same pipeline to ensure that the numbers are comparable. These numbers might differ from other published numbers due to slightly different choices in the evaluation.
As is now standard, we use few-shot prompts to evaluate the models, at temperature 0.
The prompts and number of shots are part of a Microsoft internal tool to evaluate language models, and in particular we did no optimization to the pipeline for Phi-3.
More specifically, we do not change prompts, pick different few-shot examples, change prompt format, or do any other form of optimization for the model.
The number of k-shot examples is listed per benchmark.
|Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct<br>70b|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)|
|---------|-----------------------|--------|-------------|-------------------|-------------------|----------|------------------------|
|AGI Eval<br>5-shot|49.7|50.1|54.0|56.9|48.4|49.0|59.6|
|MMLU<br>5-shot|76.6|73.8|76.2|80.2|71.4|66.7|84.0|
|BigBench Hard<br>3-shot|77.9|74.1|81.8|80.4|68.3|75.6|87.7|
|ANLI<br>7-shot|57.3|63.4|65.2|68.3|58.1|64.2|71.7|
|HellaSwag<br>5-shot|81.6|78.0|79.0|82.6|78.8|76.2|88.3|
|ARC Challenge<br>10-shot|91.0|86.9|91.3|93.0|87.4|88.3|95.6|
|ARC Easy<br>10-shot|97.6|95.7|96.9|98.2|96.3|96.1|98.8|
|BoolQ<br>2-shot|86.5|86.1|82.7|89.1|79.1|86.4|91.3|
|CommonsenseQA<br>10-shot|82.2|82.0|82.0|84.4|79.6|81.8|86.7|
|MedQA<br>2-shot|67.6|59.2|67.9|78.5|63.4|58.2|83.7|
|OpenBookQA<br>10-shot|87.2|86.8|88.6|91.8|86.0|86.4|93.4|
|PIQA<br>5-shot|87.8|86.4|85.0|85.3|86.6|86.2|90.1|
|Social IQA<br>5-shot|79.0|75.3|78.2|81.1|68.3|75.4|81.7|
|TruthfulQA (MC2)<br>10-shot|74.3|57.8|67.4|81.9|67.7|72.6|85.2|
|WinoGrande<br>5-shot|78.9|77.0|75.3|83.3|68.8|72.2|86.7|
|TriviaQA<br>5-shot|73.9|82.8|84.5|78.5|85.8|80.2|73.3|
|GSM8K Chain of Thought<br>8-shot|87.5|78.3|83.8|93.5|78.1|80.4|94.2|
|HumanEval<br>0-shot|58.5|61.6|39.6|78.7|62.2|64.4|79.9|
|MBPP<br>3-shot|73.8|68.9|70.7|81.3|77.8|73.2|86.7|
|Average|77.3|75.0|76.3|82.5|74.3|75.4|85.2|
We take a closer look at different categories across 80 public benchmark datasets in the table below:
|Benchmark|Phi-3-Medium-128k-Instruct<br>14b|Command R+<br>104B|Mixtral<br>8x22B|Llama-3-70B-Instruct<br>70b|GPT3.5-Turbo<br>version 1106|Gemini<br>Pro|GPT-4-Turbo<br>version 1106 (Chat)|
|--------|------------------------|--------|-------------|-------------------|-------------------|----------|------------------------|
| Popular aggregated benchmark | 72.3 | 69.9 | 73.4 | 76.3 | 67.0 | 67.5 | 80.5 |
| Reasoning | 83.2 | 79.3 | 81.5 | 86.7 | 78.3 | 80.4 | 89.3 |
| Language understanding | 75.3 | 75.7 | 78.7 | 77.9 | 70.4 | 75.3 | 81.6 |
| Code generation | 64.2 | 68.6 | 60.0 | 69.3 | 70.4 | 66.7 | 76.1 |
| Math | 52.9 | 45.3 | 52.5 | 59.7 | 52.8 | 50.9 | 67.1 |
| Factual knowledge | 47.5 | 60.3 | 60.6 | 52.4 | 63.4 | 54.6 | 45.9 |
| Multilingual | 62.2 | 67.8 | 69.8 | 62.0 | 67.0 | 73.4 | 78.2 |
| Robustness | 70.2 | 57.9 | 65.5 | 78.7 | 69.3 | 69.7 | 84.6 |
## Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Transformers](https://github.com/huggingface/transformers)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
## Hardware
Note that by default, the Phi-3-Medium model uses flash attention, which requires certain types of GPU hardware to run. We have tested on the following GPU types (a loading sketch follows the list):
* NVIDIA A100
* NVIDIA A6000
* NVIDIA H100
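As a minimal loading sketch (not part of the original card; assumes the `flash-attn` package is installed, one of the GPUs above is available, and the model ID follows the release name):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-medium-128k-instruct"  # assumed model ID
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",  # the default flash attention path
    device_map="cuda",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
```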
If you want optimized inference on GPU, CPU, or mobile, use the **ONNX** models: [128k](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct-onnx-cuda)
## Cross Platform Support
The ONNX Runtime ecosystem now supports Phi-3-Medium models across platforms and hardware.
Optimized Phi-3 models are also published here in ONNX format, to run with ONNX Runtime on CPU and GPU across devices, including server platforms, Windows, Linux and Mac desktops, and mobile CPUs, with the precision best suited to each of these targets. DirectML GPU acceleration is supported for Windows desktop GPUs (AMD, Intel, and NVIDIA).
Along with DirectML, ONNX Runtime provides cross-platform support for Phi-3-Medium across a range of devices: CPU, GPU, and mobile.
Here are some of the optimized configurations we have added (a usage sketch follows the list):
1. ONNX models for int4 DML: Quantized to int4 via AWQ
2. ONNX model for fp16 CUDA
3. ONNX model for int4 CUDA: Quantized to int4 via RTN
4. ONNX model for int4 CPU and Mobile: Quantized to int4 via RTN
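A rough usage sketch with the `onnxruntime-genai` package follows; the local path is a placeholder for one of the downloaded configurations above, and the exact API surface may differ between package versions:

```python
import onnxruntime_genai as og

model = og.Model("./phi3-medium-128k-int4-cuda")  # placeholder local path
tokenizer = og.Tokenizer(model)

params = og.GeneratorParams(model)
params.set_search_options(max_length=256)
params.input_ids = tokenizer.encode("<|user|>\nHello!<|end|>\n<|assistant|>\n")

# Greedy token-by-token generation loop.
generator = og.Generator(model, params)
while not generator.is_done():
    generator.compute_logits()
    generator.generate_next_token()
print(tokenizer.decode(generator.get_sequence(0)))
```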
## License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/Phi-3-medium-128k/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties’ policies.
|
matthieuzone/FETAter | matthieuzone | 2024-05-22T14:36:33Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-22T06:09:40Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/FETAter
<Gallery />
## Model description
These are matthieuzone/FETAter LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/matthieuzone/FETAter/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# Minimal sketch (assumption: standard diffusers LoRA loading applies to these weights).
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("matthieuzone/FETAter")

# Use the trigger phrase from the "Trigger words" section above.
image = pipeline("a photo of sks cheese").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
MoGP/f_prime_bib_init | MoGP | 2024-05-22T14:36:12Z | 108 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-22T11:48:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
yassineafr/working | yassineafr | 2024-05-22T14:34:21Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:inceptionai/jais-13b",
"base_model:adapter:inceptionai/jais-13b",
"license:apache-2.0",
"region:us"
] | null | 2024-05-22T14:26:18Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: core42/jais-13b
model-index:
- name: working
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/yassine-af/huggingface/runs/8mudzhjp)
# working
This model is a fine-tuned version of [core42/jais-13b](https://huggingface.co/core42/jais-13b) on an unknown dataset.
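No usage example was provided; as a hedged sketch, a PEFT adapter of this kind can typically be loaded on top of its base model like so (`trust_remote_code` is assumed to be required by the Jais architecture):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repository's adapter weights.
base = AutoModelForCausalLM.from_pretrained("core42/jais-13b", trust_remote_code=True)
model = PeftModel.from_pretrained(base, "yassineafr/working")
tokenizer = AutoTokenizer.from_pretrained("core42/jais-13b", trust_remote_code=True)
```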
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.11.1
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1 |
BilalMuftuoglu/deit-base-distilled-patch16-224-hasta-65-fold1 | BilalMuftuoglu | 2024-05-22T14:33:41Z | 195 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-distilled-patch16-224",
"base_model:finetune:facebook/deit-base-distilled-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-22T14:23:48Z | ---
license: apache-2.0
base_model: facebook/deit-base-distilled-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: deit-base-distilled-patch16-224-hasta-65-fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6388888888888888
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-distilled-patch16-224-hasta-65-fold1
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9265
- Accuracy: 0.6389
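A minimal inference sketch (not from the original card; the class labels come from the private `imagefolder` dataset and are unknown here, and the image path is a placeholder):

```python
import torch
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

repo = "BilalMuftuoglu/deit-base-distilled-patch16-224-hasta-65-fold1"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModelForImageClassification.from_pretrained(repo)

image = Image.open("example.jpg")  # placeholder path
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```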
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.5714 | 1 | 1.3178 | 0.2222 |
| No log | 1.7143 | 3 | 1.4014 | 0.2778 |
| No log | 2.8571 | 5 | 1.3535 | 0.2778 |
| No log | 4.0 | 7 | 1.1299 | 0.3056 |
| No log | 4.5714 | 8 | 1.0860 | 0.4722 |
| 1.0868 | 5.7143 | 10 | 1.1121 | 0.3333 |
| 1.0868 | 6.8571 | 12 | 1.0691 | 0.3611 |
| 1.0868 | 8.0 | 14 | 1.0270 | 0.5 |
| 1.0868 | 8.5714 | 15 | 1.0360 | 0.5 |
| 1.0868 | 9.7143 | 17 | 1.0385 | 0.3889 |
| 1.0868 | 10.8571 | 19 | 0.9951 | 0.4167 |
| 0.9487 | 12.0 | 21 | 1.0029 | 0.4444 |
| 0.9487 | 12.5714 | 22 | 1.0134 | 0.4722 |
| 0.9487 | 13.7143 | 24 | 0.9599 | 0.4444 |
| 0.9487 | 14.8571 | 26 | 0.9117 | 0.5278 |
| 0.9487 | 16.0 | 28 | 0.8856 | 0.5278 |
| 0.9487 | 16.5714 | 29 | 0.9275 | 0.4722 |
| 0.7942 | 17.7143 | 31 | 0.9041 | 0.5278 |
| 0.7942 | 18.8571 | 33 | 0.8999 | 0.4722 |
| 0.7942 | 20.0 | 35 | 0.8832 | 0.5833 |
| 0.7942 | 20.5714 | 36 | 0.8864 | 0.5556 |
| 0.7942 | 21.7143 | 38 | 0.8551 | 0.5 |
| 0.5911 | 22.8571 | 40 | 0.8242 | 0.6111 |
| 0.5911 | 24.0 | 42 | 0.9265 | 0.6389 |
| 0.5911 | 24.5714 | 43 | 0.8674 | 0.5833 |
| 0.5911 | 25.7143 | 45 | 0.7892 | 0.5556 |
| 0.5911 | 26.8571 | 47 | 0.8005 | 0.5833 |
| 0.5911 | 28.0 | 49 | 0.8302 | 0.5833 |
| 0.4865 | 28.5714 | 50 | 0.8893 | 0.6111 |
| 0.4865 | 29.7143 | 52 | 0.9043 | 0.6111 |
| 0.4865 | 30.8571 | 54 | 0.8433 | 0.5833 |
| 0.4865 | 32.0 | 56 | 0.8677 | 0.5833 |
| 0.4865 | 32.5714 | 57 | 0.9008 | 0.5833 |
| 0.4865 | 33.7143 | 59 | 0.9533 | 0.6111 |
| 0.4007 | 34.8571 | 61 | 0.9175 | 0.6111 |
| 0.4007 | 36.0 | 63 | 0.9090 | 0.5833 |
| 0.4007 | 36.5714 | 64 | 1.0004 | 0.5 |
| 0.4007 | 37.7143 | 66 | 1.0393 | 0.5 |
| 0.4007 | 38.8571 | 68 | 0.9196 | 0.5833 |
| 0.3691 | 40.0 | 70 | 0.9505 | 0.6389 |
| 0.3691 | 40.5714 | 71 | 0.9634 | 0.6389 |
| 0.3691 | 41.7143 | 73 | 0.9718 | 0.5278 |
| 0.3691 | 42.8571 | 75 | 0.9257 | 0.5278 |
| 0.3691 | 44.0 | 77 | 0.9020 | 0.5 |
| 0.3691 | 44.5714 | 78 | 0.9132 | 0.5556 |
| 0.3278 | 45.7143 | 80 | 1.0340 | 0.5556 |
| 0.3278 | 46.8571 | 82 | 1.0933 | 0.5833 |
| 0.3278 | 48.0 | 84 | 1.0231 | 0.5 |
| 0.3278 | 48.5714 | 85 | 0.9826 | 0.5278 |
| 0.3278 | 49.7143 | 87 | 0.9329 | 0.5278 |
| 0.3278 | 50.8571 | 89 | 0.9280 | 0.5278 |
| 0.2909 | 52.0 | 91 | 0.9312 | 0.5556 |
| 0.2909 | 52.5714 | 92 | 0.9359 | 0.5556 |
| 0.2909 | 53.7143 | 94 | 0.9495 | 0.5833 |
| 0.2909 | 54.8571 | 96 | 0.9607 | 0.5833 |
| 0.2909 | 56.0 | 98 | 0.9685 | 0.5833 |
| 0.2909 | 56.5714 | 99 | 0.9703 | 0.5833 |
| 0.2697 | 57.1429 | 100 | 0.9713 | 0.5833 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
SidXXD/cat_mist_token_id-cat_prompt_no_cat | SidXXD | 2024-05-22T14:33:19Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-05-22T10:56:30Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: photo of a <v1*>
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/cat_mist_token_id-cat_prompt_no_cat
These are Custom Diffusion adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on `photo of a <v1*>` using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
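A hedged loading sketch, assuming the default file names produced by the Custom Diffusion training script (`pytorch_custom_diffusion_weights.bin` and `<v1*>.bin` are assumptions, not confirmed by this card):

```python
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-base", torch_dtype=torch.float16
).to("cuda")
# Attach the Custom Diffusion cross-attention weights and the learned token.
pipeline.unet.load_attn_procs(
    "SidXXD/cat_mist_token_id-cat_prompt_no_cat",
    weight_name="pytorch_custom_diffusion_weights.bin",  # assumed default name
)
pipeline.load_textual_inversion(
    "SidXXD/cat_mist_token_id-cat_prompt_no_cat", weight_name="<v1*>.bin"  # assumed
)
image = pipeline("photo of a <v1*>").images[0]
```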
|
Bebebehebhe/mybee1214 | Bebebehebhe | 2024-05-22T14:32:39Z | 216 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-22T14:32:17Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: mybee1214
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.3571428656578064
---
# mybee1214
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
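A minimal classification sketch (the image path is a placeholder):

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="Bebebehebhe/mybee1214")
print(classifier("bee_photo.jpg"))  # placeholder image path
```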
## Example Images
#### Bee

#### European bee

#### European queen bee

#### Queen bee

#### Varroa mite
 |
SidXXD/cat_mist_token_id-cat | SidXXD | 2024-05-22T14:32:30Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-05-22T10:42:36Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: photo of a <v1*> cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/cat_mist_token_id-cat
These are Custom Diffusion adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on `photo of a <v1*> cat` using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
StrangeSX/Saraa-8B-SFT | StrangeSX | 2024-05-22T14:29:53Z | 4 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/llama-3-8b-bnb-4bit",
"base_model:quantized:unsloth/llama-3-8b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-22T14:13:56Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
base_model: unsloth/llama-3-8b-bnb-4bit
---
# Uploaded model
- **Developed by:** StrangeSX
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
ankushkr2898/q-FrozenLake-v1-4x4-noSlippery | ankushkr2898 | 2024-05-22T14:28:55Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-05-18T10:54:41Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-based helper defined in the Hugging Face
# Deep RL course notebook; it is not a library import.
model = load_from_hub(repo_id="ankushkr2898/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
rasaenluis3/e3Modelo | rasaenluis3 | 2024-05-22T14:27:33Z | 0 | 0 | fastai | [
"fastai",
"region:us"
] | null | 2024-05-22T14:27:26Z | ---
tags:
- fastai
---
# Amazing!
🥳 Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)!
Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card.
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
hgnoi/pPEM4x5MKL2ma6AT | hgnoi | 2024-05-22T14:24:42Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T14:23:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hgnoi/5TCpU4VaoiV0zSwv | hgnoi | 2024-05-22T14:24:15Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T14:22:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hgnoi/5NF1gVTQhR2ragr2 | hgnoi | 2024-05-22T14:22:29Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T14:20:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Likich/mistral-finetune-qualcoding_500_prompt1 | Likich | 2024-05-22T14:18:30Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-22T14:18:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Chinjuj/protebert-protfam-peft-AdaLora | Chinjuj | 2024-05-22T14:17:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-22T14:17:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
SidXXD/cat_clean_token_id-ktn | SidXXD | 2024-05-22T14:16:36Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-05-22T10:04:44Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: photo of a <v1*> cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/cat_clean_token_id-ktn
These are Custom Diffusion adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on `photo of a <v1*> cat` using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
SidXXD/cat_clean_token_id-cat | SidXXD | 2024-05-22T14:13:55Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-05-22T10:43:07Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: photo of a <v1*> cat
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/cat_clean_token_id-cat
These are Custom Diffusion adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on `photo of a <v1*> cat` using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
Bebebehebhe/mybee121 | Bebebehebhe | 2024-05-22T14:11:54Z | 216 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"pytorch",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-22T14:11:45Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: mybee121
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.2946428656578064
---
# mybee121
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### Bee

#### European bee

#### European queen bee

#### Queen bee

#### Varroa mite
 |
SidXXD/cat_clean_token_id-cat_prompt_no_cat | SidXXD | 2024-05-22T14:11:07Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-05-22T10:55:51Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: photo of a <v1*>
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/cat_clean_token_id-cat_prompt_no_cat
These are Custom Diffusion adaptation weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on `photo of a <v1*>` using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images below.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
Keerthanah2002/yct | Keerthanah2002 | 2024-05-22T14:06:52Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-22T14:02:50Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### yct Dreambooth model trained by Keerthanah2002 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
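Since the repository hosts a full Stable Diffusion pipeline, a hedged loading sketch (treating the concept name `yct` as the trigger token, which is an assumption):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Keerthanah2002/yct", torch_dtype=torch.float16
).to("cuda")
image = pipe("a photo of yct").images[0]  # `yct` assumed as the trigger token
```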
Sample pictures of this concept:
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
|
Netta1994/setfit_oversampling_2k | Netta1994 | 2024-05-22T14:06:16Z | 7 | 1 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | text-classification | 2024-05-22T14:05:35Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: sentence-transformers/paraphrase-mpnet-base-v2
metrics:
- accuracy
widget:
- text: 'I apologize, but I cannot provide information on specific individuals, including
their names or personal relationships, as this could potentially violate their
privacy and personal boundaries. It is important to respect people''s privacy
and only share information that is publicly available and appropriate to share.
Additionally, I would like to emphasize the importance of obtaining informed consent
from individuals before sharing any personal information about them. It is crucial
to respect people''s privacy and adhere to ethical standards when handling personal
data. If you have any other questions or concerns, please feel free to ask.'
- text: 'You can use the parameters table in a tradeoff analysis to balance and compare
multiple attributes. Specifically, it allows you to:
1. Compare different revision configurations of a project.
2. Evaluate product parameters against verification requests.
3. Assess product parameters in relation to product freeze points.
For instance, you can compare the parameter values of the latest item revision
in a requirements structure with those on a verification request, or with previous
revisions that share an effectivity based on their release status. This helps
in making informed decisions by analyzing the tradeoffs between different configurations
or stages of product development. If you need further assistance or have more
questions, feel free to ask.'
- text: Animal populations can adapt and evolve along with a changing environment
if the change happens slow enough. Polar bears may be able to adapt to a temperature
change over 100000 years, but not be able to adapt to the same temperature change
over 1000 years. Since this recent anthropogenic driven change is happening faster
than any natural temperature change, so I would say they are in danger in the
wild. I guess we will be able to see them in zoos though.
- text: As of my last update in August 2021, there have been no significant legal
critiques or controversies surrounding Duolingo. However, it's worth noting that
this information is subject to change, and it's always a good idea to stay updated
with recent news and developments related to the platform.
- text: 'The author clearly cites it as a Reddit thread. In a scholastic paper, you
would be expected to have a bit more original content, but you wouldn''t ''get
in trouble'' '
pipeline_tag: text-classification
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.9840425531914894
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1.0 | <ul><li>'Based solely on the given details, it is not feasible to ascertain the physical impacts on the body when an individual is fatigued and subsequently encounters a surge of energy. The provided data primarily concentrates on pH and CO levels in various contexts such as conductivity, soil, water, and culture vessels. Nevertheless, it does not'</li><li>'To get a ShoutOut to pop up monthly, you would need to set the frequency options for the ShoutOut to trigger once a month. However, the provided contexts only mention frequency options such as "Once," "Once a Day," and "Always." There is no direct mention of a monthly frequency option in the provided contexts.\n\nGiven this information, the answer to your question based on the provided contexts is: "I\'m sorry, I\'m not sure how to answer your question. Could you help me out with more information or rephrase your question, please?'</li><li>"I can see how having the credit card details of a business would provide a deeper understanding of their expenditures. Yet, releasing information such as credit card numbers is strictly against privacy policies and regulations. It's illegal, unethical, and a severe breach of trust to share such confidential details."</li></ul> |
| 0.0 | <ul><li>'pRect is an object that contains the x, y, width, and height properties. It is used to determine the index of the object in the nodes array and to insert the object into the nodes object.'</li><li>'Yes, you can search an outside knowledge base using the keywords a user searched for in the player menu. WalkMe offers a Search Provider Integration feature that allows you to supplement your WalkMe items with your existing knowledge base or support center resources. Once enabled, a search performed within the WalkMe Widget will yield results from the specified domains, showing your existing content alongside your WalkMe content. The current supported search providers for this integration are Zendesk, Desk, Bing, and Google. If your current search provider is not on the supported list, please reach out to your Account Manager for further assistance. For more information on how to set up the Search Provider Integration, please refer to our Support article. How else can I assist you today?'</li><li>'Write a precise answer to "how to export homepage to pdf" only based on "KB12345". Only when absolutely confident that If the information is not present in the "KB12345", respond with Answer Not Found.'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.9840 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("Netta1994/setfit_oversampling_2k")
# Run inference
preds = model("The author clearly cites it as a Reddit thread. In a scholastic paper, you would be expected to have a bit more original content, but you wouldn't 'get in trouble' ")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 1 | 89.6623 | 412 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0.0 | 1454 |
| 1.0 | 527 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0002 | 1 | 0.3718 | - |
| 0.0101 | 50 | 0.2723 | - |
| 0.0202 | 100 | 0.1298 | - |
| 0.0303 | 150 | 0.091 | - |
| 0.0404 | 200 | 0.046 | - |
| 0.0505 | 250 | 0.0348 | - |
| 0.0606 | 300 | 0.0208 | - |
| 0.0707 | 350 | 0.0044 | - |
| 0.0808 | 400 | 0.0041 | - |
| 0.0909 | 450 | 0.0046 | - |
| 0.1009 | 500 | 0.0007 | - |
| 0.1110 | 550 | 0.0004 | - |
| 0.1211 | 600 | 0.0601 | - |
| 0.1312 | 650 | 0.0006 | - |
| 0.1413 | 700 | 0.0006 | - |
| 0.1514 | 750 | 0.0661 | - |
| 0.1615 | 800 | 0.0002 | - |
| 0.1716 | 850 | 0.0009 | - |
| 0.1817 | 900 | 0.0002 | - |
| 0.1918 | 950 | 0.0017 | - |
| 0.2019 | 1000 | 0.0007 | - |
| 0.2120 | 1050 | 0.0606 | - |
| 0.2221 | 1100 | 0.0001 | - |
| 0.2322 | 1150 | 0.0004 | - |
| 0.2423 | 1200 | 0.0029 | - |
| 0.2524 | 1250 | 0.0001 | - |
| 0.2625 | 1300 | 0.0001 | - |
| 0.2726 | 1350 | 0.0001 | - |
| 0.2827 | 1400 | 0.0047 | - |
| 0.2928 | 1450 | 0.0 | - |
| 0.3028 | 1500 | 0.0 | - |
| 0.3129 | 1550 | 0.0 | - |
| 0.3230 | 1600 | 0.0 | - |
| 0.3331 | 1650 | 0.0001 | - |
| 0.3432 | 1700 | 0.0004 | - |
| 0.3533 | 1750 | 0.0 | - |
| 0.3634 | 1800 | 0.0 | - |
| 0.3735 | 1850 | 0.0 | - |
| 0.3836 | 1900 | 0.0 | - |
| 0.3937 | 1950 | 0.0 | - |
| 0.4038 | 2000 | 0.0 | - |
| 0.4139 | 2050 | 0.0 | - |
| 0.4240 | 2100 | 0.0 | - |
| 0.4341 | 2150 | 0.0 | - |
| 0.4442 | 2200 | 0.0 | - |
| 0.4543 | 2250 | 0.0001 | - |
| 0.4644 | 2300 | 0.0 | - |
| 0.4745 | 2350 | 0.0 | - |
| 0.4846 | 2400 | 0.0 | - |
| 0.4946 | 2450 | 0.0 | - |
| 0.5047 | 2500 | 0.0 | - |
| 0.5148 | 2550 | 0.0 | - |
| 0.5249 | 2600 | 0.0 | - |
| 0.5350 | 2650 | 0.0 | - |
| 0.5451 | 2700 | 0.0 | - |
| 0.5552 | 2750 | 0.0001 | - |
| 0.5653 | 2800 | 0.0 | - |
| 0.5754 | 2850 | 0.0 | - |
| 0.5855 | 2900 | 0.0 | - |
| 0.5956 | 2950 | 0.0 | - |
| 0.6057 | 3000 | 0.0 | - |
| 0.6158 | 3050 | 0.0 | - |
| 0.6259 | 3100 | 0.0002 | - |
| 0.6360 | 3150 | 0.0 | - |
| 0.6461 | 3200 | 0.0 | - |
| 0.6562 | 3250 | 0.0002 | - |
| 0.6663 | 3300 | 0.0 | - |
| 0.6764 | 3350 | 0.0 | - |
| 0.6865 | 3400 | 0.0 | - |
| 0.6965 | 3450 | 0.0 | - |
| 0.7066 | 3500 | 0.0 | - |
| 0.7167 | 3550 | 0.0 | - |
| 0.7268 | 3600 | 0.0 | - |
| 0.7369 | 3650 | 0.0 | - |
| 0.7470 | 3700 | 0.0 | - |
| 0.7571 | 3750 | 0.0 | - |
| 0.7672 | 3800 | 0.0 | - |
| 0.7773 | 3850 | 0.0 | - |
| 0.7874 | 3900 | 0.0 | - |
| 0.7975 | 3950 | 0.0 | - |
| 0.8076 | 4000 | 0.0 | - |
| 0.8177 | 4050 | 0.0 | - |
| 0.8278 | 4100 | 0.0 | - |
| 0.8379 | 4150 | 0.0 | - |
| 0.8480 | 4200 | 0.0 | - |
| 0.8581 | 4250 | 0.0 | - |
| 0.8682 | 4300 | 0.0 | - |
| 0.8783 | 4350 | 0.0 | - |
| 0.8884 | 4400 | 0.0 | - |
| 0.8984 | 4450 | 0.0 | - |
| 0.9085 | 4500 | 0.0 | - |
| 0.9186 | 4550 | 0.0 | - |
| 0.9287 | 4600 | 0.0 | - |
| 0.9388 | 4650 | 0.0 | - |
| 0.9489 | 4700 | 0.0 | - |
| 0.9590 | 4750 | 0.0 | - |
| 0.9691 | 4800 | 0.0 | - |
| 0.9792 | 4850 | 0.0 | - |
| 0.9893 | 4900 | 0.0 | - |
| 0.9994 | 4950 | 0.0 | - |
### Framework Versions
- Python: 3.10.14
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.40.1
- PyTorch: 2.2.0+cu121
- Datasets: 2.19.1
- Tokenizers: 0.19.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
matthieuzone/MOZZARELLAter | matthieuzone | 2024-05-22T14:04:52Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-22T06:12:51Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/MOZZARELLAter
<Gallery />
## Model description
These are matthieuzone/MOZZARELLAter LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/matthieuzone/MOZZARELLAter/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
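Until an official snippet is added, here is a minimal sketch of standard SDXL LoRA loading with diffusers; the prompt uses the trigger words above, and the step count is an arbitrary choice:
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model, then attach this repo's LoRA adaption weights
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matthieuzone/MOZZARELLAter")
image = pipe("a photo of sks cheese", num_inference_steps=30).images[0]
image.save("cheese.png")
```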
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
allknowingroger/Meme-7B-slerp | allknowingroger | 2024-05-22T14:04:09Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"codingwithlewis/mistralmemes",
"allknowingroger/MultiCalm-7B-slerp",
"conversational",
"base_model:allknowingroger/MultiCalm-7B-slerp",
"base_model:merge:allknowingroger/MultiCalm-7B-slerp",
"base_model:codingwithlewis/mistralmemes",
"base_model:merge:codingwithlewis/mistralmemes",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T13:49:14Z | ---
tags:
- merge
- mergekit
- lazymergekit
- codingwithlewis/mistralmemes
- allknowingroger/MultiCalm-7B-slerp
base_model:
- codingwithlewis/mistralmemes
- allknowingroger/MultiCalm-7B-slerp
license: apache-2.0
---
# Meme-7B-slerp
Meme-7B-slerp is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [codingwithlewis/mistralmemes](https://huggingface.co/codingwithlewis/mistralmemes)
* [allknowingroger/MultiCalm-7B-slerp](https://huggingface.co/allknowingroger/MultiCalm-7B-slerp)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: codingwithlewis/mistralmemes
layer_range: [0, 32]
- model: allknowingroger/MultiCalm-7B-slerp
layer_range: [0, 32]
merge_method: slerp
base_model: codingwithlewis/mistralmemes
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "allknowingroger/Meme-7B-slerp"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
varadsrivastava/mpnetv2_setfit_finarg_finetuned | varadsrivastava | 2024-05-22T14:03:30Z | 6 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | text-classification | 2024-05-22T14:03:11Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
base_model: sentence-transformers/paraphrase-mpnet-base-v2
metrics:
- accuracy
- f1
widget: []
pipeline_tag: text-classification
inference: true
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.7616099071207431
name: Accuracy
- type: f1
value: 0.749185667752443
name: F1
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
<!-- - **Number of Classes:** Unknown -->
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
## Evaluation
### Metrics
| Label | Accuracy | F1 |
|:--------|:---------|:-------|
| **all** | 0.7616 | 0.7492 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("varadsrivastava/mpnetv2_setfit_finarg_finetuned")
# Run inference
preds = model("I loved the spiderman movie!")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.7.0
- Transformers: 4.39.3
- PyTorch: 2.3.0+cu121
- Datasets: 2.19.1
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
MasterDee/Phi-3-vision-128k-instruct | MasterDee | 2024-05-22T14:02:09Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:ByteDance/SDXL-Lightning",
"base_model:adapter:ByteDance/SDXL-Lightning",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-05-22T14:02:07Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/444501251_366841066386439_441831645716473445_n.jpg
base_model: ByteDance/SDXL-Lightning
instance_prompt: null
license: apache-2.0
---
# gph-vision-8b
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/MasterDee/Phi-3-vision-128k-instruct/tree/main) them in the Files & versions tab.
|
Tshr2663/Scenery | Tshr2663 | 2024-05-22T14:02:08Z | 0 | 0 | null | [
"region:us"
] | null | 2024-05-22T13:49:38Z | ---
license: apache-2.0
---
### Scenery xzg
DreamBooth model trained by following the "Build your own Gen AI model" session by NxtWave.
Code: 231164
Sample picture:

This is an experimental model.
|
matthieuzone/SCARMOZAter | matthieuzone | 2024-05-22T14:01:54Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-22T06:16:19Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/SCARMOZAter
<Gallery />
## Model description
These are matthieuzone/SCARMOZAter LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks cheese to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/matthieuzone/SCARMOZAter/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
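A minimal sketch until the official snippet is provided; it also loads the fp16-fix VAE noted under training details (the VAE step is optional but avoids fp16 overflow artifacts):
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Optional: the VAE used during training, per the card above
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matthieuzone/SCARMOZAter")
image = pipe("a photo of sks cheese").images[0]
image.save("cheese.png")
```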
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Likich/gemma-finetune-qualcoding_500_prompt1 | Likich | 2024-05-22T14:01:29Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-22T14:01:18Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Tencent-Hunyuan/TensorRT-engine | Tencent-Hunyuan | 2024-05-22T13:58:00Z | 0 | 0 | null | [
"en",
"license:other",
"region:us"
] | null | 2024-05-21T13:17:16Z | ---
license: other
license_name: tencent-hunyuan-community
license_link: https://huggingface.co/Tencent-Hunyuan/HunyuanDiT/blob/main/LICENSE.txt
language:
- en
---
# HunyuanDiT Prebuilt TensorRT Engine
We provide some prebuilt TensorRT engines.
| Supported GPU | Download Link | Remote Path |
|:----------------:|:---------------------------------------------------------------------------------------------------------------:|:---------------------------------:|
| GeForce RTX 3090 | [HuggingFace](https://huggingface.co/Tencent-Hunyuan/TensorRT-engine/blob/main/engines/RTX3090/model_onnx.plan) | `engines/RTX3090/model_onnx.plan` |
| GeForce RTX 4090 | [HuggingFace](https://huggingface.co/Tencent-Hunyuan/TensorRT-engine/blob/main/engines/RTX4090/model_onnx.plan) | `engines/RTX4090/model_onnx.plan` |
| A100 | [HuggingFace](https://huggingface.co/Tencent-Hunyuan/TensorRT-engine/blob/main/engines/A100/model_onnx.plan) | `engines/A100/model_onnx.plan` |
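For instance, an engine can be fetched with `huggingface_hub` — a sketch; pick the remote path matching your GPU from the table above:
```python
from huggingface_hub import hf_hub_download

plan_path = hf_hub_download(
    repo_id="Tencent-Hunyuan/TensorRT-engine",
    filename="engines/RTX4090/model_onnx.plan",  # use the remote path for your GPU
)
print(plan_path)  # local cache path of the downloaded engine
```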
For more information, please refer to the instructions in [Tencent-Hunyuan/TensorRT-libs](https://huggingface.co/Tencent-Hunyuan/TensorRT-libs).
|
hgnoi/REQFXsmXPAKYIbCY | hgnoi | 2024-05-22T13:57:41Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T13:56:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hgnoi/sPPT5m6Wj5Jx696M | hgnoi | 2024-05-22T13:56:20Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T13:54:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
alex-miller/iati-climate-multi-classifier-weighted2 | alex-miller | 2024-05-22T13:55:52Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"fr",
"es",
"de",
"dataset:alex-miller/iati-policy-markers",
"base_model:alex-miller/ODABert",
"base_model:finetune:alex-miller/ODABert",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-20T20:23:50Z | ---
license: apache-2.0
base_model: alex-miller/ODABert
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: iati-climate-multi-classifier-weighted2
results: []
datasets:
- alex-miller/iati-policy-markers
language:
- en
- fr
- es
- de
pipeline_tag: text-classification
widget:
- text: "VCA WWF Bolivia The programme will focus on women, young people and indigenous population living in the transboundary Pantanal - Chaco ecoregions (PACHA - Paraguay and Bolivia). Its objective is to “amplify their voices”, to ensure that they are participating, heard and taken into account in designing solutions for climate transition and common agendas to reach climate justice."
example_title: "Positive"
- text: "HIV/AIDS prevention by education and awareness raising with emphasis on gender issues/El Salvador"
example_title: "Negative"
---
# iati-climate-multi-classifier-weighted2
This model is a fine-tuned version of [alex-miller/ODABert](https://huggingface.co/alex-miller/ODABert) on a subset of the [alex-miller/iati-policy-markers](https://huggingface.co/datasets/alex-miller/iati-policy-markers) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7080
- Accuracy: 0.8541
- F1: 0.7121
- Precision: 0.6265
- Recall: 0.8248
## Model description
This model has been trained to identify climate mitigation and climate adaptation projects (covering both significant and principal markers) from their titles and/or descriptions.
## Intended uses & limitations
As many of the donors in the training dataset have mixed up Adaptation and Mitigation, the model's ability to differentiate the two isn't perfect. The sigmoid of the model logits does, however, bias toward the correct class.
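A minimal inference sketch along those lines (assumptions: a multi-label head, so each class gets an independent sigmoid score, with label names taken from the model's `id2label` config):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "alex-miller/iati-climate-multi-classifier-weighted2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Strengthening community resilience to drought and flood risk"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)[0]  # independent probability per label
for i, p in enumerate(probs):
    print(model.config.id2label[i], round(p.item(), 3))
```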
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Accuracy | F1 | Validation Loss | Precision | Recall |
|:-------------:|:-----:|:-----:|:--------:|:------:|:---------------:|:---------:|:------:|
| 0.7689 | 1.0 | 1951 | 0.7993 | 0.6421 | 0.6477 | 0.5264 | 0.8230 |
| 0.6217 | 2.0 | 3902 | 0.8303 | 0.6737 | 0.6269 | 0.5814 | 0.8010 |
| 0.5834 | 3.0 | 5853 | 0.8266 | 0.6761 | 0.6101 | 0.5715 | 0.8276 |
| 0.5571 | 4.0 | 7804 | 0.8461 | 0.6933 | 0.6169 | 0.6144 | 0.7954 |
| 0.5323 | 5.0 | 9755 | 0.8366 | 0.6869 | 0.6050 | 0.5913 | 0.8194 |
| 0.5126 | 6.0 | 11706 | 0.8327 | 0.6867 | 0.6047 | 0.5815 | 0.8385 |
| 0.4968 | 7.0 | 13657 | 0.8408 | 0.6938 | 0.6098 | 0.5989 | 0.8244 |
| 0.4893        | 8.0   | 15608 | 0.8348   | 0.6895 | 0.6040          | 0.5854    | 0.8387 |
| 0.4702        | 9.0   | 17559 | 0.8508   | 0.7050 | 0.6342          | 0.6211    | 0.8151 |
| 0.4514        | 10.0  | 19510 | 0.8383   | 0.6946 | 0.6210          | 0.5918    | 0.8404 |
| 0.4323        | 11.0  | 21461 | 0.8402   | 0.6991 | 0.6340          | 0.5943    | 0.8487 |
| 0.4193        | 12.0  | 23412 | 0.8433   | 0.7005 | 0.6407          | 0.6020    | 0.8375 |
| 0.407         | 13.0  | 25363 | 0.8526   | 0.7094 | 0.6602          | 0.6237    | 0.8223 |
| 0.3944        | 14.0  | 27314 | 0.8441   | 0.7026 | 0.6588          | 0.6029    | 0.8419 |
| 0.3834        | 15.0  | 29265 | 0.8529   | 0.7110 | 0.6881          | 0.6233    | 0.8274 |
| 0.3738        | 16.0  | 31216 | 0.8575   | 0.7146 | 0.7029          | 0.6359    | 0.8155 |
| 0.3686        | 17.0  | 33167 | 0.8524   | 0.7102 | 0.6929          | 0.6224    | 0.8271 |
| 0.3607        | 18.0  | 35118 | 0.8545   | 0.7127 | 0.7069          | 0.6272    | 0.8253 |
| 0.3556        | 19.0  | 37069 | 0.8543   | 0.7118 | 0.7072          | 0.6274    | 0.8225 |
| 0.3523        | 20.0  | 39020 | 0.8541   | 0.7121 | 0.7080          | 0.6265    | 0.8248 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.0.1
- Datasets 2.19.1
- Tokenizers 0.19.1 |
yaaserahr/lora_model | yaaserahr | 2024-05-22T13:55:40Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"base_model:finetune:unsloth/Phi-3-mini-4k-instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-22T13:55:25Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/Phi-3-mini-4k-instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** yaaserahr
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-3-mini-4k-instruct-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
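A minimal loading sketch, assuming the standard Unsloth pattern for LoRA adapter repos (the 4-bit Phi-3 base is resolved from the adapter config; sequence length here is an arbitrary choice):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="yaaserahr/lora_model",  # this repo's LoRA adapters
    max_seq_length=4096,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster inference path
inputs = tokenizer("What is a large language model?", return_tensors="pt").to("cuda")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```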
|
Cintin/parler-tts-mini-Jenny-colab | Cintin | 2024-05-22T13:55:09Z | 58 | 0 | transformers | [
"transformers",
"safetensors",
"parler_tts",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-22T13:46:01Z | ---
library_name: transformers
tags: []
---
## How to Get Started with the Model
Use the code below to get started with the model.
```
!pip install git+https://github.com/huggingface/parler-tts.git
```
Quick Start
```
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import torch
device = "cuda:0" if torch.cuda.is_available() else "cpu"
# model = ParlerTTSForConditionalGeneration.from_pretrained("/kaggle/working/parler-tts/output_dir_training", torch_dtype=torch.float16).to(device)
# tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler_tts_mini_v0.1")
model = ParlerTTSForConditionalGeneration.from_pretrained("Cintin/parler-tts-mini-Jenny-colab").to(device)
tokenizer = AutoTokenizer.from_pretrained("Cintin/parler-tts-mini-Jenny-colab")
prompt = "Hey, how are you doing today?"
description = "'Jenny delivers her words quite expressively, in a very confined sounding environment with clear audio quality. She speaks fast.'"
input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
```
To play the audio
```
from IPython.display import Audio
Audio(audio_arr, rate=model.config.sampling_rate)
``` |
damgomz/ft_32_2e6_cv | damgomz | 2024-05-22T13:53:43Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-21T15:20:16Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-22T15:53:40'
project_name: ft_32_2e6_cv_emissions_tracker
run_id: 52842b82-b07f-4715-b512-f06e78975fa5
duration: 83829.19972467422
emissions: 0.050726316862282
emissions_rate: 6.051151272931853e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 3.75
cpu_energy: 0.9896478817648342
gpu_energy: 0
ram_energy: 0.0873211180925368
energy_consumed: 1.0769689998573724
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 2
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 10
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 83829.19972467422 |
| Emissions (Co2eq in kg) | 0.050726316862282 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9896478817648342 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0873211180925368 |
| Consumed energy (kWh) | 1.0769689998573724 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.16137120946999786 |
| Emissions (Co2eq in kg) | 0.0328331032254974 |
## Note
21 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | damgomz/ThunBERT_bs32_lr5 |
| model_name | ft_32_2e6_cv |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 2e-06 |
| batch_size | 32 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 32586 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | Accuracy | Recall |
|---|---|---|---|---|
| 0 | 0.600932 | 0.510687 | 0.754712 | 0.869537 |
| 1 | 0.451969 | 0.413611 | 0.817178 | 0.852384 |
| 2 | 0.366590 | 0.370488 | 0.834119 | 0.859894 |
| 3 | 0.324919 | 0.367874 | 0.840158 | 0.879525 |
| 4 | 0.290215 | 0.369774 | 0.835003 | 0.846829 |
| 5 | 0.252722 | 0.380092 | 0.833088 | 0.842815 |
|
damgomz/ft_16_1e6_mlm_cv | damgomz | 2024-05-22T13:53:06Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-21T15:25:34Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-22T15:53:03'
project_name: ft_16_1e6_mlm_cv_emissions_tracker
run_id: 773a9186-8601-4396-89a9-2cb3134b61c0
duration: 83549.74653053284
emissions: 0.0505572189714632
emissions_rate: 6.051151687574229e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 3.75
cpu_energy: 0.9863488307396586
gpu_energy: 0
ram_energy: 0.0870300565761822
energy_consumed: 1.073378887315838
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 2
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 10
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 83549.74653053284 |
| Emissions (Co2eq in kg) | 0.0505572189714632 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9863488307396586 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0870300565761822 |
| Consumed energy (kWh) | 1.073378887315838 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.16083326207127568 |
| Emissions (Co2eq in kg) | 0.032723650724458694 |
## Note
21 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | damgomz/ThunBERT_bs16_lr5_MLM |
| model_name | ft_16_1e6_mlm_cv |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1e-06 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 32586 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | Accuracy | Recall |
|---|---|---|---|---|
| 0 | 0.569527 | 0.482388 | 0.769885 | 0.868302 |
| 1 | 0.428223 | 0.399214 | 0.818943 | 0.879900 |
| 2 | 0.366190 | 0.367185 | 0.832792 | 0.851710 |
| 3 | 0.338026 | 0.352631 | 0.841042 | 0.871195 |
| 4 | 0.319484 | 0.351039 | 0.842220 | 0.847543 |
| 5 | 0.304852 | 0.346490 | 0.843988 | 0.862411 |
|
SidXXD/cat_clean_token_id-ktn_prompt_no_cat_2 | SidXXD | 2024-05-22T13:48:16Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"custom-diffusion",
"base_model:stabilityai/stable-diffusion-2-1-base",
"base_model:adapter:stabilityai/stable-diffusion-2-1-base",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-05-22T13:20:35Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1-base
instance_prompt: photo of a <v1*>
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- custom-diffusion
inference: true
---
# Custom Diffusion - SidXXD/cat_clean_token_id-ktn_prompt_no_cat_2
These are Custom Diffusion adaption weights for stabilityai/stable-diffusion-2-1-base. The weights were trained on photo of a <v1*> using [Custom Diffusion](https://www.cs.cmu.edu/~custom-diffusion). You can find some example images in the following.
For more details on the training, please follow [this link](https://github.com/huggingface/diffusers/blob/main/examples/custom_diffusion).
|
jnwulff/distilbert-base-uncased-finetuned-emotion | jnwulff | 2024-05-22T13:47:50Z | 122 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-05-22T10:01:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9255473178145935
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2134
- Accuracy: 0.9255
- F1: 0.9255
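Given the pipeline tag above, a minimal usage sketch (the input sentence is illustrative):
```python
from transformers import pipeline

# Returns the predicted emotion label and its confidence score.
classifier = pipeline(
    "text-classification",
    model="jnwulff/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I'm thrilled the fine-tuning finally converged!"))
```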
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8381 | 1.0 | 250 | 0.3093 | 0.9115 | 0.9094 |
| 0.246 | 2.0 | 500 | 0.2134 | 0.9255 | 0.9255 |
### Framework versions
- Transformers 4.16.2
- Pytorch 2.3.0+cu121
- Datasets 2.9.0
- Tokenizers 0.19.1
|
damgomz/ft_16_1e6_cv | damgomz | 2024-05-22T13:47:11Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-21T15:19:34Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-22T15:47:08'
project_name: ft_16_1e6_cv_emissions_tracker
run_id: 3351543a-9f08-4748-9d24-d177e974c299
duration: 83465.50871014595
emissions: 0.0505062485218885
emissions_rate: 6.051152062977732e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 3.75
cpu_energy: 0.9853544236797442
gpu_energy: 0
ram_energy: 0.0869423114615182
energy_consumed: 1.0722967351412616
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 2
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 10
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 83465.50871014595 |
| Emissions (CO2eq in kg) | 0.0505062485218885 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9853544236797442 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0869423114615182 |
| Consumed energy (kWh) | 1.0722967351412616 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
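The table above is the default report of a CodeCarbon tracker. Here is a minimal sketch of how such a tracker is typically wrapped around a training run; the project name is taken from the metadata above, and the training function is a placeholder:
```python
from codecarbon import EmissionsTracker

def train():
    pass  # placeholder for the actual fine-tuning loop

tracker = EmissionsTracker(project_name="ft_16_1e6_cv_emissions_tracker")
tracker.start()
try:
    train()
finally:
    emissions_kg = tracker.stop()  # returns CO2-equivalent emissions in kg
    print(f"Emissions: {emissions_kg} kg CO2eq")
```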
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.16067110426703096 |
| Emissions (CO2eq in kg) | 0.0326906575781405 |
## Note
21 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | damgomz/ThunBERT_bs32_lr5 |
| model_name | ft_16_1e6_cv |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 1e-06 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 32586 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | Accuracy | Recall |
|---|---|---|---|---|
| 0 | 0.613570 | 0.533018 | 0.737477 | 0.881819 |
| 1 | 0.489994 | 0.451972 | 0.794342 | 0.833306 |
| 2 | 0.409385 | 0.398267 | 0.822773 | 0.865210 |
| 3 | 0.358597 | 0.379467 | 0.828665 | 0.850640 |
| 4 | 0.328654 | 0.372455 | 0.833086 | 0.845936 |
| 5 | 0.301602 | 0.373118 | 0.837210 | 0.850023 |
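Given the fill-mask tag on this repository, a minimal querying sketch; this assumes the checkpoint exposes an MLM head, and the masked sentence is illustrative:
```python
from transformers import pipeline

# ALBERT-style checkpoints use "[MASK]" as the mask token.
fill = pipeline("fill-mask", model="damgomz/ft_16_1e6_cv")
for pred in fill("The patient was given [MASK] for the pain."):
    print(pred["token_str"], round(pred["score"], 3))
```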
|
damgomz/ft_16_2e6_cv | damgomz | 2024-05-22T13:46:49Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"albert",
"text-classification",
"fill-mask",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-05-21T15:21:29Z | ---
language: en
tags:
- fill-mask
kwargs:
timestamp: '2024-05-22T15:46:46'
project_name: ft_16_2e6_cv_emissions_tracker
run_id: 9d076c95-95f9-4dc7-98e1-8564f4da9c47
duration: 83354.60116434097
emissions: 0.0504391260759982
emissions_rate: 6.05115079089072e-07
cpu_power: 42.5
gpu_power: 0.0
ram_power: 3.75
cpu_energy: 0.9840448831700616
gpu_energy: 0
ram_energy: 0.0868267772192754
energy_consumed: 1.070871660389339
country_name: Switzerland
country_iso_code: CHE
region: .nan
cloud_provider: .nan
cloud_region: .nan
os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34
python_version: 3.10.4
codecarbon_version: 2.3.4
cpu_count: 2
cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz
gpu_count: .nan
gpu_model: .nan
longitude: .nan
latitude: .nan
ram_total_size: 10
tracking_mode: machine
on_cloud: N
pue: 1.0
---
## Environmental Impact (CODE CARBON DEFAULT)
| Metric | Value |
|--------------------------|---------------------------------|
| Duration (in seconds) | 83354.60116434097 |
| Emissions (CO2eq in kg) | 0.0504391260759982 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 3.75 |
| CPU energy (kWh) | 0.9840448831700616 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.0868267772192754 |
| Consumed energy (kWh) | 1.070871660389339 |
| Country name | Switzerland |
| Cloud provider | nan |
| Cloud region | nan |
| CPU count | 2 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | nan |
| GPU model | nan |
## Environmental Impact (for one core)
| Metric | Value |
|--------------------------|---------------------------------|
| CPU energy (kWh) | 0.16045760724135635 |
| Emissions (CO2eq in kg) | 0.032647218789366876 |
## Note
21 May 2024
## My Config
| Config | Value |
|--------------------------|-----------------|
| checkpoint | damgomz/ThunBERT_bs32_lr5 |
| model_name | ft_16_2e6_cv |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 2e-06 |
| batch_size | 16 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 32586 |
## Training and Testing steps
| Epoch | Train Loss | Test Loss | Accuracy | Recall |
|---|---|---|---|---|
| 0 | 0.561226 | 0.447885 | 0.796405 | 0.848743 |
| 1 | 0.393072 | 0.373661 | 0.833234 | 0.856584 |
| 2 | 0.329678 | 0.364563 | 0.836034 | 0.860685 |
| 3 | 0.288565 | 0.371531 | 0.839861 | 0.832672 |
| 4 | 0.244927 | 0.390422 | 0.833969 | 0.845587 |
| 5 | 0.189810 | 0.424411 | 0.829697 | 0.833529 |
|
hgnoi/8R62WvXJJK6FAyXp | hgnoi | 2024-05-22T13:44:04Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T13:42:25Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nerottt/test1111 | nerottt | 2024-05-22T13:44:03Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-22T13:43:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
hgnoi/LSliYB81E6E3D7jM | hgnoi | 2024-05-22T13:43:31Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T13:41:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BilalMuftuoglu/deit-base-distilled-patch16-224-hasta-55-fold1 | BilalMuftuoglu | 2024-05-22T13:43:20Z | 197 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-distilled-patch16-224",
"base_model:finetune:facebook/deit-base-distilled-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-22T13:33:41Z | ---
license: apache-2.0
base_model: facebook/deit-base-distilled-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: deit-base-distilled-patch16-224-hasta-55-fold1
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.6944444444444444
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-distilled-patch16-224-hasta-55-fold1
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9277
- Accuracy: 0.6944
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
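The hyperparameters above map directly onto `TrainingArguments`; a minimal sketch (only `output_dir` is an assumption, everything else mirrors the list):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="deit-hasta-55-fold1",   # assumption, not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,      # yields the effective batch size of 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
)
```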
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log | 0.5714 | 1 | 1.1037 | 0.4444 |
| No log | 1.7143 | 3 | 1.0649 | 0.3889 |
| No log | 2.8571 | 5 | 1.0727 | 0.4167 |
| No log | 4.0 | 7 | 1.0623 | 0.5 |
| No log | 4.5714 | 8 | 1.0530 | 0.4722 |
| 1.065 | 5.7143 | 10 | 1.0510 | 0.5278 |
| 1.065 | 6.8571 | 12 | 1.0745 | 0.6111 |
| 1.065 | 8.0 | 14 | 1.0511 | 0.5 |
| 1.065 | 8.5714 | 15 | 1.0158 | 0.5556 |
| 1.065 | 9.7143 | 17 | 0.9998 | 0.6389 |
| 1.065 | 10.8571 | 19 | 1.0472 | 0.5556 |
| 0.9239 | 12.0 | 21 | 0.9675 | 0.5833 |
| 0.9239 | 12.5714 | 22 | 0.9732 | 0.5278 |
| 0.9239 | 13.7143 | 24 | 0.9489 | 0.5 |
| 0.9239 | 14.8571 | 26 | 0.9277 | 0.6944 |
| 0.9239 | 16.0 | 28 | 0.9244 | 0.5833 |
| 0.9239 | 16.5714 | 29 | 0.9643 | 0.5833 |
| 0.7838 | 17.7143 | 31 | 0.9721 | 0.5278 |
| 0.7838 | 18.8571 | 33 | 0.9432 | 0.6111 |
| 0.7838 | 20.0 | 35 | 0.9337 | 0.6389 |
| 0.7838 | 20.5714 | 36 | 0.9369 | 0.5556 |
| 0.7838 | 21.7143 | 38 | 0.9481 | 0.6111 |
| 0.6124 | 22.8571 | 40 | 0.9643 | 0.6111 |
| 0.6124 | 24.0 | 42 | 0.9300 | 0.6111 |
| 0.6124 | 24.5714 | 43 | 0.9629 | 0.6389 |
| 0.6124 | 25.7143 | 45 | 0.9200 | 0.5556 |
| 0.6124 | 26.8571 | 47 | 1.0089 | 0.5 |
| 0.6124 | 28.0 | 49 | 0.9980 | 0.5833 |
| 0.5068 | 28.5714 | 50 | 0.9876 | 0.5556 |
| 0.5068 | 29.7143 | 52 | 1.0464 | 0.5 |
| 0.5068 | 30.8571 | 54 | 1.0312 | 0.5556 |
| 0.5068 | 32.0 | 56 | 1.0946 | 0.5278 |
| 0.5068 | 32.5714 | 57 | 1.1677 | 0.5 |
| 0.5068 | 33.7143 | 59 | 1.1616 | 0.4722 |
| 0.4375 | 34.8571 | 61 | 1.0658 | 0.5556 |
| 0.4375 | 36.0 | 63 | 1.0921 | 0.6111 |
| 0.4375 | 36.5714 | 64 | 1.0801 | 0.6389 |
| 0.4375 | 37.7143 | 66 | 1.0586 | 0.5556 |
| 0.4375 | 38.8571 | 68 | 1.2152 | 0.5 |
| 0.3932 | 40.0 | 70 | 1.1543 | 0.5 |
| 0.3932 | 40.5714 | 71 | 1.0655 | 0.5556 |
| 0.3932 | 41.7143 | 73 | 0.9952 | 0.5556 |
| 0.3932 | 42.8571 | 75 | 0.9986 | 0.5278 |
| 0.3932 | 44.0 | 77 | 1.0175 | 0.5556 |
| 0.3932 | 44.5714 | 78 | 1.0234 | 0.5556 |
| 0.3539 | 45.7143 | 80 | 1.0385 | 0.5556 |
| 0.3539 | 46.8571 | 82 | 1.0191 | 0.5278 |
| 0.3539 | 48.0 | 84 | 1.0151 | 0.5556 |
| 0.3539 | 48.5714 | 85 | 1.0203 | 0.5556 |
| 0.3539 | 49.7143 | 87 | 1.0341 | 0.5556 |
| 0.3539 | 50.8571 | 89 | 1.0720 | 0.5556 |
| 0.3257 | 52.0 | 91 | 1.0951 | 0.5556 |
| 0.3257 | 52.5714 | 92 | 1.0927 | 0.5556 |
| 0.3257 | 53.7143 | 94 | 1.0883 | 0.5556 |
| 0.3257 | 54.8571 | 96 | 1.0874 | 0.5556 |
| 0.3257 | 56.0 | 98 | 1.0883 | 0.5278 |
| 0.3257 | 56.5714 | 99 | 1.0868 | 0.5278 |
| 0.321 | 57.1429 | 100 | 1.0858 | 0.5278 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|
hgnoi/pyeWb24UdYo8FQ06 | hgnoi | 2024-05-22T13:43:12Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T13:41:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tensorplex-labs/Sumo-T9-7B-v0.1 | tensorplex-labs | 2024-05-22T13:43:06Z | 6 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"pretrained",
"7B",
"English",
"base-model",
"bittensor",
"decentralized AI",
"conversational",
"en",
"dataset:tiiuae/falcon-refinedweb",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-15T10:30:52Z | ---
language:
- en
license: mit
library_name: transformers
tags:
- pretrained
- 7B
- English
- text-generation
- base-model
- bittensor
- decentralized AI
datasets:
- tiiuae/falcon-refinedweb
---
# Sumo-T9-7B-v0.1

### Tensorplex Labs Unveils Sumo-T9-7B: Beating Notable 7B Pretrained Models
[Tensorplex Labs](https://tensorplex.ai) is proud to announce that its latest top-performing model on Bittensor Subnet 9, Sumo-T9-7B,
has outperformed notable models such as TII's Falcon-7B and Meta's Llama-2-7b-hf. This achievement highlights the potential of decentralized networks
like Bittensor and underscores Tensorplex Labs' commitment to advancing open-source AI technologies.
"Sumo" represents the family of models developed by Tensorplex, and "T9" designates the top-performing model specifically trained for Bittensor Subnet 9.
Bittensor Subnet 9 serves a unique role within the Bittensor ecosystem by rewarding miners who produce pretrained foundational models on the Falcon Refined Web dataset. This subnet functions as a continuous benchmark, where miners are incentivized to achieve the best performance metrics using a model under the parameter limit. The competitive nature of Subnet 9 drives rapid advancements and refinements in large language model training.
Since the parameter limit was raised to 7 billion on April 19, 2024, Tensorplex Labs has published the top-performing model, surpassing notable models such as Falcon-7B and Llama-2-7B in less than a month.
## Model Details
### Model Description
- **Developed by:** [Tensorplex Labs](https://tensorplex.ai)
- **Model type:** Pretrained Foundational Language Model
- **Language(s) (NLP):** Primarily English
- **License:** MIT
- **Architecture**: Adopted Llama-style architecture with 6.9 billion parameters
- **Training Data**: Trained on the tiiuae/falcon-refinedweb dataset
- **Training Objective**: Causal Language Modeling (next token prediction)
- **Original Model Repo**: [tensorplex-labs/pretraining-sn9-7B-1](https://huggingface.co/tensorplex-labs/pretraining-sn9-7B-1)
Sumo-T9-7B-v0.1 features a large 100k-token vocabulary compatible with the GPT-4 tokenizer, making it versatile across a wide range of natural language processing tasks.
⛔ **This is a pretrained base model, which hasn't been aligned yet. Use with caution or fine-tune it on downstream tasks before deployment.**
### Model Sources
- **Bittensor Subnet9 Leaderboard:** [https://huggingface.co/spaces/RaoFoundation/pretraining-leaderboard](https://huggingface.co/spaces/RaoFoundation/pretraining-leaderboard)
- **Bittensor Subnet9 Repository:** [https://github.com/RaoFoundation/pretraining/tree/main](https://github.com/RaoFoundation/pretraining/tree/main)
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from transformers import AutoTokenizer
import transformers
import torch

model = "tensorplex-labs/Sumo-T9-7B-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model)

# Build a bfloat16 text-generation pipeline around the checkpoint.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
)

# Sample one completion with nucleus sampling.
sequences = pipeline(
    "What is Yokozuna?",
    max_length=256,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
    bos_token_id=tokenizer.bos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```
## Training Details
### Training Data
This model has been trained on the [tiiuae/falcon-refinedweb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) dataset, and training is still ongoing.
## Evaluation
Sumo-T9-7B-v0.1 has outperformed notable models such as TII's Falcon-7B and Meta's Llama-2-7b and Llama-1-7b in zero-shot performance,
establishing itself as the leading model in aggregate across various evaluation tasks.
These benchmarks include ARC Challenge, GSM8K, HellaSwag, MMLU, TruthfulQA, and Winogrande.
| | avg | arc_challenge | gsm8k | hellaswag | mmlu | truthfulqa_mc2 | winogrande |
|:--------------------------------------|-----------:|----------------:|--------:|------------:|-------:|-----------------:|-------------:|
| meta-llama/Meta-Llama-3-8B | 0.6009 | 0.5333 | 0.4913 | 0.7906 | 0.621 | 0.4392 | 0.7301 |
| **tensorplex-labs/Sumo-T9-7B-v0.1** | **0.4769** | 0.4753 | 0.1031 | 0.7666 | 0.4426 | 0.3723 | 0.7017 |
| meta-llama/Llama-2-7b-hf | 0.473 | 0.4625 | 0.1213 | 0.7597 | 0.4123 | 0.3896 | 0.693 |
| huggyllama/llama-7b | 0.4386 | 0.4471 | 0.0849 | 0.7621 | 0.2973 | 0.3408 | 0.6993 |
| tiiuae/falcon-7b | 0.4189 | 0.4343 | 0.0432 | 0.7636 | 0.2582 | 0.3428 | 0.6717 |
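A zero-shot run over the same task suite can be reproduced with EleutherAI's lm-evaluation-harness; a minimal sketch, assuming harness v0.4+ (batch size and dtype are illustrative):
```python
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=tensorplex-labs/Sumo-T9-7B-v0.1,dtype=bfloat16",
    tasks=["arc_challenge", "gsm8k", "hellaswag", "mmlu", "truthfulqa_mc2", "winogrande"],
    num_fewshot=0,
    batch_size=8,
)
print(results["results"])
```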
## Future Plans
Tensorplex Labs will continue pushing the limits of what is possible on Subnet 9, and will also work on fine-tuning state-of-the-art models for Web3 domain-specific use-cases.
One of the most ambitious projects is the development of a new data collection subnet. This will enable open and incentivized contributions of intelligence from a diverse pool of participants. The subnet will function as a collaborative platform where individuals can provide human preference or training data, which will be used to train, fine-tune, and evaluate AI models and miners across various subnets on Bittensor.
## About Tensorplex Labs
Tensorplex Labs is an AI and Web3 startup that is building the decentralized AI of the future. The company’s mission is to decentralize AI, democratize access to data and intelligence, and build a more open, transparent, and equitable future for AI. Tensorplex Labs develops open-source capital and intelligence infrastructure and applications designed to grow decentralized AI, Web3, and crypto ecosystems by making them more capital efficient, intelligent, and trustworthy. The company is currently developing a novel way to better incentivize human input to train AI models, opening up more access to new pools of human contributors with new income opportunities. Founded in 2023 with headquarters in Singapore, Tensorplex Labs’ investors include Canonical Crypto, Collab+Currency, and Digital Currency Group among several others. For more information, visit [Tensorplex](https://tensorplex.ai).
## Model Card Authors
- [email protected]
## Model Card Contact
- [email protected]
|
brendanduke/Llama-2-7B-q4_0-pure.gguf | brendanduke | 2024-05-22T13:43:06Z | 36 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-22T13:37:40Z | ---
license: apache-2.0
---
|
fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-410031 | fine-tuned | 2024-05-22T13:41:56Z | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"custom_code",
"en",
"dataset:fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-410031",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-05-22T13:41:43Z | ---
license: apache-2.0
datasets:
- fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-410031
- allenai/c4
language:
- en
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case:
custom
## How to Use
This sentence-embedding model can be integrated into your NLP pipeline for tasks such as semantic search, text classification, and clustering. Here's a simple example to get you started:
```python
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# trust_remote_code is needed because the base Jina model ships custom modeling code.
model = SentenceTransformer(
    'fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-410031',
    trust_remote_code=True
)
embeddings = model.encode([
    'first text to embed',
    'second text to embed'
])
# Cosine similarity between the two sentence embeddings.
print(cos_sim(embeddings[0], embeddings[1]))
```
|
colorfulniakoil/aaa | colorfulniakoil | 2024-05-22T13:41:53Z | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | 2024-05-22T13:41:53Z | ---
license: apache-2.0
---
|
hgnoi/eIxQ0ZDY7VrTK5yS | hgnoi | 2024-05-22T13:40:28Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T13:38:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
umair894/llama3 | umair894 | 2024-05-22T13:40:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-3-8b-Instruct-bnb-4bit",
"base_model:finetune:unsloth/llama-3-8b-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-05-22T13:40:12Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-3-8b-Instruct-bnb-4bit
---
# Uploaded model
- **Developed by:** umair894
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3-8b-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
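A minimal loading sketch with Unsloth's API (sequence length and the 4-bit flag are illustrative assumptions):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="umair894/llama3",
    max_seq_length=2048,   # assumption; choose to fit your prompts
    load_in_4bit=True,     # matches the 4-bit base model used for training
)
FastLanguageModel.for_inference(model)  # enable Unsloth's faster inference mode
```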
|
Aryan-401/detr-resnet-50-cppe5 | Aryan-401 | 2024-05-22T13:40:17Z | 189 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"detr",
"object-detection",
"autotrain",
"vision",
"dataset:cppe-5",
"endpoints_compatible",
"region:us"
] | object-detection | 2024-05-22T12:56:22Z |
---
tags:
- autotrain
- object-detection
- vision
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- cppe-5
---
# Model Trained Using AutoTrain
- Problem type: Object Detection
## Validation Metrics
- loss: 1.3475022315979004
- map: 0.2746
- map_50: 0.5638
- map_75: 0.2333
- map_small: 0.1345
- map_medium: 0.2275
- map_large: 0.4482
- mar_1: 0.2715
- mar_10: 0.4663
- mar_100: 0.49
- mar_small: 0.1839
- mar_medium: 0.4158
- mar_large: 0.6686
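A minimal inference sketch with the object-detection pipeline (the image URL reuses one of the widget samples above):
```python
from transformers import pipeline

detector = pipeline("object-detection", model="Aryan-401/detr-resnet-50-cppe5")
results = detector("https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg")
for r in results:
    print(r["label"], round(r["score"], 3), r["box"])
```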
|
MahmoudMohamed/Phi3_DPO_4bit | MahmoudMohamed | 2024-05-22T13:40:09Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"dpo",
"conversational",
"en",
"base_model:MahmoudMohamed/Phi3_MeetingQA_4bit",
"base_model:quantized:MahmoudMohamed/Phi3_MeetingQA_4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-22T13:38:40Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
- dpo
base_model: MahmoudMohamed/Phi3_MeetingQA_4bit
---
# Uploaded model
- **Developed by:** MahmoudMohamed
- **License:** apache-2.0
- **Finetuned from model :** MahmoudMohamed/Phi3_MeetingQA_4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
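Given the dpo tag, a hypothetical sketch of the preference-tuning setup with trl; it assumes an older trl API (≤0.8) where `beta` and `tokenizer` are passed to `DPOTrainer` directly, and the dataset name is a placeholder:
```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

base = "MahmoudMohamed/Phi3_MeetingQA_4bit"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# DPO expects (prompt, chosen, rejected) preference triples.
dataset = load_dataset("my-org/preference-data", split="train")  # hypothetical dataset

trainer = DPOTrainer(
    model=model,
    ref_model=None,   # trl builds a frozen reference copy internally
    beta=0.1,         # standard DPO temperature
    args=TrainingArguments(output_dir="phi3-dpo", per_device_train_batch_size=2),
    train_dataset=dataset,
    tokenizer=tokenizer,
)
trainer.train()
```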
|
hgnoi/Xu2M4AupGghoKv2k | hgnoi | 2024-05-22T13:39:54Z | 129 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T13:38:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
moiz1/Mistral-7b-Instruct-v0.2-finetune-summerization-10k-system-prompt-style | moiz1 | 2024-05-22T13:38:03Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T12:19:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
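In the meantime, here is a minimal, untested sketch inferred from the repo tags (`mistral`, `text-generation`, `conversational`); the chat template and the summarization prompt style are assumptions, not documented behavior:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "moiz1/Mistral-7b-Instruct-v0.2-finetune-summerization-10k-system-prompt-style"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Assumes the tokenizer ships a chat template, as Mistral-Instruct derivatives usually do.
messages = [{"role": "user", "content": "Summarize the following text: <your text here>"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```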
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
matthieuzone/MIMOLETTEter | matthieuzone | 2024-05-22T13:37:42Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-22T06:11:28Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/MIMOLETTEter
<Gallery />
## Model description
These are matthieuzone/MIMOLETTEter LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/matthieuzone/MIMOLETTEter/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# A worked example is sketched below.
```
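A minimal sketch using `diffusers`, assuming a CUDA device; the fp16 dtype and the fp16-fix VAE mirror the training notes above but are otherwise assumptions:
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# The card notes training used the madebyollin/sdxl-vae-fp16-fix VAE.
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")

# Attach the DreamBooth LoRA weights from this repo.
pipe.load_lora_weights("matthieuzone/MIMOLETTEter")

# Use the trigger phrase documented above.
image = pipe("a photo of sks cheese").images[0]
image.save("sks_cheese.png")
```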
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
SanskarSharma12/bhagwat | SanskarSharma12 | 2024-05-22T13:36:24Z | 142 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T13:35:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
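In the meantime, a generic starting point inferred from the repo tags (`gpt2`, `text-generation`); the prompt is purely illustrative:
```python
from transformers import pipeline

# Load the checkpoint through the high-level text-generation pipeline.
generator = pipeline("text-generation", model="SanskarSharma12/bhagwat")
print(generator("Chapter 1:", max_new_tokens=60)[0]["generated_text"])
```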
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
IR-Cocktail/bert-large-uncased-mean-v3-msmarco | IR-Cocktail | 2024-05-22T13:35:35Z | 2 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-05-22T07:58:15Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# IR-Cocktail/bert-large-uncased-mean-v3-msmarco
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('IR-Cocktail/bert-large-uncased-mean-v3-msmarco')
embeddings = model.encode(sentences)
print(embeddings)
```
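Since the model targets clustering and semantic search, a natural follow-up is scoring the embeddings with cosine similarity; a short continuation of the snippet above (it reuses the `embeddings` variable):
```python
from sentence_transformers import util

# Cosine similarity between the two example sentences encoded above.
print(util.cos_sim(embeddings[0], embeddings[1]))
```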
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('IR-Cocktail/bert-large-uncased-mean-v3-msmarco')
model = AutoModel.from_pretrained('IR-Cocktail/bert-large-uncased-mean-v3-msmarco')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=IR-Cocktail/bert-large-uncased-mean-v3-msmarco)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 16633 with parameters:
```
{'batch_size': 30, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 10000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"correct_bias": false,
"eps": 1e-06,
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
dantepalacio/llama3-8b-instruct-150examples | dantepalacio | 2024-05-22T13:35:22Z | 77 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-05-22T13:32:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
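In the meantime, a minimal, untested sketch inferred from the repo tags (`llama`, `conversational`, 4-bit `bitsandbytes`); the quantization settings below are assumptions:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "dantepalacio/llama3-8b-instruct-150examples"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16),
    device_map="auto",
)

# Assumes the tokenizer ships a chat template, as Llama-3-Instruct derivatives usually do.
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```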
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ninagroot/Baby-Llama-58M-RUN3_5 | ninagroot | 2024-05-22T13:33:06Z | 139 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-04-29T09:29:47Z | ---
tags:
- generated_from_trainer
model-index:
- name: Baby-Llama-58M-RUN3_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baby-Llama-58M-RUN3_5
This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2656
## Model description
More information needed
## Intended uses & limitations
More information needed
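No usage snippet is provided; as a minimal sketch, the checkpoint can presumably be loaded like any causal LM, assuming the repo bundles a tokenizer:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ninagroot/Baby-Llama-58M-RUN3_5"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo)

inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```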
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 50
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 287.9659 | 1.0 | 12 | 256.0041 |
| 230.7873 | 2.0 | 24 | 212.6014 |
| 207.1002 | 3.0 | 36 | 180.9384 |
| 121.5561 | 4.0 | 48 | 107.3193 |
| 81.2108 | 5.0 | 60 | 71.6529 |
| 45.9781 | 6.0 | 72 | 40.4501 |
| 24.5986 | 7.0 | 84 | 22.4212 |
| 15.2205 | 8.0 | 96 | 13.7469 |
| 10.1247 | 9.0 | 108 | 9.8119 |
| 7.975 | 10.0 | 120 | 7.8583 |
| 6.7087 | 11.0 | 132 | 7.0360 |
| 6.1988 | 12.0 | 144 | 6.4104 |
| 5.6752 | 13.0 | 156 | 6.1222 |
| 5.5155 | 14.0 | 168 | 5.8179 |
| 4.7754 | 15.0 | 180 | 5.5676 |
| 4.816 | 16.0 | 192 | 5.4583 |
| 4.817 | 17.0 | 204 | 5.3641 |
| 4.6966 | 18.0 | 216 | 5.3147 |
| 4.8322 | 19.0 | 228 | 5.2867 |
| 4.4875 | 20.0 | 240 | 5.2656 |
### Framework versions
- Transformers 4.39.1
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
|
matonier/bloomz-560-m-peft-method | matonier | 2024-05-22T13:30:39Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-22T13:30:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
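In the meantime, a minimal, untested sketch for attaching the adapter with `peft`; the `bigscience/bloomz-560m` base model is inferred from the repo name and is an assumption:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model inferred from the repo name; verify before relying on it.
base = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")
model = PeftModel.from_pretrained(base, "matonier/bloomz-560-m-peft-method")

inputs = tokenizer("Translate to English: Bonjour", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```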
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Tshr2663/Scenary | Tshr2663 | 2024-05-22T13:29:27Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-05-22T13:29:27Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: a hotel
parameters:
negative_prompt: lamp
output:
url: images/00000-1187911551.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: apache-2.0
---
# Scenery
<Gallery />
## Model description
This model is an experimental fine-tune of a Stable Diffusion model for generating custom images.
## Download model
[Download](/Tshr2663/Scenary/tree/main) them in the Files & versions tab.
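## How to use
No usage section is provided; a minimal sketch that loads the LoRA onto its SDXL base with `diffusers`, with the prompt and negative prompt taken from the widget example above:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("Tshr2663/Scenary")

image = pipe("a hotel", negative_prompt="lamp").images[0]
image.save("hotel.png")
```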
|
Niggendar/waiREALMIX_v50 | Niggendar | 2024-05-22T13:26:39Z | 129 | 1 | diffusers | [
"diffusers",
"safetensors",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-05-22T13:17:22Z | ---
library_name: diffusers
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🧨 diffusers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
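In the meantime, a generic starting point based on the pipeline tag (`StableDiffusionXLPipeline`); the prompt is purely illustrative:
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the checkpoint directly as an SDXL pipeline, per the repo's pipeline tag.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "Niggendar/waiREALMIX_v50", torch_dtype=torch.float16
).to("cuda")

image = pipe("a scenic landscape, highly detailed").images[0]
image.save("sample.png")
```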
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Likich/llama3-finetune-qualcoding_500_prompt1 | Likich | 2024-05-22T13:26:18Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-05-22T13:15:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
gaianet/Phi-3-medium-128k-instruct-GGUF | gaianet | 2024-05-22T13:22:42Z | 88 | 0 | transformers | [
"transformers",
"gguf",
"phi3",
"text-generation",
"nlp",
"code",
"custom_code",
"multilingual",
"base_model:microsoft/Phi-3-medium-128k-instruct",
"base_model:quantized:microsoft/Phi-3-medium-128k-instruct",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | 2024-05-22T13:02:15Z | ---
base_model: microsoft/Phi-3-medium-128k-instruct
license: mit
license_link: https://huggingface.co/microsoft/Phi-3-medium-128k-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: text-generation
model_creator: Microsoft
model_name: Phi 3 medium 128k instruct
model_type: phi-msft
quantized_by: Second State Inc.
tags:
- nlp
- code
---

# Phi-3-medium-128k-instruct-GGUF
## Original Model
[microsoft/Phi-3-medium-128k-instruct](https://huggingface.co/microsoft/Phi-3-medium-128k-instruct)
## Run with GaiaNet
**Prompt template**
prompt template: `phi-3-chat`
**Context size**
chat_ctx_size: `5120`
**Run with GaiaNet**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
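Outside of a GaiaNet node, the quantized files below can also be sanity-checked locally; a minimal sketch with `llama-cpp-python`, assuming the Q4_K_M file has been downloaded and that your build reads the chat template from the GGUF metadata:
```python
from llama_cpp import Llama

# Context size matches the chat_ctx_size recommended above.
llm = Llama(model_path="Phi-3-medium-128k-instruct-Q4_K_M.gguf", n_ctx=5120)

# Recent llama-cpp-python builds pick up the phi-3 chat template from the GGUF metadata.
out = llm.create_chat_completion(messages=[{"role": "user", "content": "Hello!"}])
print(out["choices"][0]["message"]["content"])
```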
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [Phi-3-medium-128k-instruct-Q2_K.gguf](https://huggingface.co/gaianet/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q2_K.gguf) | Q2_K | 2 | 5.14 GB| smallest, significant quality loss - not recommended for most purposes |
| [Phi-3-medium-128k-instruct-Q3_K_L.gguf](https://huggingface.co/gaianet/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q3_K_L.gguf) | Q3_K_L | 3 | 7.49 GB| small, substantial quality loss |
| [Phi-3-medium-128k-instruct-Q3_K_M.gguf](https://huggingface.co/gaianet/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q3_K_M.gguf) | Q3_K_M | 3 | 6.92 GB| very small, high quality loss |
| [Phi-3-medium-128k-instruct-Q3_K_S.gguf](https://huggingface.co/gaianet/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q3_K_S.gguf) | Q3_K_S | 3 | 6.06 GB| very small, high quality loss |
| [Phi-3-medium-128k-instruct-Q4_0.gguf](https://huggingface.co/gaianet/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q4_0.gguf) | Q4_0 | 4 | 7.9 GB| legacy; small, very high quality loss - prefer using Q3_K_M |
| [Phi-3-medium-128k-instruct-Q4_K_M.gguf](https://huggingface.co/gaianet/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q4_K_M.gguf) | Q4_K_M | 4 | 8.57 GB| medium, balanced quality - recommended |
| [Phi-3-medium-128k-instruct-Q4_K_S.gguf](https://huggingface.co/gaianet/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q4_K_S.gguf) | Q4_K_S | 4 | 7.95 GB| small, greater quality loss |
| [Phi-3-medium-128k-instruct-Q5_0.gguf](https://huggingface.co/gaianet/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q5_0.gguf) | Q5_0 | 5 | 9.62 GB| legacy; medium, balanced quality - prefer using Q4_K_M |
| [Phi-3-medium-128k-instruct-Q5_K_M.gguf](https://huggingface.co/gaianet/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q5_K_M.gguf) | Q5_K_M | 5 | 10.1 GB| large, very low quality loss - recommended |
| [Phi-3-medium-128k-instruct-Q5_K_S.gguf](https://huggingface.co/gaianet/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q5_K_S.gguf) | Q5_K_S | 5 | 9.62 GB| large, low quality loss - recommended |
| [Phi-3-medium-128k-instruct-Q6_K.gguf](https://huggingface.co/gaianet/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q6_K.gguf) | Q6_K | 6 | 11.5 GB| very large, extremely low quality loss |
| [Phi-3-medium-128k-instruct-Q8_0.gguf](https://huggingface.co/gaianet/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-Q8_0.gguf) | Q8_0 | 8 | 14.8 GB| very large, extremely low quality loss - not recommended |
| [Phi-3-medium-128k-instruct-f16.gguf](https://huggingface.co/gaianet/Phi-3-medium-128k-instruct-GGUF/blob/main/Phi-3-medium-128k-instruct-f16.gguf) | f16 | 16 | 27.9 GB| |
*Quantized with llama.cpp b2961.*
|
paulh27/iwslt_aligned_smallT5_cont0 | paulh27 | 2024-05-22T13:21:30Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"de",
"en",
"dataset:paulh27/alignment_iwslt2017_de_en",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-06T04:21:13Z | ---
language:
- de
- en
license: apache-2.0
base_model: google/mt5-small
tags:
- generated_from_trainer
datasets:
- paulh27/alignment_iwslt2017_de_en
metrics:
- bleu
model-index:
- name: iwslt_aligned_smallT5_cont0
results:
- task:
name: Translation
type: translation
dataset:
name: paulh27/alignment_iwslt2017_de_en
type: paulh27/alignment_iwslt2017_de_en
metrics:
- name: Bleu
type: bleu
value: 65.6358
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# iwslt_aligned_smallT5_cont0
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the paulh27/alignment_iwslt2017_de_en dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5612
- Bleu: 65.6358
- Gen Len: 28.7691
## Model description
More information needed
## Intended uses & limitations
More information needed
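No usage snippet is provided; a minimal, untested sketch with `transformers`. The card does not document the translation direction or any task prefix, so plain German input is an assumption:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo = "paulh27/iwslt_aligned_smallT5_cont0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("Guten Morgen, wie geht es dir?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```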
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adafactor
- lr_scheduler_type: constant
- training_steps: 500000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|
| 1.2426 | 0.78 | 10000 | 0.8300 | 46.2793 | 28.6532 |
| 0.9931 | 1.55 | 20000 | 0.6756 | 52.2709 | 28.6441 |
| 0.8573 | 2.33 | 30000 | 0.6143 | 55.8294 | 28.5405 |
| 0.762 | 3.11 | 40000 | 0.5811 | 57.5135 | 28.366 |
| 0.734 | 3.88 | 50000 | 0.5499 | 58.6125 | 28.5101 |
| 0.6722 | 4.66 | 60000 | 0.5228 | 59.6427 | 28.8356 |
| 0.6215 | 5.43 | 70000 | 0.5161 | 60.4701 | 28.7534 |
| 0.5756 | 6.21 | 80000 | 0.5068 | 62.0864 | 28.6498 |
| 0.5738 | 6.99 | 90000 | 0.5005 | 61.9714 | 28.5788 |
| 0.5384 | 7.76 | 100000 | 0.4909 | 62.407 | 28.5282 |
| 0.5109 | 8.54 | 110000 | 0.4902 | 62.1452 | 28.4617 |
| 0.4816 | 9.32 | 120000 | 0.4875 | 62.6499 | 28.5518 |
| 0.4493 | 10.09 | 130000 | 0.4867 | 62.6694 | 28.6993 |
| 0.4648 | 10.87 | 140000 | 0.4775 | 63.3179 | 28.5495 |
| 0.4414 | 11.64 | 150000 | 0.4787 | 63.6928 | 28.4673 |
| 0.4158 | 12.42 | 160000 | 0.4792 | 63.8752 | 28.5011 |
| 0.3895 | 13.2 | 170000 | 0.4794 | 63.8429 | 28.6498 |
| 0.4031 | 13.97 | 180000 | 0.4757 | 63.9496 | 28.7264 |
| 0.3844 | 14.75 | 190000 | 0.4855 | 63.7498 | 28.8288 |
| 0.3637 | 15.53 | 200000 | 0.4800 | 64.2277 | 28.661 |
| 0.3473 | 16.3 | 210000 | 0.4854 | 64.4683 | 28.786 |
| 0.3243 | 17.08 | 220000 | 0.4903 | 64.7805 | 28.6791 |
| 0.3426 | 17.85 | 230000 | 0.4819 | 64.679 | 28.4809 |
| 0.3295 | 18.63 | 240000 | 0.4852 | 65.3735 | 28.6014 |
| 0.3124 | 19.41 | 250000 | 0.4947 | 64.5641 | 28.6745 |
| 0.2933 | 20.18 | 260000 | 0.4972 | 65.1364 | 28.6419 |
| 0.3101 | 20.96 | 270000 | 0.4902 | 64.6747 | 28.6802 |
| 0.2991 | 21.74 | 280000 | 0.4907 | 64.9732 | 28.5653 |
| 0.2828 | 22.51 | 290000 | 0.5038 | 64.7552 | 28.6261 |
| 0.2688 | 23.29 | 300000 | 0.5042 | 65.0702 | 28.7534 |
| 0.2555 | 24.06 | 310000 | 0.5101 | 65.0378 | 29.089 |
| 0.2692 | 24.84 | 320000 | 0.5022 | 64.9991 | 28.6937 |
| 0.2593 | 25.62 | 330000 | 0.5085 | 65.2478 | 28.6137 |
| 0.2439 | 26.39 | 340000 | 0.5152 | 64.863 | 28.6464 |
| 0.2327 | 27.17 | 350000 | 0.5165 | 65.0748 | 28.7286 |
| 0.249 | 27.95 | 360000 | 0.5116 | 64.7249 | 28.6137 |
| 0.238 | 28.72 | 370000 | 0.5202 | 64.7651 | 28.5968 |
| 0.2297 | 29.5 | 380000 | 0.5243 | 65.3334 | 28.7005 |
| 0.2152 | 30.27 | 390000 | 0.5336 | 64.9364 | 28.6081 |
| 0.2106 | 31.05 | 400000 | 0.5408 | 65.117 | 28.6745 |
| 0.2234 | 31.83 | 410000 | 0.5249 | 64.8926 | 28.6318 |
| 0.2085 | 32.6 | 420000 | 0.5306 | 65.5715 | 28.7984 |
| 0.2018 | 33.38 | 430000 | 0.5429 | 64.9154 | 28.6351 |
| 0.1885 | 34.16 | 440000 | 0.5453 | 65.0538 | 28.8525 |
| 0.2049 | 34.93 | 450000 | 0.5434 | 65.2857 | 28.7207 |
| 0.1957 | 35.71 | 460000 | 0.5491 | 65.3436 | 28.714 |
| 0.1867 | 36.49 | 470000 | 0.5536 | 65.4934 | 28.7939 |
| 0.1765 | 37.26 | 480000 | 0.5583 | 65.5595 | 28.8255 |
| 0.1786 | 38.04 | 490000 | 0.5612 | 65.6358 | 28.7691 |
| 0.1809 | 38.81 | 500000 | 0.5573 | 65.0266 | 28.7455 |
### Framework versions
- Transformers 4.39.3
- Pytorch 2.2.2+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
matthieuzone/FROMAGE_FRAISter | matthieuzone | 2024-05-22T13:21:17Z | 2 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-05-22T06:10:26Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks cheese
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - matthieuzone/FROMAGE_FRAISter
<Gallery />
## Model description
These are matthieuzone/FROMAGE_FRAISter LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of sks cheese` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/matthieuzone/FROMAGE_FRAISter/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# A worked example is sketched below.
```
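As with the other adapters in this series, a minimal sketch using `diffusers`, assuming a CUDA device and fp16 weights:
```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("matthieuzone/FROMAGE_FRAISter")

image = pipe("a photo of sks cheese").images[0]  # trigger phrase documented above
image.save("sks_cheese.png")
```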
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
harveybro/molt5-augmented-default-300-small-caption2smiles | harveybro | 2024-05-22T13:19:25Z | 110 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-05-21T08:02:10Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
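In the meantime, a minimal, untested sketch inferred from the repo name (`caption2smiles` suggests caption-to-SMILES generation, as in the MolT5 family), assuming the repo ships the tokenizer files; the example caption is illustrative:
```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

repo = "harveybro/molt5-augmented-default-300-small-caption2smiles"
tokenizer = T5Tokenizer.from_pretrained(repo)
model = T5ForConditionalGeneration.from_pretrained(repo)

caption = "The molecule is a colorless volatile liquid that is used as a solvent."
inputs = tokenizer(caption, return_tensors="pt")
outputs = model.generate(**inputs, num_beams=5, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```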
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ayoubkirouane/llava-phi3-instruct-Lora | ayoubkirouane | 2024-05-22T13:16:23Z | 3 | 0 | peft | [
"peft",
"safetensors",
"image-text-to-text",
"conversational",
"en",
"dataset:ayoubkirouane/llava-instruct-small",
"region:us"
] | image-text-to-text | 2024-05-22T12:59:44Z | ---
datasets:
- ayoubkirouane/llava-instruct-small
library_name: peft
pipeline_tag: image-text-to-text
language:
- en
---
## Base model:
- xtuner/llava-phi-3-mini-hf
## Dataset:
- ayoubkirouane/llava-instruct-small
## Get started:
```python
from transformers import LlavaForConditionalGeneration  # the "llava" architecture is not registered for AutoModelForCausalLM
from peft import PeftModel

# Load the base multimodal model, then attach the LoRA adapter from this repo.
base_model = LlavaForConditionalGeneration.from_pretrained("xtuner/llava-phi-3-mini-hf")
peft_model_id = "ayoubkirouane/llava-phi3-instruct-Lora"
model = PeftModel.from_pretrained(base_model, peft_model_id)
``` |
MohamedAcadys/PointConImageModelV1-4-V2 | MohamedAcadys | 2024-05-22T13:13:34Z | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"diffusers-training",
"base_model:CompVis/stable-diffusion-v1-4",
"base_model:finetune:CompVis/stable-diffusion-v1-4",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-05-22T12:37:26Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- diffusers-training
base_model: CompVis/stable-diffusion-v1-4
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text-to-image finetuning - MohamedAcadys/PointConImageModelV1-4-V2
This pipeline was fine-tuned from **CompVis/stable-diffusion-v1-4** on the **Acadys/PointConImagesV2** dataset. Below is an example image generated with the fine-tuned pipeline from the prompt "Un patron en costume donne un dossier à un employé":

## Pipeline usage
You can use the pipeline like so:
```python
from diffusers import DiffusionPipeline
import torch
pipeline = DiffusionPipeline.from_pretrained("MohamedAcadys/PointConImageModelV1-4-V2", torch_dtype=torch.float16)
pipeline = pipeline.to("cuda")  # fp16 weights need a CUDA device to run correctly
prompt = "Un patron en costume donne un dossier à un employé"
image = pipeline(prompt).images[0]
image.save("my_image.png")
```
## Training info
These are the key hyperparameters used during training:
* Epochs: 45
* Learning rate: 0.0001
* Batch size: 4
* Gradient accumulation steps: 8
* Image resolution: 512
* Mixed-precision: fp16
More information on all the CLI arguments and the environment is available on your [`wandb` run page](https://wandb.ai/acadys-sadadou/text2image-fine-tune/runs/75ycsmt7).
## Intended uses & limitations
#### How to use
```python
# Same usage as the pipeline example above
from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained("MohamedAcadys/PointConImageModelV1-4-V2", torch_dtype=torch.float16).to("cuda")
image = pipeline("Un patron en costume donne un dossier à un employé").images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
The model was fine-tuned on the **Acadys/PointConImagesV2** dataset, as noted in the summary above. |
giantdev/dippy-soDBy-sn11m4 | giantdev | 2024-05-22T13:08:20Z | 126 | 0 | transformers | [
"transformers",
"safetensors",
"stablelm",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-05-22T13:06:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
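Pending the authors' official snippet, a minimal hedged sketch — based only on the repo tags (`transformers`, `stablelm`, `text-generation`), with the prompt text being an illustrative placeholder — might look like:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "giantdev/dippy-soDBy-sn11m4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Generate a short continuation from a placeholder prompt
inputs = tokenizer("Hello, world!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```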
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
crisp-im/meta-llama3-70b-instruct-ts | crisp-im | 2024-05-22T13:07:36Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-05-22T08:55:02Z | ---
license: other
license_name: llama3
license_link: https://llama.meta.com/llama3/license/
---
|
BilalMuftuoglu/deit-base-distilled-patch16-224-85-fold5 | BilalMuftuoglu | 2024-05-22T13:04:44Z | 35 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"deit",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:facebook/deit-base-distilled-patch16-224",
"base_model:finetune:facebook/deit-base-distilled-patch16-224",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-05-22T12:44:53Z | ---
license: apache-2.0
base_model: facebook/deit-base-distilled-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: deit-base-distilled-patch16-224-85-fold5
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9090909090909091
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deit-base-distilled-patch16-224-85-fold5
This model is a fine-tuned version of [facebook/deit-base-distilled-patch16-224](https://huggingface.co/facebook/deit-base-distilled-patch16-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3071
- Accuracy: 0.9091
## Model description
More information needed
## Intended uses & limitations
More information needed
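While the card leaves this section empty, a hedged sketch of inference (assuming the checkpoint works with the standard 🤗 `image-classification` pipeline, as its tags suggest; `example.jpg` is a hypothetical input) could look like:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint for image classification
classifier = pipeline("image-classification", model="BilalMuftuoglu/deit-base-distilled-patch16-224-85-fold5")
print(classifier("example.jpg"))
```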
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 100
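As a hedged reconstruction — the argument names below are standard 🤗 `TrainingArguments` fields, but the surrounding training script is an assumption, not taken from the original run:

```python
from transformers import TrainingArguments

# Hypothetical mapping of the listed settings onto TrainingArguments.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 match the library defaults.
training_args = TrainingArguments(
    output_dir="deit-base-distilled-patch16-224-85-fold5",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,   # 32 * 4 = 128 total train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=100,
)
```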
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 2 | 1.1685 | 0.3182 |
| No log | 2.0 | 4 | 0.7295 | 0.4545 |
| No log | 3.0 | 6 | 0.6641 | 0.7045 |
| No log | 4.0 | 8 | 0.7703 | 0.7045 |
| 0.7885 | 5.0 | 10 | 0.6750 | 0.7045 |
| 0.7885 | 6.0 | 12 | 0.6446 | 0.7045 |
| 0.7885 | 7.0 | 14 | 0.6919 | 0.7045 |
| 0.7885 | 8.0 | 16 | 0.6489 | 0.7045 |
| 0.7885 | 9.0 | 18 | 0.5245 | 0.7273 |
| 0.4488 | 10.0 | 20 | 0.8494 | 0.7045 |
| 0.4488 | 11.0 | 22 | 0.9086 | 0.6818 |
| 0.4488 | 12.0 | 24 | 0.5250 | 0.75 |
| 0.4488 | 13.0 | 26 | 0.5179 | 0.7727 |
| 0.4488 | 14.0 | 28 | 0.4423 | 0.7727 |
| 0.3387 | 15.0 | 30 | 0.5114 | 0.7273 |
| 0.3387 | 16.0 | 32 | 0.5048 | 0.75 |
| 0.3387 | 17.0 | 34 | 0.4997 | 0.7045 |
| 0.3387 | 18.0 | 36 | 0.4776 | 0.7045 |
| 0.3387 | 19.0 | 38 | 0.4138 | 0.7955 |
| 0.24 | 20.0 | 40 | 0.3220 | 0.8864 |
| 0.24 | 21.0 | 42 | 0.3363 | 0.8409 |
| 0.24 | 22.0 | 44 | 0.2958 | 0.8636 |
| 0.24 | 23.0 | 46 | 0.3098 | 0.8636 |
| 0.24 | 24.0 | 48 | 0.4030 | 0.8636 |
| 0.1524 | 25.0 | 50 | 0.3094 | 0.8636 |
| 0.1524 | 26.0 | 52 | 0.2721 | 0.8864 |
| 0.1524 | 27.0 | 54 | 0.3363 | 0.8636 |
| 0.1524 | 28.0 | 56 | 0.2731 | 0.8636 |
| 0.1524 | 29.0 | 58 | 0.5660 | 0.7955 |
| 0.1646 | 30.0 | 60 | 0.4949 | 0.8409 |
| 0.1646 | 31.0 | 62 | 0.4087 | 0.7727 |
| 0.1646 | 32.0 | 64 | 0.4467 | 0.8409 |
| 0.1646 | 33.0 | 66 | 0.4130 | 0.8182 |
| 0.1646 | 34.0 | 68 | 0.3727 | 0.8409 |
| 0.136 | 35.0 | 70 | 0.5894 | 0.7727 |
| 0.136 | 36.0 | 72 | 0.9462 | 0.75 |
| 0.136 | 37.0 | 74 | 0.5926 | 0.7273 |
| 0.136 | 38.0 | 76 | 0.3138 | 0.8864 |
| 0.136 | 39.0 | 78 | 0.4173 | 0.8864 |
| 0.163 | 40.0 | 80 | 0.3154 | 0.8636 |
| 0.163 | 41.0 | 82 | 0.3235 | 0.8636 |
| 0.163 | 42.0 | 84 | 0.3902 | 0.8182 |
| 0.163 | 43.0 | 86 | 0.3699 | 0.7955 |
| 0.163 | 44.0 | 88 | 0.4311 | 0.8182 |
| 0.1018 | 45.0 | 90 | 0.3071 | 0.9091 |
| 0.1018 | 46.0 | 92 | 0.2849 | 0.9091 |
| 0.1018 | 47.0 | 94 | 0.3226 | 0.8409 |
| 0.1018 | 48.0 | 96 | 0.2967 | 0.8409 |
| 0.1018 | 49.0 | 98 | 0.2936 | 0.8636 |
| 0.0957 | 50.0 | 100 | 0.2737 | 0.8864 |
| 0.0957 | 51.0 | 102 | 0.2845 | 0.8864 |
| 0.0957 | 52.0 | 104 | 0.3300 | 0.8409 |
| 0.0957 | 53.0 | 106 | 0.4029 | 0.8409 |
| 0.0957 | 54.0 | 108 | 0.4279 | 0.8182 |
| 0.1036 | 55.0 | 110 | 0.3900 | 0.8182 |
| 0.1036 | 56.0 | 112 | 0.4038 | 0.8636 |
| 0.1036 | 57.0 | 114 | 0.3569 | 0.8409 |
| 0.1036 | 58.0 | 116 | 0.5611 | 0.8182 |
| 0.1036 | 59.0 | 118 | 0.6900 | 0.8182 |
| 0.1048 | 60.0 | 120 | 0.5679 | 0.8182 |
| 0.1048 | 61.0 | 122 | 0.4567 | 0.8182 |
| 0.1048 | 62.0 | 124 | 0.3815 | 0.7955 |
| 0.1048 | 63.0 | 126 | 0.3546 | 0.7955 |
| 0.1048 | 64.0 | 128 | 0.3654 | 0.7955 |
| 0.0928 | 65.0 | 130 | 0.3337 | 0.8864 |
| 0.0928 | 66.0 | 132 | 0.4161 | 0.8409 |
| 0.0928 | 67.0 | 134 | 0.3615 | 0.8409 |
| 0.0928 | 68.0 | 136 | 0.4061 | 0.8182 |
| 0.0928 | 69.0 | 138 | 0.4191 | 0.8182 |
| 0.1091 | 70.0 | 140 | 0.3978 | 0.7955 |
| 0.1091 | 71.0 | 142 | 0.5168 | 0.75 |
| 0.1091 | 72.0 | 144 | 0.5268 | 0.75 |
| 0.1091 | 73.0 | 146 | 0.5667 | 0.7955 |
| 0.1091 | 74.0 | 148 | 0.5396 | 0.7727 |
| 0.1009 | 75.0 | 150 | 0.4807 | 0.75 |
| 0.1009 | 76.0 | 152 | 0.3957 | 0.8182 |
| 0.1009 | 77.0 | 154 | 0.3519 | 0.8636 |
| 0.1009 | 78.0 | 156 | 0.3654 | 0.8636 |
| 0.1009 | 79.0 | 158 | 0.3577 | 0.8409 |
| 0.0836 | 80.0 | 160 | 0.3216 | 0.8636 |
| 0.0836 | 81.0 | 162 | 0.3132 | 0.8409 |
| 0.0836 | 82.0 | 164 | 0.3003 | 0.8636 |
| 0.0836 | 83.0 | 166 | 0.3024 | 0.8636 |
| 0.0836 | 84.0 | 168 | 0.3214 | 0.8409 |
| 0.0928 | 85.0 | 170 | 0.3306 | 0.8182 |
| 0.0928 | 86.0 | 172 | 0.3284 | 0.8409 |
| 0.0928 | 87.0 | 174 | 0.3272 | 0.8182 |
| 0.0928 | 88.0 | 176 | 0.3261 | 0.8182 |
| 0.0928 | 89.0 | 178 | 0.3099 | 0.8409 |
| 0.0915 | 90.0 | 180 | 0.2928 | 0.8409 |
| 0.0915 | 91.0 | 182 | 0.2848 | 0.8409 |
| 0.0915 | 92.0 | 184 | 0.2827 | 0.8409 |
| 0.0915 | 93.0 | 186 | 0.2885 | 0.8636 |
| 0.0915 | 94.0 | 188 | 0.3084 | 0.8864 |
| 0.0775 | 95.0 | 190 | 0.3321 | 0.8409 |
| 0.0775 | 96.0 | 192 | 0.3358 | 0.8636 |
| 0.0775 | 97.0 | 194 | 0.3221 | 0.8409 |
| 0.0775 | 98.0 | 196 | 0.3096 | 0.8636 |
| 0.0775 | 99.0 | 198 | 0.3007 | 0.8864 |
| 0.091 | 100.0 | 200 | 0.2976 | 0.8864 |
### Framework versions
- Transformers 4.41.0
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
|