| modelId (string, 5 to 139 chars) | author (string, 2 to 42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-04 12:29:36) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 468 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-04 12:29:27) | card (string, 11 to 1.01M chars) |
|---|---|---|---|---|---|---|---|---|---|
Benevolent/AnimeFeetV4 | Benevolent | 2024-03-10T09:23:18Z | 6 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | 2024-03-10T09:13:31Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "UNICODE\0\0 \0s\0c\0o\0r\0e\0_\09\0,\0 \0s\0c\0o\0r\0e\0_\08\0_\0u\0p\0,\0 \0s\0c\0o\0r\0e\0_\07\0_\0u\0p\0,\0 \0s\0c\0o\0r\0e\0_\06\0_\0u\0p\0,\0s\0c\0o\0r\0e\0_\09\0,\0 \0s\0c\0o\0r\0e\0_\08\0_\0u\0p\0,\0 \0s\0c\0o\0r\0e\0_\07\0_\0u\0p\0,\0 \0s\0c\0o\0r\0e\0_\06\0_\0u\0p\0,\0 \0s\0c\0o\0r\0e\0_\05\0_\0u\0p\0,\0 \0s\0c\0o\0r\0e\0_\04\0_\0u\0p\0,\0 \01\0b\0o\0y\0,\0 \0n\0s\0f\0w\0,\0 \0e\0x\0p\0-\0f\0,\0 \0b\0l\0a\0c\0k\0 \0l\0e\0g\0w\0e\0a\0r\0:\01\0.\02\0,\0 \0p\0e\0n\0i\0s\0,\0 \0c\0u\0m\0,\0 \0f\0e\0e\0t\0,\0 \0b\0e\0a\0u\0t\0i\0f\0u\0l\0 \0f\0a\0c\0e\0,\0 \0t\0e\0x\0t\0i\0l\0e\0 \0s\0h\0a\0d\0i\0n\0g\0,\0 \0s\0e\0c\0r\0e\0t\0a\0r\0y\0:\01\0.\04\0,\0 \0g\0l\0a\0s\0s\0e\0s\0:\01\0.\02\0,\0 \01\0g\0i\0r\0l\0:\01\0.\03\0,\0 \0e\0m\0b\0a\0r\0r\0a\0s\0s\0e\0d\0:\01\0.\02\0,\0 \0o\0p\0e\0n\0 \0m\0o\0u\0t\0h\0,\0 \0b\0l\0a\0c\0k\0 \0h\0a\0i\0r\0,\0 \0m\0e\0d\0i\0u\0m\0 \0h\0a\0i\0r\0,\0 \0s\0t\0r\0a\0i\0g\0h\0t\0 \0h\0a\0i\0r\0,\0 \0b\0l\0u\0e\0 \0e\0y\0e\0s\0,\0 \0s\0w\0e\0a\0t\0:\01\0.\03\0,\0 \0f\0o\0o\0t\0 \0f\0o\0c\0u\0s\0,\0 \0p\0e\0n\0i\0s\0,\0 \0s\0o\0l\0e\0s\0,\0 \0b\0l\0a\0c\0k\0 \0l\0e\0g\0w\0e\0a\0r\0:\01\0.\03\0,\0 \0f\0r\0o\0m\0 \0b\0e\0h\0i\0n\0d\0:\01\0.\03\0,\0 \0s\0i\0t\0t\0i\0n\0g\0 \0c\0h\0a\0i\0r\0,\0 \0o\0f\0f\0i\0c\0e\0,\0 \0l\0e\0g\0s\0 \0t\0o\0g\0e\0t\0h\0e\0r\0,\0 \0p\0o\0v\0"
output:
url: >-
images/364FB424972FA4DD1FA75D91CE2D59DF6F1E500CBFF83456FBC2B482D850F943.jpeg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: FeetV4
license: mit
---
# AnimeFeet
<Gallery />
## Trigger words
You should use `FeetV4` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Benevolent/AnimeFeetV4/tree/main) them in the Files & versions tab.
|
Benevolent/HutaoV2 | Benevolent | 2024-03-10T09:16:08Z | 34 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:mit",
"region:us"
] | text-to-image | 2024-03-10T09:10:14Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
1girl, hu tao (genshin impact), hat, flower on hat,brown hair, twintails,
smile, dress, long hair, off-shoulder dress, ahoge, looking at viewer, red
dress, bare shoulders, grin, red eyes, flower-shaped pupils, blush, bangs,
breasts, choker, collarbone, solo, nail polish, black nails, ribbon choker,
bow, hair between eyes, hair ornament, sidelocks, cleavage, wrist ribbon,
very long hair, teeth, cowboy shot <lora:hutao_v2:0.7>
parameters:
negative_prompt: >-
nsfw, (worst quality, low quality:1.4), (lip, nose, tooth, rouge,
lipstick, eyeshadow:1.4), (blush:1.2), (jpeg artifacts:1.4), (depth of
field, bokeh, blurry, film grain, chromatic aberration, lens flare:1.0),
(1boy, abs, muscular, rib:1.0), greyscale, monochrome, dusty sunbeams,
trembling, motion lines, motion blur, emphasis lines, text, title, logo,
signature, bad-artist-anime, bad_artist, bad_hands, bad_prompt,
bad_prompt_version2, easynegative
output:
url: images/385577.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: hu tao (genshin impact)
license: mit
---
# HuTaoXL
<Gallery />
## Trigger words
You should use `hu tao (genshin impact)` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Benevolent/HutaoV2/tree/main) them in the Files & versions tab.
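The card ships no usage code, so the following is only a minimal sketch of how an SDXL LoRA like this one is typically loaded with diffusers. The repo id, base model, and trigger words come from the metadata above; the file layout, prompt, and dtype are assumptions.
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Minimal sketch (assumption, not the author's documented usage).
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
# If the LoRA file uses a non-default name, pass it via weight_name="...".
pipe.load_lora_weights("Benevolent/HutaoV2")

image = pipe("hu tao (genshin impact), hat, red eyes, smile, looking at viewer").images[0]
image.save("hutao.png")
```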
|
Owhslp/nous_researcher_tuning_2_15 | Owhslp | 2024-03-10T09:11:23Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-09T22:53:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
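Until the author fills this section in, here is a minimal loading sketch based only on the repository tags (`gemma`, `text-generation`); treat it as an assumption rather than the intended usage.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical quick start inferred from the repo tags, not from the author.
model_id = "Owhslp/nous_researcher_tuning_2_15"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```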
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
afaji/fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa | afaji | 2024-03-10T09:10:56Z | 90 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-03-10T09:10:19Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 11.4170
- Accuracy: 0.5404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 321
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
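The training script itself is not part of this card; as an illustration only, the listed values map onto 🤗 `TrainingArguments` roughly as follows (the `output_dir` is hypothetical).
```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters listed above; not the author's script.
# Adam betas/epsilon match the TrainingArguments defaults, so they are omitted.
training_args = TrainingArguments(
    output_dir="fresh-2-layer-medmcqa-distill-of-fresh-2-layer-gpqa",
    learning_rate=5e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=321,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=20,
)
```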
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 63 | 14.3363 | 0.2929 |
| No log | 2.0 | 126 | 13.8007 | 0.4040 |
| No log | 3.0 | 189 | 13.1932 | 0.4697 |
| No log | 4.0 | 252 | 12.4231 | 0.4899 |
| No log | 5.0 | 315 | 11.6190 | 0.5101 |
| No log | 6.0 | 378 | 11.4170 | 0.5404 |
| No log | 7.0 | 441 | 12.2002 | 0.4899 |
| 3.3802 | 8.0 | 504 | 11.9545 | 0.4646 |
| 3.3802 | 9.0 | 567 | 13.2518 | 0.5202 |
| 3.3802 | 10.0 | 630 | 11.9140 | 0.5 |
| 3.3802 | 11.0 | 693 | 11.4793 | 0.4545 |
| 3.3802 | 12.0 | 756 | 11.6963 | 0.4798 |
| 3.3802 | 13.0 | 819 | 11.2862 | 0.4848 |
| 3.3802 | 14.0 | 882 | 11.1868 | 0.4949 |
| 3.3802 | 15.0 | 945 | 10.9490 | 0.4646 |
| 0.479 | 16.0 | 1008 | 11.0089 | 0.4899 |
| 0.479 | 17.0 | 1071 | 11.1883 | 0.4798 |
| 0.479 | 18.0 | 1134 | 11.2915 | 0.4697 |
| 0.479 | 19.0 | 1197 | 11.1116 | 0.4747 |
| 0.479 | 20.0 | 1260 | 11.0499 | 0.4747 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
woshiyuanshengaoshou/casrel | woshiyuanshengaoshou | 2024-03-10T09:10:14Z | 179 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"medical",
"token-classification",
"zh",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-09T09:01:37Z | ---
license: apache-2.0
language:
- zh
metrics:
- accuracy
pipeline_tag: token-classification
tags:
- medical
--- |
nadika/wav2vec2-nepali-v2-try-3 | nadika | 2024-03-10T09:02:45Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-03-10T05:22:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
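As a stopgap, a minimal sketch based only on the repository tags (`wav2vec2`, `automatic-speech-recognition`); the audio file name is hypothetical and this is an assumption, not the author's documented usage.
```python
from transformers import pipeline

# Hypothetical quick start inferred from the repo tags.
asr = pipeline("automatic-speech-recognition", model="nadika/wav2vec2-nepali-v2-try-3")
print(asr("sample_nepali.wav")["text"])
```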
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Benevolent/PonyXLVAE | Benevolent | 2024-03-10T08:58:03Z | 2 | 1 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail",
"region:us"
] | text-to-image | 2024-03-10T08:48:43Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/dfpprsq-dcf7be43-4f7d-4a04-80cf-bc06a78d21e9.jpg
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
license: openrail
---
# SDXL
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Benevolent/PonyXLVAE/tree/main) them in the Files & versions tab.
|
afaji/fresh-4-layer-swag-distill-of-fresh-4-layer-gpqa | afaji | 2024-03-10T08:57:16Z | 90 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"multiple-choice",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-03-10T08:34:15Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fresh-4-layer-swag-distill-of-fresh-4-layer-gpqa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fresh-4-layer-swag-distill-of-fresh-4-layer-gpqa
This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 11.8632
- Accuracy: 0.4293
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 321
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 13.8015 | 0.2778 |
| No log | 2.0 | 250 | 14.0268 | 0.3535 |
| No log | 3.0 | 375 | 13.0123 | 0.3838 |
| 1.8616 | 4.0 | 500 | 12.3288 | 0.3535 |
| 1.8616 | 5.0 | 625 | 12.1718 | 0.3737 |
| 1.8616 | 6.0 | 750 | 12.7654 | 0.3889 |
| 1.8616 | 7.0 | 875 | 12.6711 | 0.3838 |
| 0.4769 | 8.0 | 1000 | 12.0719 | 0.4141 |
| 0.4769 | 9.0 | 1125 | 11.8960 | 0.4091 |
| 0.4769 | 10.0 | 1250 | 12.0726 | 0.4192 |
| 0.4769 | 11.0 | 1375 | 11.8632 | 0.4293 |
| 0.1853 | 12.0 | 1500 | 11.6135 | 0.4141 |
| 0.1853 | 13.0 | 1625 | 12.2307 | 0.4141 |
| 0.1853 | 14.0 | 1750 | 11.7646 | 0.4040 |
| 0.1853 | 15.0 | 1875 | 11.6897 | 0.4141 |
| 0.0913 | 16.0 | 2000 | 12.0394 | 0.4091 |
| 0.0913 | 17.0 | 2125 | 11.7915 | 0.4040 |
| 0.0913 | 18.0 | 2250 | 12.0047 | 0.3990 |
| 0.0913 | 19.0 | 2375 | 11.9798 | 0.3939 |
| 0.0436 | 20.0 | 2500 | 12.0208 | 0.4040 |
### Framework versions
- Transformers 4.34.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
Saphbn/distilroberta-base-finetuned-energy-tweets-fullsample | Saphbn | 2024-03-10T08:54:13Z | 14 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:Saphbn/distilroberta-base-finetuned-energy-tweets-fullsample",
"base_model:finetune:Saphbn/distilroberta-base-finetuned-energy-tweets-fullsample",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-02-16T04:47:06Z | ---
license: apache-2.0
base_model: Saphbn/distilroberta-base-finetuned-tweets-fullsample
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-tweets-fullsample
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-tweets-fullsample
This model is a fine-tuned version of [Saphbn/distilroberta-base-finetuned-tweets-fullsample](https://huggingface.co/Saphbn/distilroberta-base-finetuned-tweets-fullsample) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1873
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 111
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2589 | 1.0 | 715 | 2.2181 |
| 2.2097 | 2.0 | 1430 | 2.1982 |
| 2.1894 | 3.0 | 2145 | 2.1873 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
TristanBehrens/heilbronnpodcasts | TristanBehrens | 2024-03-10T08:53:49Z | 56 | 0 | peft | [
"peft",
"pytorch",
"gguf",
"llama",
"generated_from_trainer",
"base_model:jphme/em_german_7b_v01",
"base_model:adapter:jphme/em_german_7b_v01",
"license:llama2",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-10T08:44:05Z | ---
license: llama2
library_name: peft
tags:
- generated_from_trainer
base_model: jphme/em_german_7b_v01
model-index:
- name: out/heilbronnpodcasts
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: jphme/em_german_7b_v01
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: true
load_in_4bit: false
strict: false
datasets:
- path: TristanBehrens/HeilbronnPodcastsWindowed
type: completion
dataset_prepared_path:
val_set_size: 0.0
output_dir: ./out/heilbronnpodcasts
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project:
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 16
num_epochs: 4
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0002
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
s2_attention:
warmup_steps: 10
evals_per_epoch: 4
eval_table_size:
eval_max_new_tokens: 128
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
# out/heilbronnpodcasts
This model is a fine-tuned version of [jphme/em_german_7b_v01](https://huggingface.co/jphme/em_german_7b_v01) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 4
### Training results
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.39.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.0 |
HachiML/myBit-Llama2-jp-127M-test-3 | HachiML | 2024-03-10T08:52:25Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T08:27:07Z | ---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
model-index:
- name: myBit-Llama2-jp-127M-test-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# myBit-Llama2-jp-127M-test-3
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 5.8378
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2.4e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 250
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 10.1753 | 0.04 | 100 | 9.3900 |
| 9.0259 | 0.07 | 200 | 8.5694 |
| 8.0861 | 0.11 | 300 | 7.5814 |
| 7.2827 | 0.15 | 400 | 7.0210 |
| 6.9154 | 0.18 | 500 | 6.7973 |
| 6.738 | 0.22 | 600 | 6.6274 |
| 6.6074 | 0.26 | 700 | 6.5311 |
| 6.5101 | 0.29 | 800 | 6.4338 |
| 6.4433 | 0.33 | 900 | 6.3696 |
| 6.382 | 0.36 | 1000 | 6.3051 |
| 6.3157 | 0.4 | 1100 | 6.2578 |
| 6.2805 | 0.44 | 1200 | 6.2139 |
| 6.2317 | 0.47 | 1300 | 6.1715 |
| 6.2085 | 0.51 | 1400 | 6.1438 |
| 6.1702 | 0.55 | 1500 | 6.1099 |
| 6.1337 | 0.58 | 1600 | 6.0789 |
| 6.113 | 0.62 | 1700 | 6.0530 |
| 6.0663 | 0.66 | 1800 | 6.0150 |
| 6.0446 | 0.69 | 1900 | 5.9880 |
| 6.0267 | 0.73 | 2000 | 5.9568 |
| 5.9972 | 0.77 | 2100 | 5.9274 |
| 5.9599 | 0.8 | 2200 | 5.9109 |
| 5.9369 | 0.84 | 2300 | 5.8884 |
| 5.9266 | 0.88 | 2400 | 5.8689 |
| 5.9068 | 0.91 | 2500 | 5.8548 |
| 5.9091 | 0.95 | 2600 | 5.8462 |
| 5.879 | 0.99 | 2700 | 5.8378 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Litzy619/V0309P2 | Litzy619 | 2024-03-10T08:51:04Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-10T03:00:57Z | ---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0309P2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0309P2
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.715 | 0.09 | 10 | 0.1736 |
| 0.1331 | 0.17 | 20 | 0.0929 |
| 0.1048 | 0.26 | 30 | 0.0795 |
| 0.0918 | 0.34 | 40 | 0.0688 |
| 0.0876 | 0.43 | 50 | 0.0683 |
| 0.0816 | 0.51 | 60 | 0.0639 |
| 0.0755 | 0.6 | 70 | 0.0607 |
| 0.0797 | 0.68 | 80 | 0.0603 |
| 0.068 | 0.77 | 90 | 0.0595 |
| 0.0652 | 0.85 | 100 | 0.0606 |
| 0.0713 | 0.94 | 110 | 0.0590 |
| 0.0684 | 1.02 | 120 | 0.0607 |
| 0.0576 | 1.11 | 130 | 0.0647 |
| 0.0554 | 1.19 | 140 | 0.0556 |
| 0.0538 | 1.28 | 150 | 0.0537 |
| 0.0515 | 1.37 | 160 | 0.0625 |
| 0.0532 | 1.45 | 170 | 0.0578 |
| 0.0481 | 1.54 | 180 | 0.0615 |
| 0.0519 | 1.62 | 190 | 0.0576 |
| 0.0548 | 1.71 | 200 | 0.0575 |
| 0.0541 | 1.79 | 210 | 0.0578 |
| 0.0481 | 1.88 | 220 | 0.0645 |
| 0.0478 | 1.96 | 230 | 0.0594 |
| 0.043 | 2.05 | 240 | 0.0607 |
| 0.0346 | 2.13 | 250 | 0.0659 |
| 0.031 | 2.22 | 260 | 0.0739 |
| 0.029 | 2.3 | 270 | 0.0767 |
| 0.0357 | 2.39 | 280 | 0.0749 |
| 0.0368 | 2.47 | 290 | 0.0713 |
| 0.0382 | 2.56 | 300 | 0.0684 |
| 0.0354 | 2.65 | 310 | 0.0685 |
| 0.0303 | 2.73 | 320 | 0.0689 |
| 0.0331 | 2.82 | 330 | 0.0696 |
| 0.0315 | 2.9 | 340 | 0.0700 |
| 0.0345 | 2.99 | 350 | 0.0699 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
frankenmerger/delta-4B-super | frankenmerger | 2024-03-10T08:48:43Z | 53 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"conversational",
"custom_code",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T18:18:08Z | ---
widget:
- text: Hello, My name is Junpei Iori, who are you?
example_title: Identity
- text: Describe Aurora Borealis
example_title: Capabilities
- text: Create a fastapi endpoint to retrieve the weather given a zip code.
example_title: Coding
pipeline_tag: text-generation
license: apache-2.0
language:
- en
tags:
- conversational
---
delta-4b-super is a frankenmerge of Phi-2-super.
## 💻 Usage
```python
# pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "gmonsoon/Delta-4B-Base"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
abdallahashrafx/wav2vec2-large-mms-1b-turkish-colab | abdallahashrafx | 2024-03-10T08:46:42Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T06:56:44Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
devashishg/my-trained-model | devashishg | 2024-03-10T08:40:29Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-03-10T08:40:27Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: photo of a man closeup portrait
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
KinGeorge/ppo-LunarLander-v2 | KinGeorge | 2024-03-10T08:40:01Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-12-09T02:22:14Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: ppo-mlp
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 274.44 +/- 22.70
name: mean_reward
verified: false
---
# **ppo-mlp** Agent playing **LunarLander-v2**
This is a trained model of a **ppo-mlp** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
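Pending the author's own snippet, here is a minimal sketch of loading and evaluating the agent; the checkpoint file name inside the repository is an assumption.
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumption: the checkpoint is stored as "ppo-LunarLander-v2.zip" in the repo.
checkpoint = load_from_hub(repo_id="KinGeorge/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```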
|
ligut14789/dummy-model | ligut14789 | 2024-03-10T08:26:50Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-03-10T08:25:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
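Until this section is completed, a minimal sketch based only on the repository tags (`camembert`, `fill-mask`); the example sentence is arbitrary.
```python
from transformers import pipeline

# Hypothetical quick start inferred from the repo tags.
fill_mask = pipeline("fill-mask", model="ligut14789/dummy-model")
print(fill_mask("Le camembert est <mask>."))
```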
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MichaelKim/ONS-SOLAR-10.7B_trained_v8 | MichaelKim | 2024-03-10T08:16:06Z | 2 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:ONS-AI-RESEARCH/ONS-SOLAR-10.7B",
"base_model:adapter:ONS-AI-RESEARCH/ONS-SOLAR-10.7B",
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-03-10T06:35:57Z | ---
license: cc-by-nc-4.0
library_name: peft
tags:
- generated_from_trainer
base_model: ONS-AI-RESEARCH/ONS-SOLAR-10.7B
model-index:
- name: ONS-SOLAR-10.7B_trained_v8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ONS-SOLAR-10.7B_trained_v8
This model is a fine-tuned version of [ONS-AI-RESEARCH/ONS-SOLAR-10.7B](https://huggingface.co/ONS-AI-RESEARCH/ONS-SOLAR-10.7B) on an unknown dataset.
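Because the card lists `peft` as the library and the base model above, a minimal, assumed loading sketch would attach the adapter to the base checkpoint; this is not the author's documented usage.
```python
# Hypothetical loading sketch inferred from the card metadata (PEFT adapter on
# ONS-AI-RESEARCH/ONS-SOLAR-10.7B).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("ONS-AI-RESEARCH/ONS-SOLAR-10.7B", device_map="auto")
model = PeftModel.from_pretrained(base, "MichaelKim/ONS-SOLAR-10.7B_trained_v8")
tokenizer = AutoTokenizer.from_pretrained("ONS-AI-RESEARCH/ONS-SOLAR-10.7B")
```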
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 10
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Tokenizers 0.15.2 |
core-outline/gemma-2b-instruct | core-outline | 2024-03-10T08:11:13Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"gguf",
"gemma",
"text-generation",
"conversational",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2304.06364",
"arxiv:2206.04615",
"arxiv:1804.06876",
"arxiv:2110.08193",
"arxiv:2009.11462",
"arxiv:2101.11718",
"arxiv:1804.09301",
"arxiv:2109.07958",
"arxiv:2203.09509",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T01:24:36Z | ---
library_name: transformers
tags: []
widget:
- messages:
- role: user
content: Explain what monthly recurring revenue is.
inference:
parameters:
max_new_tokens: 200
extra_gated_heading: "Access Gemma on Hugging Face"
extra_gated_prompt: "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged-in to Hugging Face and click below. Requests are processed immediately."
extra_gated_button_content: "Acknowledge license"
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
---
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit](https://ai.google.dev/responsible)
* [Gemma on Kaggle](https://www.kaggle.com/models/google/gemma)
* [Gemma on Vertex Model Garden](https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/335?version=gemma-2b-it-gg-hf)
**Terms of Use**: [Terms](https://www.kaggle.com/models/google/gemma/license/consent)
**Authors**: Google
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
They are text-to-text, decoder-only large language models, available in English,
with open weights, pre-trained variants, and instruction-tuned variants. Gemma
models are well-suited for a variety of text generation tasks, including
question answering, summarization, and reasoning. Their relatively small size
makes it possible to deploy them in environments with limited resources such as
a laptop, desktop or your own cloud infrastructure, democratizing access to
state of the art AI models and helping foster innovation for everyone.
### Usage
Below we share some code snippets on how to quickly get started with running the model. First make sure to `pip install -U transformers`, then copy the snippet from the section that is relevant for your use case.
#### Running the model on a CPU
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("core-outline/gemma-2b-instruct")
model = AutoModelForCausalLM.from_pretrained("core-outline/gemma-2b-instruct")
input_text = "Explain what churn rate is."
input_ids = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a single / multi GPU
```python
# pip install accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("core-outline/gemma-2b-instruct")
model = AutoModelForCausalLM.from_pretrained("core-outline/gemma-2b-instruct", device_map="auto")
input_text = "How is click through rate calculated."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Running the model on a GPU using different precisions
* _Using `torch.float16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("core-outline/gemma-2b-instruct")
model = AutoModelForCausalLM.from_pretrained("core-outline/gemma-2b-instruct", device_map="auto", torch_dtype=torch.float16)
input_text = "What is an RFM analysis?"
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using `torch.bfloat16`_
```python
# pip install accelerate
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("core-outline/gemma-2b-instruct")
model = AutoModelForCausalLM.from_pretrained("core-outline/gemma-2b-instruct", device_map="auto", torch_dtype=torch.bfloat16)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Quantized Versions through `bitsandbytes`
* _Using 8-bit precision (int8)_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained("core-outline/gemma-2b-instruct")
model = AutoModelForCausalLM.from_pretrained("core-outline/gemma-2b-instruct", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
* _Using 4-bit precision_
```python
# pip install bitsandbytes accelerate
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig
quantization_config = BitsAndBytesConfig(load_in_4bit=True)
tokenizer = AutoTokenizer.from_pretrained("core-outline/gemma-2b-instruct")
model = AutoModelForCausalLM.from_pretrained("core-outline/gemma-2b-instruct", quantization_config=quantization_config)
input_text = "Write me a poem about Machine Learning."
input_ids = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**input_ids)
print(tokenizer.decode(outputs[0]))
```
#### Other optimizations
* _Flash Attention 2_
First make sure to install `flash-attn` in your environment `pip install flash-attn`
```diff
model = AutoModelForCausalLM.from_pretrained(
model_id,
torch_dtype=torch.float16,
+ attn_implementation="flash_attention_2"
).to(0)
```
### Chat Template
The instruction-tuned models use a chat template that must be adhered to for conversational use.
The easiest way to apply it is using the tokenizer's built-in chat template, as shown in the following snippet.
Let's load the model and apply the chat template to a conversation. In this example, we'll start with a single user interaction:
```py
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model_id = "gg-hf/gemma-2b-it"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{ "role": "user", "content": "Write a hello world program" },
]
prompt = tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)
```
At this point, the prompt contains the following text:
```
<bos><start_of_turn>user
Write a hello world program<end_of_turn>
<start_of_turn>model
```
As you can see, each turn is preceded by a `<start_of_turn>` delimiter and then the role of the entity
(either `user`, for content supplied by the user, or `model` for LLM responses). Turns finish with
the `<end_of_turn>` token.
You can follow this format to build the prompt manually, if you need to do it without the tokenizer's
chat template.
After the prompt is ready, generation can be performed like this:
```py
inputs = tokenizer.encode(prompt, add_special_tokens=False, return_tensors="pt")
outputs = model.generate(input_ids=inputs.to(model.device), max_new_tokens=150)
```
### Inputs and outputs
* **Input:** Text string, such as a question, a prompt, or a document to be
summarized.
* **Output:** Generated English-language text in response to the input, such
as an answer to a question, or a summary of a document.
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources, totaling 6 trillion tokens. Here are the key components:
* Web Documents: A diverse collection of web text ensures the model is exposed
to a broad range of linguistic styles, topics, and vocabulary. Primarily
English-language content.
* Code: Exposing the model to code helps it to learn the syntax and patterns of
programming languages, which improves its ability to generate code or
understand code-related questions.
* Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
The combination of these diverse data sources is crucial for training a powerful
language model that can handle a wide variety of different tasks and text
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
* CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering was
applied at multiple stages in the data preparation process to ensure the
exclusion of harmful and illegal content
* Sensitive Data Filtering: As part of making Gemma pre-trained models safe and
reliable, automated techniques were used to filter out certain personal
information and other sensitive data from training sets.
* Additional methods: Filtering based on content quality and safety in line with
[our policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11).
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using the latest generation of
[Tensor Processing Unit (TPU)](https://cloud.google.com/tpu/docs/intro-to-tpu) hardware (TPUv5e).
Training large language models requires significant computational power. TPUs,
designed specifically for matrix operations common in machine learning, offer
several advantages in this domain:
* Performance: TPUs are specifically designed to handle the massive computations
involved in training LLMs. They can speed up training considerably compared to
CPUs.
* Memory: TPUs often come with large amounts of high-bandwidth memory, allowing
for the handling of large models and batch sizes during training. This can
lead to better model quality.
* Scalability: TPU Pods (large clusters of TPUs) provide a scalable solution for
handling the growing complexity of large foundation models. You can distribute
training across multiple TPU devices for faster and more efficient processing.
* Cost-effectiveness: In many scenarios, TPUs can provide a more cost-effective
solution for training large models compared to CPU-based infrastructure,
especially when considering the time and resources saved due to faster
training.
* These advantages are aligned with
[Google's commitments to operate sustainably](https://sustainability.google/operating-sustainably/).
### Software
Training was done using [JAX](https://github.com/google/jax) and [ML Pathways](https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/ml-pathways).
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models.
ML Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
[foundation models](https://ai.google/discover/foundation-models/), including large language models like
these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models](https://arxiv.org/abs/2312.11805); "the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [MMLU](https://arxiv.org/abs/2009.03300) | 5-shot, top-1 | 42.3 | 64.3 |
| [HellaSwag](https://arxiv.org/abs/1905.07830) | 0-shot |71.4 | 81.2 |
| [PIQA](https://arxiv.org/abs/1911.11641) | 0-shot | 77.3 | 81.2 |
| [SocialIQA](https://arxiv.org/abs/1904.09728) | 0-shot | 59.7 | 51.8 |
| [BooIQ](https://arxiv.org/abs/1905.10044) | 0-shot | 69.4 | 83.2 |
| [WinoGrande](https://arxiv.org/abs/1907.10641) | partial score | 65.4 | 72.3 |
| [CommonsenseQA](https://arxiv.org/abs/1811.00937) | 7-shot | 65.3 | 71.3 |
| [OpenBookQA](https://arxiv.org/abs/1809.02789) | | 47.8 | 52.8 |
| [ARC-e](https://arxiv.org/abs/1911.01547) | | 73.2 | 81.5 |
| [ARC-c](https://arxiv.org/abs/1911.01547) | | 42.1 | 53.2 |
| [TriviaQA](https://arxiv.org/abs/1705.03551) | 5-shot | 53.2 | 63.4 |
| [Natural Questions](https://github.com/google-research-datasets/natural-questions) | 5-shot | - | 23 |
| [HumanEval](https://arxiv.org/abs/2107.03374) | pass@1 | 22.0 | 32.3 |
| [MBPP](https://arxiv.org/abs/2108.07732) | 3-shot | 29.2 | 44.4 |
| [GSM8K](https://arxiv.org/abs/2110.14168) | maj@1 | 17.7 | 46.4 |
| [MATH](https://arxiv.org/abs/2103.03874) | 4-shot | 11.8 | 24.3 |
| [AGIEval](https://arxiv.org/abs/2304.06364) | | 24.2 | 41.7 |
| [BIG-Bench](https://arxiv.org/abs/2206.04615) | | 35.2 | 55.1 |
| ------------------------------ | ------------- | ----------- | --------- |
| **Average** | | **54.0** | **56.4** |
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
* Text-to-Text Content Safety: Human evaluation on prompts covering safety
policies including child sexual abuse and exploitation, harassment, violence
and gore, and hate speech.
* Text-to-Text Representational Harms: Benchmark against relevant academic
datasets such as [WinoBias](https://arxiv.org/abs/1804.06876) and [BBQ Dataset](https://arxiv.org/abs/2110.08193v2).
* Memorization: Automated evaluation of memorization of training data, including
the risk of personally identifiable information exposure.
* Large-scale harm: Tests for "dangerous capabilities," such as chemical,
biological, radiological, and nuclear (CBRN) risks.
### Evaluation Results
The results of ethics and safety evaluations are within acceptable thresholds
for meeting [internal policies](https://storage.googleapis.com/gweb-uniblog-publish-prod/documents/2023_Google_AI_Principles_Progress_Update.pdf#page=11) for categories such as child
safety, content safety, representational harms, memorization, and large-scale harms.
On top of robust internal evaluations, the results of well known safety
benchmarks like BBQ, BOLD, Winogender, Winobias, RealToxicity, and TruthfulQA
are shown here.
| Benchmark | Metric | 2B Params | 7B Params |
| ------------------------------ | ------------- | ----------- | --------- |
| [RealToxicity](https://arxiv.org/abs/2009.11462) | average | 6.86 | 7.90 |
| [BOLD](https://arxiv.org/abs/2101.11718) | | 45.57 | 49.08 |
| [CrowS-Pairs](https://aclanthology.org/2020.emnlp-main.154/) | top-1 | 45.82 | 51.33 |
| [BBQ Ambig](https://arxiv.org/abs/2110.08193v2) | 1-shot, top-1 | 62.58 | 92.54 |
| [BBQ Disambig](https://arxiv.org/abs/2110.08193v2) | top-1 | 54.62 | 71.99 |
| [Winogender](https://arxiv.org/abs/1804.09301) | top-1 | 51.25 | 54.17 |
| [TruthfulQA](https://arxiv.org/abs/2109.07958) | | 44.84 | 31.81 |
| [Winobias 1_2](https://arxiv.org/abs/1804.06876) | | 56.12 | 59.09 |
| [Winobias 2_2](https://arxiv.org/abs/1804.06876) | | 91.10 | 92.23 |
| [Toxigen](https://arxiv.org/abs/2203.09509) | | 29.77 | 39.59 |
| ------------------------------ | ------------- | ----------- | --------- |
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open Large Language Models (LLMs) have a wide range of applications across
various industries and domains. The following list of potential uses is not
comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
* Content Creation and Communication
* Text Generation: These models can be used to generate creative text formats
such as poems, scripts, code, marketing copy, and email drafts.
* Chatbots and Conversational AI: Power conversational interfaces for customer
service, virtual assistants, or interactive applications.
* Text Summarization: Generate concise summaries of a text corpus, research
papers, or reports.
* Research and Education
* Natural Language Processing (NLP) Research: These models can serve as a
foundation for researchers to experiment with NLP techniques, develop
algorithms, and contribute to the advancement of the field.
* Language Learning Tools: Support interactive language learning experiences,
aiding in grammar correction or providing writing practice.
* Knowledge Exploration: Assist researchers in exploring large bodies of text
by generating summaries or answering questions about specific topics.
### Limitations
* Training Data
* The quality and diversity of the training data significantly influence the
model's capabilities. Biases or gaps in the training data can lead to
limitations in the model's responses.
* The scope of the training dataset determines the subject areas the model can
handle effectively.
* Context and Task Complexity
* LLMs are better at tasks that can be framed with clear prompts and
instructions. Open-ended or highly complex tasks might be challenging.
* A model's performance can be influenced by the amount of context provided
(longer context generally leads to better outputs, up to a certain point).
* Language Ambiguity and Nuance
* Natural language is inherently complex. LLMs might struggle to grasp subtle
nuances, sarcasm, or figurative language.
* Factual Accuracy
* LLMs generate responses based on information they learned from their
training datasets, but they are not knowledge bases. They may generate
incorrect or outdated factual statements.
* Common Sense
* LLMs rely on statistical patterns in language. They might lack the ability
to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of large language models (LLMs) raises several ethical concerns.
In creating an open model, we have carefully considered the following:
* Bias and Fairness
* LLMs trained on large-scale, real-world text data can reflect socio-cultural
biases embedded in the training material. These models underwent careful
scrutiny, input data pre-processing described and posterior evaluations
reported in this card.
* Misinformation and Misuse
* LLMs can be misused to generate text that is false, misleading, or harmful.
* Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit](http://ai.google.dev/gemma/responsible).
* Transparency and Accountability:
* This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
* A responsibly developed open model offers the opportunity to share
innovation by making LLM technology accessible to developers and researchers
across the AI ecosystem.
Risks identified and mitigations:
* Perpetuation of biases: It's encouraged to perform continuous monitoring
(using evaluation metrics, human review) and the exploration of de-biasing
techniques during model training, fine-tuning, and other use cases.
* Generation of harmful content: Mechanisms and guidelines for content safety
are essential. Developers are encouraged to exercise caution and implement
appropriate content safety safeguards based on their specific product policies
and application use cases.
* Misuse for malicious purposes: Technical limitations and developer and
end-user education can help mitigate against malicious applications of LLMs.
Educational resources and reporting mechanisms for users to flag misuse are
provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy](https://ai.google.dev/gemma/prohibited_use_policy).
* Privacy violations: Models were trained on data filtered for removal of PII
(Personally Identifiable Information). Developers are encouraged to adhere to
privacy regulations with privacy-preserving techniques.
### Benefits
At the time of release, this family of models provides high-performance open
large language model implementations designed from the ground up for Responsible
AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
|
uukuguy/speechless-starcoder2-7b | uukuguy | 2024-03-10T08:04:39Z | 47 | 5 | transformers | [
"transformers",
"safetensors",
"starcoder2",
"text-generation",
"code",
"en",
"dataset:teknium/OpenHermes-2.5",
"dataset:TokenBender/python_eval_instruct_51k",
"dataset:codefuse-ai/Evol-instruction-66k",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T05:01:44Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
datasets:
- teknium/OpenHermes-2.5
- TokenBender/python_eval_instruct_51k
- codefuse-ai/Evol-instruction-66k
tags:
- code
license: apache-2.0
model-index:
- name: SpeechlessCoder
results:
- task:
type: text-generation
dataset:
type: openai_humaneval
name: HumanEval
metrics:
- name: pass@1
type: pass@1
value: 0.0
verified: false
---
<p><h1> speechless-starcoder2-7b </h1></p>
Code: https://github.com/uukuguy/speechless
The following datasets were used to fine-tune bigcode/starcoder2-7b in order to improve the model's reasoning and planning abilities.
In total: 986k samples.
- teknium/OpenHermes-2.5
- TokenBender/python_eval_instruct_51k
- Spider
- codefuse-ai/Evol-instruction-66k
## How to Prompt the Model
This model accepts the Alpaca instruction format.
For example:
```
You are an intelligent programming assistant.
### Instruction:
Implement a linked list in C++
### Response:
```
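As a rough, hypothetical sketch (not part of the original card), the prompt above can be fed to the model with `transformers` like this; the generation settings are arbitrary:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "uukuguy/speechless-starcoder2-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Alpaca-style prompt, as described above.
prompt = (
    "You are an intelligent programming assistant.\n\n"
    "### Instruction:\n"
    "Implement a linked list in C++\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```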
## HumanEval
| Metric | Value |
| --- | --- |
| humaneval-python | |
## lm-evaluation-harness
```json
{'ARC (acc_norm)': ,
'HellaSwag (acc_norm)': ,
'MMLU (acc)': ,
'TruthfulQA (mc2)': ,
'Winoground (acc)': ,
'GSM8K (acc)': ,
'DROP (f1)': ,
'Open LLM Score': }
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-starcoder2-7b)
| Metric | Value |
|-----------------------|---------------------------|
| Avg. | |
| ARC (25-shot) | |
| HellaSwag (10-shot) | |
| MMLU (5-shot) | |
| TruthfulQA (0-shot) | |
| Winogrande (5-shot) | |
| GSM8K (5-shot) | |
| DROP (3-shot) | |
|
OwOOwO/eacc_contTrain_l2_g54l1 | OwOOwO | 2024-03-10T08:00:18Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T06:21:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
OwOOwO/eacc_nst | OwOOwO | 2024-03-10T07:59:58Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T04:31:39Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jeiku/Weekend1 | jeiku | 2024-03-10T07:52:36Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T07:47:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
frankmorales2020/OpenMath-Mistral-7B-v0.1-hf-squad2-flash-attention-2 | frankmorales2020 | 2024-03-10T07:48:47Z | 8 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:nvidia/OpenMath-Mistral-7B-v0.1-hf",
"base_model:adapter:nvidia/OpenMath-Mistral-7B-v0.1-hf",
"license:apache-2.0",
"region:us"
] | null | 2024-03-09T13:36:28Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: nvidia/OpenMath-Mistral-7B-v0.1-hf
model-index:
- name: OpenMath-Mistral-7B-v0.1-hf-squad2-flash-attention-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# OpenMath-Mistral-7B-v0.1-hf-squad2-flash-attention-2
This model is a fine-tuned version of [nvidia/OpenMath-Mistral-7B-v0.1-hf](https://huggingface.co/nvidia/OpenMath-Mistral-7B-v0.1-hf) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
daehan17/0310try2 | daehan17 | 2024-03-10T07:48:44Z | 29 | 0 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-03-10T07:29:36Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text-to-image finetuning - daehan17/0310try2
This pipeline was finetuned from **stabilityai/stable-diffusion-xl-base-1.0** on the **lambdalabs/pokemon-blip-captions** dataset. Below are some example images generated with the finetuned pipeline using the following prompt: a donald trumpt creature:




Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
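Pending the TODO above, a minimal, hypothetical sketch of loading this pipeline with `diffusers` (the prompt is an arbitrary example):
```python
import torch
from diffusers import StableDiffusionXLPipeline

# Load the fine-tuned SDXL pipeline from this repository.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "daehan17/0310try2", torch_dtype=torch.float16
).to("cuda")

image = pipe(prompt="a cute blue creature with wings", num_inference_steps=30).images[0]
image.save("sample.png")
```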
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Benevolent/meinahentai_v4 | Benevolent | 2024-03-10T07:47:21Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-03-10T07:21:01Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
1girl,1boy,nude,hetero,nipples,large breasts,(paizuri:1.4),hands on own
chest,breasts squeezed together,(facial:1.3),from
above,pov,penis,censored,sweat,cum,cum on breasts,(solo focus,steaming
body,ahegao:1.3),white hair,red eyes,(blush),collarbone
parameters:
negative_prompt: >-
(worst quality, low quality:1.4), monochrome, zombie, (interlocked
fingers:1.2),
output:
url: images/00003.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: null
---
# MH
<Gallery />
## Download model
Weights for this model are available in Safetensors format.
[Download](/Benevolent/meinahentai_v4/tree/main) them in the Files & versions tab.
|
KietB/distilbert-base-uncased-finetuned-cola | KietB | 2024-03-10T07:44:19Z | 12 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-05T15:33:07Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8655
- Matthews Correlation: 0.5313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5185 | 1.0 | 535 | 0.4555 | 0.4473 |
| 0.3444 | 2.0 | 1070 | 0.4612 | 0.5169 |
| 0.2311 | 3.0 | 1605 | 0.6295 | 0.5014 |
| 0.1724 | 4.0 | 2140 | 0.8351 | 0.5083 |
| 0.1238 | 5.0 | 2675 | 0.8655 | 0.5313 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Benevolent/EasyNG | Benevolent | 2024-03-10T07:38:02Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-03-10T07:37:28Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: '-'
output:
url: images/sample01.png
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: easynegative
license: apache-2.0
---
# EASYNG
<Gallery />
## Trigger words
You should use `easynegative` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/Benevolent/EasyNG/tree/main) them in the Files & versions tab.
|
Benevolent/Badartist | Benevolent | 2024-03-10T07:33:00Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-03-10T07:32:58Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: "UNICODE\0\0 \0(\0m\0a\0s\0t\0e\0r\0p\0i\0e\0c\0e\0,\0 \0b\0e\0s\0t\0 \0q\0u\0a\0l\0i\0t\0y\0,\0 \0g\0l\0o\0w\0i\0n\0g\0 \0l\0i\0g\0h\0t\0,\0 \0g\0l\0i\0s\0t\0e\0n\0i\0n\0g\0,\0 \0s\0h\0i\0n\0y\0 \0s\0k\0i\0n\0,\0 \0u\0l\0t\0r\0a\0 \0d\0e\0t\0a\0i\0l\0e\0d\0,\0 \0d\0e\0t\0a\0i\0l\0e\0d\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0,\0 \0c\0o\0m\0p\0l\0e\0x\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0)\0,\0 \0(\0p\0e\0r\0f\0e\0c\0t\0 \0f\0a\0c\0e\0,\0 \0d\0e\0t\0a\0i\0l\0e\0d\0 \0f\0a\0c\0e\0,\0 \0d\0e\0t\0a\0i\0l\0e\0d\0 \0e\0y\0e\0s\0,\0p\0e\0r\0f\0e\0c\0t\0 \0h\0a\0n\0d\0s\0,\0p\0e\0r\0f\0e\0c\0t\0 \0f\0i\0n\0g\0e\0r\0s\0)\0,\0 \08\0k\0 \0w\0a\0l\0l\0p\0a\0p\0e\0r\0,\0 \0B\0e\0a\0u\0t\0i\0f\0u\0l\0 \0F\0i\0n\0g\0e\0r\0,\0 \0B\0e\0a\0u\0t\0i\0f\0u\0l\0 \0b\0o\0d\0y\0,\0 \0B\0e\0a\0u\0t\0i\0f\0u\0l\0 \0c\0h\0a\0r\0a\0c\0t\0e\0r\0 \0d\0e\0s\0i\0g\0n\0,\0 \0p\0e\0r\0f\0e\0c\0t\0 \0f\0a\0c\0e\0,\0 \0p\0e\0r\0f\0e\0c\0t\0 \0b\0a\0l\0a\0n\0c\0e\0,\0 \0o\0f\0f\0i\0c\0i\0a\0l\0 \0a\0r\0t\0,\0 \0e\0x\0t\0r\0e\0m\0e\0l\0y\0 \0d\0e\0t\0a\0i\0l\0e\0d\0 \0C\0G\0 \0u\0n\0i\0t\0y\0 \08\0k\0 \0w\0a\0l\0l\0p\0a\0p\0e\0r\0,\0 \0p\0e\0r\0f\0e\0c\0t\0 \0l\0i\0g\0h\0t\0i\0n\0g\0,\0 \0C\0o\0l\0o\0r\0f\0u\0l\0,\0 \0B\0r\0i\0g\0h\0t\0_\0F\0r\0o\0n\0t\0_\0f\0a\0c\0e\0_\0L\0i\0g\0h\0t\0i\0n\0g\0,\0 \0(\0m\0a\0s\0t\0e\0r\0p\0i\0e\0c\0e\0:\01\0.\00\0)\0,\0(\0b\0e\0s\0t\0_\0q\0u\0a\0l\0i\0t\0y\0:\01\0.\00\0)\0,\0 \0u\0l\0t\0r\0a\0 \0h\0i\0g\0h\0 \0r\0e\0s\0,\04\0K\0,\0u\0l\0t\0r\0a\0-\0d\0e\0t\0a\0i\0l\0e\0d\0,\0 \0p\0h\0o\0t\0o\0g\0r\0a\0p\0h\0y\0,\0 \08\0K\0,\0 \0H\0D\0R\0,\0 \0h\0i\0g\0h\0r\0e\0s\0,\0 \0a\0b\0s\0u\0r\0d\0r\0e\0s\0:\01\0.\02\0,\0 \0K\0o\0d\0a\0k\0 \0p\0o\0r\0t\0r\0a\0 \04\00\00\0,\0 \0b\0l\0u\0r\0r\0y\0 \0b\0a\0c\0k\0g\0r\0o\0u\0n\0d\0,\0 \0b\0o\0k\0e\0h\0,\0 \0l\0e\0n\0s\0 \0f\0l\0a\0r\0e\0,\0 \0p\0r\0o\0f\0e\0s\0s\0i\0o\0n\0a\0l\0 \0p\0h\0o\0t\0o\0g\0r\0a\0p\0h\0,\0 \0(\0v\0i\0b\0r\0a\0n\0t\0_\0c\0o\0l\0o\0r\0:\01\0.\02\0)\0,\0(\0b\0e\0a\0u\0t\0i\0f\0u\0l\0_\0f\0a\0c\0e\0:\01\0.\02\0)\0,\0 \02\0d\0,\0 \0a\0n\0i\0m\0e\0,\0 \0b\0r\0e\0a\0s\0t\0s\0,\0 \0h\0a\0r\0n\0e\0s\0s\0,\0 \0"
output:
url: >-
images/E4507B8FF0DB2540F0DB344A18E5A2151192B3B62F95F896DCAA3BD96B49E99D.jpeg
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: badartistnegative_v1.3
license: apache-2.0
---
# BADArtist
<Gallery />
## Trigger words
You should use `badartistnegative_v1.3` to trigger the image generation.
## Download model
Weights for this model are available in PyTorch format.
[Download](/Benevolent/Badartist/tree/main) them in the Files & versions tab.
|
varcoder/resnet-101-finetuned-CivilEng11k | varcoder | 2024-03-10T07:31:26Z | 26 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"resnet",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-01-13T18:02:37Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: resnet-101-finetuned-CivilEng11k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-101-finetuned-CivilEng11k
This model is a fine-tuned version of [microsoft/resnet-101](https://huggingface.co/microsoft/resnet-101) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5490
- Accuracy: 0.8542
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 320
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.81 | 3 | 1.0724 | 0.5729 |
| No log | 1.89 | 7 | 0.9717 | 0.6542 |
| 1.0293 | 2.97 | 11 | 0.8594 | 0.6678 |
| 1.0293 | 3.78 | 14 | 0.7830 | 0.7017 |
| 1.0293 | 4.86 | 18 | 0.6764 | 0.7593 |
| 0.78 | 5.95 | 22 | 0.6072 | 0.7831 |
| 0.78 | 6.76 | 25 | 0.5745 | 0.8339 |
| 0.78 | 7.84 | 29 | 0.5489 | 0.8508 |
| 0.6037 | 8.11 | 30 | 0.5490 | 0.8542 |
### Framework versions
- Transformers 4.30.2
- Pytorch 1.13.1+cpu
- Datasets 2.13.1
- Tokenizers 0.13.3
|
wavymulder/Analog-Diffusion-XL | wavymulder | 2024-03-10T07:30:06Z | 0 | 8 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-03-08T06:52:49Z | ---
license: creativeml-openrail-m
thumbnail: "https://huggingface.co/wavymulder/Analog-Diffusion-XL/resolve/main/imgs/tile_page1.jpg"
---
**Analog Diffusion XL**

[*SAFETENSORS DOWNLOAD LINK*](https://huggingface.co/wavymulder/Analog-Diffusion-XL/resolve/main/analogXLrelease.safetensors?download=true) - This is an **SDXL LoRA** trained on a diverse set of analog photographs.
I typically use this LoRA at 0.6-1.0 Strength. In your prompt, use the activation token: `analog style`
The LoRA sometimes pushes a green tint too hard; in these instances you can use negative prompts such as `green` or `split-toning` to help. Some prompts may struggle to generate consistently in color; in these instances you can add `sepia` and `grayscale` to the negative prompt.
Please see [this document where I share the parameters (prompt, sampler, seed, etc.) used for all example images.](https://huggingface.co/wavymulder/Analog-Diffusion-XL/resolve/main/parameters_used_examples.txt)



Trained on *sd_xl_base_1.0_0.9vae.safetensors* . All examples are generated on *sd_xl_base_1.0_0.9vae.safetensors* using this LoRA. No hi-res fix, upscaling, inpainting, etc. done on any samples here.
I have tested this LoRA with some popular community photo SDXL checkpoints and gotten good results; you may just have to play around with the LoRA strength to achieve your desired result (it may need to be >1).
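For reference, a minimal `diffusers` sketch (not from the original card) of using the LoRA on top of the SDXL base model; the prompt and the 0.8 strength are arbitrary examples within the range suggested above:
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# LoRA weights from this repository (filename from the download link above).
pipe.load_lora_weights("wavymulder/Analog-Diffusion-XL", weight_name="analogXLrelease.safetensors")

image = pipe(
    prompt="analog style photograph of a lighthouse at dusk",
    negative_prompt="green, split-toning",
    cross_attention_kwargs={"scale": 0.8},  # LoRA strength
).images[0]
image.save("analog.png")
```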
|
ISTA-DASLab/gemma-2b-AQLM-2Bit-2x8-hf | ISTA-DASLab | 2024-03-10T07:23:32Z | 79 | 4 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:2401.06118",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"aqlm",
"region:us"
] | text-generation | 2024-03-06T18:34:43Z | Official [AQLM](https://arxiv.org/abs/2401.06118) quantization of [google/gemma-2b](https://huggingface.co/google/gemma-2b).
For this quantization, we used 2 codebooks of 8 bits.
Results (0-shot `acc`):
| Model | Quantization | WinoGrande | PiQA | HellaSwag | ArcE | ArcC | Model size, Gb |
|------|------|------|-------|-------|-------|------|------|
|gemma-2b| None | 0.6472 | 0.7715 | 0.5279 | 0.7403 | 0.4053| 5.0 |
| |2x8| 0.5801 | 0.6828 | 0.3891 | 0.5791 | 0.2534| 1.6 |
To learn more about the inference, as well as the information on how to quantize models yourself, please refer to the [official GitHub repo](https://github.com/Vahe1994/AQLM). |
ShubhamJain18/Taxi-v3 | ShubhamJain18 | 2024-03-10T07:21:56Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T07:21:52Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.50 +/- 2.70
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gym  # the saved model stores the env id used during training

# `load_from_hub` is the helper function defined in the notebook that produced this model (not a library import).
model = load_from_hub(repo_id="ShubhamJain18/Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Coldbrew9/Fine-tuning-gemma-singleQ | Coldbrew9 | 2024-03-10T07:20:05Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T07:19:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
daehan17/0310try1 | daehan17 | 2024-03-10T07:19:32Z | 30 | 1 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-03-10T07:03:29Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
inference: true
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text-to-image finetuning - daehan17/0310try1
This pipeline was finetuned from **stabilityai/stable-diffusion-xl-base-1.0** on the **lambdalabs/pokemon-blip-captions** dataset. Below are some example images generated with the finetuned pipeline using the following prompt: a cute donald trumpi creature:




Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
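As with the TODO above, a minimal, hypothetical sketch of running this pipeline with `diffusers`:
```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "daehan17/0310try1", torch_dtype=torch.float16
).to("cuda")

# Arbitrary example prompt in the style of the training captions.
image = pipe(prompt="a cute yellow creature", num_inference_steps=30).images[0]
image.save("sample.png")
```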
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
ShubhamJain18/q-FrozenLake-v1-4x4-noSlippery | ShubhamJain18 | 2024-03-10T07:14:42Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T07:14:38Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gym  # the saved model stores the env id used during training

# `load_from_hub` is the helper function defined in the notebook that produced this model (not a library import).
model = load_from_hub(repo_id="ShubhamJain18/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
stablediffusionapi/celebrim | stablediffusionapi | 2024-03-10T07:10:24Z | 29 | 2 | diffusers | [
"diffusers",
"modelslab.com",
"stable-diffusion-api",
"text-to-image",
"ultra-realistic",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionXLPipeline",
"region:us"
] | text-to-image | 2024-03-10T07:07:15Z | ---
license: creativeml-openrail-m
tags:
- modelslab.com
- stable-diffusion-api
- text-to-image
- ultra-realistic
pinned: true
---
# celebrim API Inference

## Get API Key
Get API key from [ModelsLab API](http://modelslab.com), No Payment needed.
Replace Key in below code, change **model_id** to "celebrim"
Coding in PHP/Node/Java etc? Have a look at docs for more code examples: [View docs](https://modelslab.com/docs)
Try model for free: [Generate Images](https://modelslab.com/models/celebrim)
Model link: [View model](https://modelslab.com/models/celebrim)
View all models: [View Models](https://modelslab.com/models)
```python
import requests
import json

url = "https://modelslab.com/api/v6/images/text2img"

payload = json.dumps({
  "key": "your_api_key",
  "model_id": "celebrim",
  "prompt": "ultra realistic close up portrait ((beautiful pale cyberpunk female with heavy black eyeliner)), blue eyes, shaved side haircut, hyper detail, cinematic lighting, magic neon, dark red city, Canon EOS R3, nikon, f/1.4, ISO 200, 1/160s, 8K, RAW, unedited, symmetrical balance, in-frame, 8K",
  "negative_prompt": "painting, extra fingers, mutated hands, poorly drawn hands, poorly drawn face, deformed, ugly, blurry, bad anatomy, bad proportions, extra limbs, cloned face, skinny, glitchy, double torso, extra arms, extra hands, mangled fingers, missing lips, ugly face, distorted face, extra legs, anime",
  "width": "512",
  "height": "512",
  "samples": "1",
  "num_inference_steps": "30",
  "safety_checker": "no",
  "enhance_prompt": "yes",
  "seed": None,
  "guidance_scale": 7.5,
  "multi_lingual": "no",
  "panorama": "no",
  "self_attention": "no",
  "upscale": "no",
  "embeddings": "embeddings_model_id",
  "lora": "lora_model_id",
  "webhook": None,
  "track_id": None
})

headers = {
  'Content-Type': 'application/json'
}

response = requests.request("POST", url, headers=headers, data=payload)

print(response.text)
```
> Use this coupon code to get 25% off **DMGG0RBN** |
Tawanmeansthesun/20000sumt5 | Tawanmeansthesun | 2024-03-10T07:05:55Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-10T04:50:45Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: 20000sumt5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20000sumt5
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3985
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5957 | 1.0 | 4000 | 2.4213 |
| 2.561 | 2.0 | 8000 | 2.3985 |
### Framework versions
- Transformers 4.17.0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Litzy619/V0309O2 | Litzy619 | 2024-03-10T07:00:36Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-10T02:54:44Z | ---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0309O2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0309O2
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.6792 | 0.09 | 10 | 0.1456 |
| 0.164 | 0.17 | 20 | 0.1075 |
| 0.1211 | 0.26 | 30 | 0.0749 |
| 0.1029 | 0.34 | 40 | 0.0726 |
| 0.099 | 0.43 | 50 | 0.0684 |
| 0.0915 | 0.51 | 60 | 0.0691 |
| 0.0824 | 0.6 | 70 | 0.0664 |
| 0.0898 | 0.68 | 80 | 0.0716 |
| 0.0815 | 0.77 | 90 | 0.0759 |
| 0.0806 | 0.85 | 100 | 0.0762 |
| 0.0789 | 0.94 | 110 | 0.0664 |
| 0.0775 | 1.02 | 120 | 0.0641 |
| 0.073 | 1.11 | 130 | 0.0737 |
| 0.0668 | 1.19 | 140 | 0.0677 |
| 0.0642 | 1.28 | 150 | 0.0684 |
| 0.0646 | 1.37 | 160 | 0.0724 |
| 0.062 | 1.45 | 170 | 0.0695 |
| 0.0601 | 1.54 | 180 | 0.0689 |
| 0.0651 | 1.62 | 190 | 0.0652 |
| 0.0604 | 1.71 | 200 | 0.0684 |
| 0.0635 | 1.79 | 210 | 0.0679 |
| 0.0567 | 1.88 | 220 | 0.0703 |
| 0.057 | 1.96 | 230 | 0.0690 |
| 0.0557 | 2.05 | 240 | 0.0711 |
| 0.0447 | 2.13 | 250 | 0.0707 |
| 0.0479 | 2.22 | 260 | 0.0735 |
| 0.0434 | 2.3 | 270 | 0.0753 |
| 0.0493 | 2.39 | 280 | 0.0721 |
| 0.0496 | 2.47 | 290 | 0.0708 |
| 0.0468 | 2.56 | 300 | 0.0709 |
| 0.0525 | 2.65 | 310 | 0.0709 |
| 0.0419 | 2.73 | 320 | 0.0713 |
| 0.047 | 2.82 | 330 | 0.0715 |
| 0.0436 | 2.9 | 340 | 0.0715 |
| 0.0474 | 2.99 | 350 | 0.0716 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
WokeEngineer/poca-SoccerTwos | WokeEngineer | 2024-03-10T07:00:07Z | 30 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"SoccerTwos",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SoccerTwos",
"region:us"
] | reinforcement-learning | 2024-03-10T06:57:51Z | ---
library_name: ml-agents
tags:
- SoccerTwos
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SoccerTwos
---
# **poca** Agent playing **SoccerTwos**
This is a trained model of a **poca** agent playing **SoccerTwos**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: WokeEngineer/poca-SoccerTwos
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
deliciouscat/kf-deberta-base-nli | deliciouscat | 2024-03-10T06:56:28Z | 90 | 1 | transformers | [
"transformers",
"safetensors",
"deberta-v2",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-10T06:56:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jeiku/Weekend_Project | jeiku | 2024-03-10T06:55:47Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:ResplendentAI/Paradigm_7B",
"base_model:merge:ResplendentAI/Paradigm_7B",
"base_model:jeiku/Theory_of_Mind_Mistral",
"base_model:merge:jeiku/Theory_of_Mind_Mistral",
"base_model:jeiku/Theory_of_Mind_Roleplay_Mistral",
"base_model:merge:jeiku/Theory_of_Mind_Roleplay_Mistral",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T06:44:31Z | ---
base_model:
- ResplendentAI/Paradigm_7B
- ResplendentAI/Paradigm_7B
- jeiku/Theory_of_Mind_Roleplay_Mistral
- ResplendentAI/Paradigm_7B
- jeiku/Theory_of_Mind_Mistral
library_name: transformers
tags:
- mergekit
- merge
---
# Weekend
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [ResplendentAI/Paradigm_7B](https://huggingface.co/ResplendentAI/Paradigm_7B) as a base.
### Models Merged
The following models were included in the merge:
* [ResplendentAI/Paradigm_7B](https://huggingface.co/ResplendentAI/Paradigm_7B) + [jeiku/Theory_of_Mind_Roleplay_Mistral](https://huggingface.co/jeiku/Theory_of_Mind_Roleplay_Mistral)
* [ResplendentAI/Paradigm_7B](https://huggingface.co/ResplendentAI/Paradigm_7B) + [jeiku/Theory_of_Mind_Mistral](https://huggingface.co/jeiku/Theory_of_Mind_Mistral)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: ResplendentAI/Paradigm_7B
parameters:
normalize: true
models:
- model: ResplendentAI/Paradigm_7B+jeiku/Theory_of_Mind_Roleplay_Mistral
parameters:
weight: 1
- model: ResplendentAI/Paradigm_7B+jeiku/Theory_of_Mind_Mistral
parameters:
weight: 1
dtype: float16
```
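As a sketch of how a config like this is typically materialized (assuming mergekit's Python API with `MergeConfiguration` and `run_merge`; the paths are placeholders, and this is not a command published by the author):
```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML shown above (saved locally as config.yaml) and run the DARE-TIES merge.
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./merged-model",  # placeholder output directory
    options=MergeOptions(cuda=True, copy_tokenizer=True),
)
```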
|
hwkwon/S-SOLAR-10.7B-SFT-v1.2 | hwkwon | 2024-03-10T06:55:30Z | 61 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T06:45:22Z | ---
license: cc-by-nc-4.0
language:
- ko
---
# S-SOLAR-10.7B
<!-- Provide a quick summary of what the model is/does. -->
<!--This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).-->
### Model Description
<!-- Provide a longer summary of what this model is. -->
This model is a fine-tuned version of [Edentns/DataVortexS-10.7B-dpo-v1.11](https://huggingface.co/Edentns/DataVortexS-10.7B-dpo-v1.11) with DeepSpeed.
### Training Data
TBA
### Prompt Template
```
### User: User query input
### Assistant:
```
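As a rough illustration of applying this template with the 🤗 transformers pipeline (the query and sampling settings are placeholders, not values recommended by the author):
```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="hwkwon/S-SOLAR-10.7B-SFT-v1.2",
    device_map="auto",
)

# Wrap the user query in the documented template.
prompt = "### User: Please introduce yourself.\n### Assistant:"
output = generator(prompt, max_new_tokens=256, do_sample=True, temperature=0.7)
print(output[0]["generated_text"])
```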
### License
This model is licensed under CC BY-NC 4.0, which allows others to share and adapt the model for non-commercial purposes.
|
Virt-io/Deris-SSS-7B | Virt-io | 2024-03-10T06:53:46Z | 18 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-09T08:05:03Z | ---
library_name: transformers
tags:
- mergekit
- merge
- mistral
---
<img src="https://huggingface.co/Virt-io/Deris-SSS-7B/resolve/main/README.png">
---
# Tips
SillyTavern presets are in the [presets folder](https://huggingface.co/Virt-io/Deris-SSS-7B/tree/main/presets). (Thanks to [Lewdiculous](https://huggingface.co/Lewdiculous) for ideas.)
The model has formatting issues when using asterisks; novel-like formatting (quotes only) is recommended.
The system prompt can be improved; help is welcome.
The model can take characters too seriously. If you find it too stubborn, regenerate the reply or edit it; it should comply afterward. (This seems to be fixed now; it appears to have been caused by a bad system prompt.)
---
# Models Used
[ChaoticNeutrals/Eris_Floramix_DPO_7B](https://huggingface.co/ChaoticNeutrals/Eris_Floramix_DPO_7B)
[ResplendentAI/Datura_7B](https://huggingface.co/ResplendentAI/Datura_7B)
[FuseAI/FuseChat-7B-VaRM](https://huggingface.co/FuseAI/FuseChat-7B-VaRM)
[Weyaxi/Einstein-v4-7B](https://huggingface.co/Weyaxi/Einstein-v4-7B)
[ogno-monarch-jaskier-merge-7b-OH-PREF-DPO](https://huggingface.co/eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO)
---
# Deris-SSS
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* Deris-v2
* SmartyPants-step2
### Configuration
The following YAML configuration was used to produce this model:
### Deris-SSS
The final merge combines the smart models with the unhinged ones.
```yaml
slices:
- sources:
- model: ./Mergekit/Deris-v2
layer_range: [0, 32]
- model: ./Mergekit/SmartyPants-step2
layer_range: [0, 32]
merge_method: slerp
base_model: ./Mergekit/Deris-v2
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: float16
```
### Deris-v2
Combines a bit of Datura_7B into Eris_Floramix_DPO_7B.
Reason: Datura is extremely unhinged :), even more so than Eris.
```yaml
slices:
- sources:
- model: ChaoticNeutrals/Eris_Floramix_DPO_7B
layer_range: [0, 32]
- model: ResplendentAI/Datura_7B
layer_range: [0, 32]
merge_method: slerp
base_model: ChaoticNeutrals/Eris_Floramix_DPO_7B
parameters:
t:
- filter: self_attn
value: [0, 0.20, 0.15, 0.25, 0.35]
- filter: mlp
value: [0.35, 0.20, 0.25, 0.15, 0]
- value: 0.20
dtype: float16
```
### SmartyPants-step1
Combines OMJ into Einstein.
Reason: Einstein looks interesting and OMJ was a high-ranking model.
```yaml
slices:
- sources:
- model: Weyaxi/Einstein-v4-7B
layer_range: [0, 32]
- model: eren23/ogno-monarch-jaskier-merge-7b-OH-PREF-DPO
layer_range: [0, 32]
merge_method: slerp
base_model: Weyaxi/Einstein-v4-7B
parameters:
t:
- filter: self_attn
value: [0, 0.45, 0.3, 0.55, 0.65]
- filter: mlp
value: [0.65, 0.45, 0.55, 0.3, 0]
- value: 0.45
dtype: float16
```
### SmartyPants-step2
Combines SmartyPants into FuseChat-VaRM.
Reason: IDK, I just like FuseChat-VaRM.
```yaml
slices:
- sources:
- model: FuseAI/FuseChat-7B-VaRM
layer_range: [0, 32]
- model: ./Mergekit/SmartyPants-step1
layer_range: [0, 32]
merge_method: slerp
base_model: FuseAI/FuseChat-7B-VaRM
parameters:
t:
- filter: self_attn
value: [0, 0.45, 0.3, 0.55, 0.65]
- filter: mlp
value: [0.65, 0.45, 0.55, 0.3, 0]
- value: 0.45
dtype: float16
```
|
jjlee6496/KoSOLAR-10.7B-v0.2-finetuned-GGUF | jjlee6496 | 2024-03-10T06:50:27Z | 0 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T05:29:10Z | ---
license: apache-2.0
---
## Base Model
- [jjlee6496/KoSOLAR-10.7B-v0.2-finetuned](https://huggingface.co/jjlee6496/KoSOLAR-10.7B-v0.2-finetuned) |
PrabathDamarla/my-pet-dog-dsp | PrabathDamarla | 2024-03-10T06:40:56Z | 1 | 2 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-10T06:37:14Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog-dsp Dreambooth model trained by PrabathDamarla following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: GoX19932gAS
Sample pictures of this concept:
.jpg)
.jpg)
.jpg)
.jpg)
.jpg)
|
Litzy619/V0309B1 | Litzy619 | 2024-03-10T06:40:22Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:microsoft/phi-2",
"base_model:finetune:microsoft/phi-2",
"license:mit",
"region:us"
] | null | 2024-03-10T02:58:37Z | ---
license: mit
base_model: microsoft/phi-2
tags:
- generated_from_trainer
model-index:
- name: V0309B1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0309B1
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0618
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5503 | 0.09 | 10 | 1.8060 |
| 0.926 | 0.17 | 20 | 0.1557 |
| 0.1416 | 0.26 | 30 | 0.0878 |
| 0.1055 | 0.34 | 40 | 0.0739 |
| 0.1001 | 0.43 | 50 | 0.0704 |
| 0.0863 | 0.51 | 60 | 0.0660 |
| 0.0819 | 0.6 | 70 | 0.0676 |
| 0.0838 | 0.68 | 80 | 0.0638 |
| 0.0736 | 0.77 | 90 | 0.0636 |
| 0.0766 | 0.85 | 100 | 0.0610 |
| 0.0787 | 0.94 | 110 | 0.0607 |
| 0.076 | 1.02 | 120 | 0.0604 |
| 0.0738 | 1.11 | 130 | 0.0619 |
| 0.0711 | 1.19 | 140 | 0.0583 |
| 0.068 | 1.28 | 150 | 0.0573 |
| 0.0696 | 1.37 | 160 | 0.0606 |
| 0.068 | 1.45 | 170 | 0.0610 |
| 0.0637 | 1.54 | 180 | 0.0596 |
| 0.0678 | 1.62 | 190 | 0.0583 |
| 0.066 | 1.71 | 200 | 0.0594 |
| 0.0679 | 1.79 | 210 | 0.0586 |
| 0.0632 | 1.88 | 220 | 0.0605 |
| 0.0606 | 1.96 | 230 | 0.0606 |
| 0.0622 | 2.05 | 240 | 0.0611 |
| 0.0578 | 2.13 | 250 | 0.0610 |
| 0.0562 | 2.22 | 260 | 0.0627 |
| 0.0507 | 2.3 | 270 | 0.0659 |
| 0.0615 | 2.39 | 280 | 0.0642 |
| 0.06 | 2.47 | 290 | 0.0627 |
| 0.0588 | 2.56 | 300 | 0.0619 |
| 0.0626 | 2.65 | 310 | 0.0614 |
| 0.053 | 2.73 | 320 | 0.0618 |
| 0.0567 | 2.82 | 330 | 0.0616 |
| 0.0525 | 2.9 | 340 | 0.0619 |
| 0.057 | 2.99 | 350 | 0.0618 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
cslilingfei/sdxl_demo | cslilingfei | 2024-03-10T06:39:51Z | 6 | 2 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"lora",
"template:sd-lora",
"diffusers-training",
"dora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-02-21T07:03:15Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- diffusers
- lora
- template:sd-lora
- diffusers-training
- dora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK dog
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - cslilingfei/sdxl_demo
<Gallery />
## Model description
These are cslilingfei/sdxl_demo LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK dog` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/cslilingfei/sdxl_demo/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
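A minimal sketch of one way to run these LoRA weights with diffusers, assuming the standard SDXL LoRA-loading API; the prompt, dtype, and step count below are illustrative choices, not settings documented for this checkpoint.
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

# Use the same fp16-fix VAE that was used during training (see above).
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Load the DreamBooth LoRA weights from this repository.
pipe.load_lora_weights("cslilingfei/sdxl_demo")

image = pipe("a photo of TOK dog on a beach", num_inference_steps=25).images[0]
image.save("tok_dog.png")
```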
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
delayedkarma/NeuralHermes-2.5-Mistral-7B | delayedkarma | 2024-03-10T06:33:47Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"instruct",
"finetune",
"chatml",
"gpt4",
"synthetic data",
"distillation",
"dpo",
"rlhf",
"conversational",
"en",
"dataset:Intel/orca_dpo_pairs",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:finetune:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T05:50:36Z | ---
language:
- en
license: apache-2.0
tags:
- mistral
- instruct
- finetune
- chatml
- gpt4
- synthetic data
- distillation
- dpo
- rlhf
datasets:
- Intel/orca_dpo_pairs
base_model: teknium/OpenHermes-2.5-Mistral-7B
---
### Credits: Maxime Labonne https://towardsdatascience.com/fine-tune-a-mistral-7b-model-with-direct-preference-optimization-708042745aac
(With minor alterations)
# NeuralHermes 2.5 - Mistral 7B
NeuralHermes is based on the [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) model that has been further fine-tuned with Direct Preference Optimization (DPO) using the [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs) dataset.
## Usage
You can run this model using the following code:
```python
import transformers
from transformers import AutoTokenizer

model_id = "delayedkarma/NeuralHermes-2.5-Mistral-7B"  # this repository

# Format prompt
message = [
    {"role": "system", "content": "You are a helpful assistant chatbot."},
    {"role": "user", "content": "What is a Large Language Model?"}
]
tokenizer = AutoTokenizer.from_pretrained(model_id)
prompt = tokenizer.apply_chat_template(message, add_generation_prompt=True, tokenize=False)

# Create pipeline
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer
)
# Generate text
sequences = pipeline(
prompt,
do_sample=True,
temperature=0.7,
top_p=0.9,
num_return_sequences=1,
max_length=200,
)
print(sequences[0]['generated_text'])
```
## Training hyperparameters
**LoRA**:
* r=16
* lora_alpha=16
* lora_dropout=0.05
* bias="none"
* task_type="CAUSAL_LM"
* target_modules=['k_proj', 'gate_proj', 'v_proj', 'up_proj', 'q_proj', 'o_proj', 'down_proj']
**Training arguments**:
* per_device_train_batch_size=2 # Changed from 4
* gradient_accumulation_steps=4
* gradient_checkpointing=True
* learning_rate=2e-5 # Changed from 5e-5
* lr_scheduler_type="cosine"
* max_steps=250 # Changed from 200
* optim="paged_adamw_32bit"
* warmup_steps=100
**DPOTrainer**:
* beta=0.1
* max_prompt_length=1024
* max_length=1536 |
Bienvenu2004/donut-base-pv-aws | Bienvenu2004 | 2024-03-10T06:33:01Z | 10 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-03-09T16:23:50Z | ---
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-pv-aws
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-pv-aws
This model was trained from scratch on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Zangs3011/sdxl_finetune_1 | Zangs3011 | 2024-03-10T06:29:43Z | 1 | 1 | diffusers | [
"diffusers",
"text-to-image",
"autotrain",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:finetune:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | 2024-03-10T05:40:03Z |
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: A wolf houling in the mountains
tags:
- text-to-image
- diffusers
- autotrain
inference: true
---
# DreamBooth trained by AutoTrain
Text encoder was not trained.
|
MTSAIR/multi_verse_model | MTSAIR | 2024-03-10T06:23:10Z | 6,726 | 35 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-07T07:26:10Z | ---
license: apache-2.0
language:
- en
---
I'm an innovative concept, created through a cutting-edge training method. Picture me as a "learning bot" who's had a special upgrade. Just like how a chef perfects their recipes with new techniques, my creators have fine-tuned my "knowledge-absorption" process. I'm here to showcase the potential of this new approach, and I'm excited to test my abilities in a friendly, helpful manner. So, while I may be a product of experimentation, my purpose is to demonstrate the power of continuous learning and growth in the world of artificial intelligence. |
DarshanDeshpande/distilbert_social_reasoning_reward_model | DarshanDeshpande | 2024-03-10T06:22:38Z | 97 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T06:08:56Z | ---
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- trl
- reward-trainer
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert_social_reasoning_reward_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_social_reasoning_reward_model
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6309
- Accuracy: 0.6958
## Model description
More information needed
## Intended uses & limitations
More information needed
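Because this checkpoint was trained with TRL's `RewardTrainer`, it should load as a single-logit sequence classifier whose output ranks candidate responses; the sketch below makes that assumption, and the example texts are invented.
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "DarshanDeshpande/distilbert_social_reasoning_reward_model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

def reward(text: str) -> float:
    """Return the scalar reward score for a candidate response."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        return model(**inputs).logits[0].item()

# Scores are only meaningful relative to each other: higher = preferred.
print(reward("Thank them for the feedback and explain your reasoning calmly."))
print(reward("Ignore them and walk away without a word."))
```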
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6618 | 0.24 | 10 | 0.6505 | 0.6725 |
| 0.6357 | 0.48 | 20 | 0.6373 | 0.6497 |
| 0.6457 | 0.72 | 30 | 0.6226 | 0.6725 |
| 0.646 | 0.96 | 40 | 0.6437 | 0.6778 |
| 0.6448 | 1.2 | 50 | 0.7565 | 0.6287 |
| 0.6339 | 1.44 | 60 | 0.6365 | 0.6655 |
| 0.6207 | 1.68 | 70 | 0.6694 | 0.6778 |
| 0.6217 | 1.92 | 80 | 0.6351 | 0.6340 |
| 0.5928 | 2.16 | 90 | 0.7245 | 0.6497 |
| 0.5938 | 2.4 | 100 | 0.6739 | 0.6497 |
| 0.5873 | 2.63 | 110 | 0.6811 | 0.6357 |
| 0.5442 | 2.87 | 120 | 0.6774 | 0.6375 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
ZainAli60/llama_miner | ZainAli60 | 2024-03-10T06:19:05Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bart",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-09T14:52:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Blackphi6/testdesudesulargelarge_33333320 | Blackphi6 | 2024-03-10T06:18:04Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:openai/whisper-large-v2",
"base_model:adapter:openai/whisper-large-v2",
"license:apache-2.0",
"region:us"
] | null | 2024-03-10T06:17:58Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: openai/whisper-large-v2
model-index:
- name: testdesudesulargelarge_33333320
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# testdesudesulargelarge_33333320
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9514
## Model description
More information needed
## Intended uses & limitations
More information needed
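Given the `peft` library tag and the whisper-large-v2 base model, this repository presumably holds adapter weights that are attached on top of the base model; a sketch under that assumption (the fine-tune's language and task are not documented):
```python
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2", torch_dtype=torch.float16, device_map="auto"
)
# Attach the fine-tuned adapter weights from this repository.
model = PeftModel.from_pretrained(base, "Blackphi6/testdesudesulargelarge_33333320")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")
# From here, run transcription exactly as with the base Whisper model.
```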
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 2 | 1.0521 |
| No log | 2.0 | 4 | 1.0009 |
| No log | 3.0 | 6 | 0.9514 |
### Framework versions
- PEFT 0.9.1.dev0
- Transformers 4.39.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.18.1.dev0
- Tokenizers 0.15.2 |
fhai50032/RP-Coder-SM3 | fhai50032 | 2024-03-10T06:13:05Z | 73 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-09T19:59:50Z | ---
library_name: transformers
license: apache-2.0
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
davidhaas6/chats-XmlUser_NoPrefix-mistral7b-4bit | davidhaas6 | 2024-03-10T06:11:34Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"unsloth",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T04:39:29Z | ---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
choyiny/opt-350m-lora-merchant-finetune | choyiny | 2024-03-10T05:55:26Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T05:55:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
sthiruna/videomae-base-finetuned-ucf101-subset | sthiruna | 2024-03-10T05:38:02Z | 48 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:MCG-NJU/videomae-base",
"base_model:finetune:MCG-NJU/videomae-base",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2024-03-09T09:21:46Z | ---
license: cc-by-nc-4.0
base_model: MCG-NJU/videomae-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4568
- Accuracy: 0.8774
## Model description
More information needed
## Intended uses & limitations
More information needed
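For illustration only (the clip path is a placeholder and the labels come from the UCF101 subset used for fine-tuning), inference might look like:
```python
from transformers import pipeline

# The video-classification pipeline needs a video backend (e.g. decord or PyAV) installed.
classifier = pipeline(
    "video-classification",
    model="sthiruna/videomae-base-finetuned-ucf101-subset",
)

predictions = classifier("path/to/clip.mp4")  # placeholder path
for p in predictions:
    print(f"{p['label']}: {p['score']:.3f}")
```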
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 148
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1155 | 0.26 | 38 | 1.7390 | 0.6 |
| 0.8482 | 1.26 | 76 | 0.9544 | 0.6857 |
| 0.4371 | 2.26 | 114 | 0.4790 | 0.8857 |
| 0.2803 | 3.23 | 148 | 0.3659 | 0.9143 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
WYNN747/Bur-GPT-wiki-march6_v1-2500 | WYNN747 | 2024-03-10T05:28:18Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T05:23:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nadika/wav2vec2-nepali-v2-try-2 | nadika | 2024-03-10T05:16:01Z | 61 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-03-10T04:44:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ResplendentAI/Paradigm_7B | ResplendentAI | 2024-03-10T05:15:55Z | 118 | 3 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"en",
"dataset:ResplendentAI/Alpaca_NSFW_Shuffled",
"dataset:unalignment/toxic-dpo-v0.2",
"base_model:ChaoticNeutrals/Eris_Remix_7B",
"base_model:merge:ChaoticNeutrals/Eris_Remix_7B",
"base_model:ResplendentAI/Datura_7B",
"base_model:merge:ResplendentAI/Datura_7B",
"base_model:jeiku/Alpaca_NSFW_Shuffled_Mistral",
"base_model:merge:jeiku/Alpaca_NSFW_Shuffled_Mistral",
"base_model:liminerity/M7-7b",
"base_model:merge:liminerity/M7-7b",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T01:04:13Z | ---
language:
- en
license: cc-by-sa-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- liminerity/Multiverse-Experiment-slerp-7b
- jeiku/Alpaca_NSFW_Shuffled_Mistral
- ResplendentAI/Datura_7B
- ChaoticNeutrals/Eris_Remix_7B
datasets:
- ResplendentAI/Alpaca_NSFW_Shuffled
- unalignment/toxic-dpo-v0.2
model-index:
- name: Paradigm_7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 73.63
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.66
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.02
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 75.19
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.53
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 66.79
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Paradigm_7B
name: Open LLM Leaderboard
---
# Paradigm

An incredibly effective and intelligent RP model designed to be the best bot you've ever used. I hope you like it!
GGUF available here: https://huggingface.co/Lewdiculous/Paradigm_7B-GGUF-IQ-Imatrix
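The full-precision weights load like any other Mistral-architecture checkpoint; here is a minimal sketch with 🤗 Transformers (the Alpaca-style prompt below is an assumption, not a format stated on this card):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "ResplendentAI/Paradigm_7B"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16, device_map="auto")

# Assumed Alpaca-style prompt; swap in whatever instruct/RP template you prefer.
prompt = "### Instruction:\nIntroduce yourself in one sentence.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```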
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ResplendentAI__Paradigm_7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |75.47|
|AI2 Reasoning Challenge (25-Shot)|73.63|
|HellaSwag (10-Shot) |88.66|
|MMLU (5-Shot) |64.02|
|TruthfulQA (0-shot) |75.19|
|Winogrande (5-shot) |84.53|
|GSM8k (5-shot) |66.79|
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
base_model: ChaoticNeutrals/Eris_Remix_7B
parameters:
normalize: true
models:
- model: ChaoticNeutrals/Eris_Remix_7B
parameters:
weight: 1
- model: ResplendentAI/Datura_7B
parameters:
weight: 1
- model: liminerity/Multiverse-Experiment-slerp-7b+jeiku/Alpaca_NSFW_Shuffled_Mistral
parameters:
weight: 0.33
dtype: float16
```
|
sheepymeh/metavoice-1B-v0.1 | sheepymeh | 2024-03-10T05:14:45Z | 9 | 1 | metavoice | [
"metavoice",
"pretrained",
"text-to-speech",
"en",
"license:apache-2.0",
"region:us"
] | text-to-speech | 2024-03-06T05:37:57Z | ---
license: apache-2.0
language:
- en
tags:
- pretrained
- text-to-speech
library_name: metavoice
inference: false
pipeline_tag: text-to-speech
---
Converted safetensors version of [metavoiceio/metavoice-1B-v0.1](https://huggingface.co/metavoiceio/metavoice-1B-v0.1)
# Usage
1. Install the original model code by running `metavoice.sh`
2. Run the following code:
```python
from metavoice import MetaVoiceModel
model = MetaVoiceModel("sheepymeh/metavoice-1B-v0.1")
model.generate("Hello world!")
``` |
Aryanshanu/LunarLander | Aryanshanu | 2024-03-10T05:13:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T04:54:58Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: Proximal Policy Optimization
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 250.09 +/- 12.60
name: mean_reward
verified: false
---
# **Proximal Policy Optimization** Agent playing **LunarLander-v2**
This is a trained model of a **Proximal Policy Optimization** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal example of downloading the checkpoint from the Hub and loading it with Stable-Baselines3 (the file name is an assumption; adjust it to the checkpoint stored in this repository):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumption: adjust the filename to the .zip checkpoint actually stored in this repo.
checkpoint = load_from_hub(repo_id="Aryanshanu/LunarLander", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
Tobiasperel/version1 | Tobiasperel | 2024-03-10T05:06:54Z | 2 | 1 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-03-10T05:04:14Z | ---
license: openrail++
library_name: diffusers
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of TOK person
widget: []
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - Tobiasperel/corgy_dog_1_LoRA
<Gallery />
## Model description
These are Tobiasperel/corgy_dog_1_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `a photo of TOK person` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](Tobiasperel/corgy_dog_1_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
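A minimal sketch of loading these LoRA weights with 🤗 Diffusers; the pipeline class, precision, and step count are assumptions rather than settings taken from this card:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("Tobiasperel/corgy_dog_1_LoRA")

# The trigger phrase comes from the "Trigger words" section above.
image = pipeline("a photo of TOK person", num_inference_steps=30).images[0]
image.save("tok_person.png")
```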
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
nocudaexe/Neural_Waifu_7b_V0.1 | nocudaexe | 2024-03-10T05:06:03Z | 13 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:KatyTheCutie/SlushySlerp-7B",
"base_model:merge:KatyTheCutie/SlushySlerp-7B",
"base_model:Nitral-AI/Kunocchini-7b-128k-test",
"base_model:merge:Nitral-AI/Kunocchini-7b-128k-test",
"base_model:mlabonne/AlphaMonarch-7B",
"base_model:merge:mlabonne/AlphaMonarch-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T04:15:57Z | ---
base_model:
- Test157t/Kunocchini-7b-128k-test
- mlabonne/AlphaMonarch-7B
- KatyTheCutie/SlushySlerp-7B
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) as the base.
### Models Merged
The following models were included in the merge:
* [Test157t/Kunocchini-7b-128k-test](https://huggingface.co/Test157t/Kunocchini-7b-128k-test)
* [KatyTheCutie/SlushySlerp-7B](https://huggingface.co/KatyTheCutie/SlushySlerp-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: mlabonne/AlphaMonarch-7B
# No parameters necessary for base model
- model: Test157t/Kunocchini-7b-128k-test
parameters:
density: 0.53
weight: 0.3
- model: KatyTheCutie/SlushySlerp-7B
parameters:
density: 0.53
weight: 0.3
merge_method: dare_ties
base_model: mlabonne/AlphaMonarch-7B
parameters:
int8_mask: true
dtype: bfloat16
```
|
zzttbrdd/sn6_02_new | zzttbrdd | 2024-03-10T05:05:02Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T05:03:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ResplendentAI/Flora_7B | ResplendentAI | 2024-03-10T05:00:59Z | 113 | 7 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"finetune",
"en",
"dataset:ResplendentAI/Synthetic_Soul_1k",
"base_model:jeiku/FloraBase",
"base_model:finetune:jeiku/FloraBase",
"license:cc-by-sa-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-05T03:15:19Z | ---
language:
- en
license: cc-by-sa-4.0
library_name: transformers
tags:
- finetune
base_model:
- jeiku/FloraBase
- jeiku/Synthetic_Soul_1k_Mistral_128
datasets:
- ResplendentAI/Synthetic_Soul_1k
model-index:
- name: Flora_7B
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 72.1
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 88.31
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.16
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 71.19
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 84.45
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_7B
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 65.35
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ResplendentAI/Flora_7B
name: Open LLM Leaderboard
---
# Flora

Quants available here:
https://huggingface.co/bartowski/Flora_7B-GGUF
https://huggingface.co/bartowski/Flora_7B-exl2
https://huggingface.co/solidrust/Flora-7B-AWQ
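The GGUF quantizations can be run locally with llama.cpp; below is a minimal llama-cpp-python sketch, where the file name is an assumption; use whichever quantization you downloaded from the GGUF repository.

```python
from llama_cpp import Llama

# Assumption: a quantized file such as this one was downloaded from bartowski/Flora_7B-GGUF.
llm = Llama(model_path="Flora_7B-Q4_K_M.gguf", n_ctx=4096)
out = llm("Write a short poem about flowers.", max_tokens=96, temperature=0.8)
print(out["choices"][0]["text"])
```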
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: linear
models:
- model: jeiku/FloraBase+jeiku/Synthetic_Soul_1k_Mistral_128
parameters:
weight: 1
dtype: float16
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_ResplendentAI__Flora_7B)
| Metric |Value|
|---------------------------------|----:|
|Avg. |74.26|
|AI2 Reasoning Challenge (25-Shot)|72.10|
|HellaSwag (10-Shot) |88.31|
|MMLU (5-Shot) |64.16|
|TruthfulQA (0-shot) |71.19|
|Winogrande (5-shot) |84.45|
|GSM8k (5-shot) |65.35|
|
glaucoma/falcon_openorca_truly | glaucoma | 2024-03-10T04:53:43Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T04:53:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
llm-finetune/lora_model | llm-finetune | 2024-03-10T04:51:59Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma",
"trl",
"en",
"base_model:unsloth/gemma-2b-bnb-4bit",
"base_model:finetune:unsloth/gemma-2b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T04:51:46Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma
- trl
base_model: unsloth/gemma-2b-bnb-4bit
---
# Uploaded model
- **Developed by:** llm-finetune
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2b-bnb-4bit
This gemma model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
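A rough sketch of loading the adapters for inference, assuming this repository stores PEFT/LoRA adapter weights on top of the 4-bit Gemma base (the file layout is not confirmed on this card):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: adapters in this repo apply on top of the quantized base model below.
base = AutoModelForCausalLM.from_pretrained("unsloth/gemma-2b-bnb-4bit", device_map="auto")
model = PeftModel.from_pretrained(base, "llm-finetune/lora_model")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-2b-bnb-4bit")

inputs = tokenizer("Question: What does LoRA do?\nAnswer:", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```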
|
Tawanmeansthesun/newsumt5 | Tawanmeansthesun | 2024-03-10T04:43:36Z | 94 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-10T04:35:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: newsumt5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# newsumt5
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6307
## Model description
More information needed
## Intended uses & limitations
More information needed
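A minimal summarization sketch with the 🤗 pipeline API; the input text below is only an illustrative placeholder:

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Tawanmeansthesun/newsumt5")

article = (
    "The city council met on Monday to discuss the new transit plan. "
    "Officials said construction of the first line could begin next year, "
    "pending approval of the budget and an environmental review."
)
print(summarizer(article, max_length=48, min_length=10)[0]["summary_text"])
```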
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 4 | 3.6778 |
| No log | 2.0 | 8 | 3.6307 |
### Framework versions
- Transformers 4.17.0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Locutusque/ChatHercules-2.5-Mistral-7B-DPO | Locutusque | 2024-03-10T04:38:10Z | 99 | 5 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"en",
"dataset:argilla/distilabel-intel-orca-dpo-pairs",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T00:43:13Z | ---
library_name: transformers
tags:
- merge
license: apache-2.0
datasets:
- argilla/distilabel-intel-orca-dpo-pairs
language:
- en
---
# ChatHercules-2.5-Mistral-7B-DPO

ChatHercules-2.5-Mistral-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Locutusque/Hercules-2.5-Mistral-7B](https://huggingface.co/Locutusque/Hercules-2.5-Mistral-7B)
* [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)
The merged model was then fine-tuned with DPO on 20% of argilla/distilabel-intel-orca-dpo-pairs.
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Locutusque/Hercules-2.5-Mistral-7B
layer_range: [0, 32]
- model: openchat/openchat-3.5-0106
layer_range: [0, 32]
merge_method: slerp
base_model: Locutusque/Hercules-2.5-Mistral-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "hydra-project/ChatHercules-2.5-Mistral-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
## Evaluation results


|
HachiML/myBit-Llama2-jp-127M-test-1 | HachiML | 2024-03-10T04:25:27Z | 38 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T03:53:13Z | ---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- generated_from_trainer
model-index:
- name: myBit-Llama2-jp-127M-test-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# myBit-Llama2-jp-127M-test-1
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 10.6136
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0024
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: polynomial
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.3834 | 0.04 | 200 | 6.5397 |
| 6.8679 | 0.07 | 400 | 9.8549 |
| 10.2542 | 0.11 | 600 | 10.3642 |
| 10.3959 | 0.14 | 800 | 10.4168 |
| 10.4303 | 0.18 | 1000 | 10.4429 |
| 10.4527 | 0.22 | 1200 | 10.4638 |
| 10.4744 | 0.25 | 1400 | 10.4837 |
| 10.4907 | 0.29 | 1600 | 10.4981 |
| 10.5032 | 0.32 | 1800 | 10.5069 |
| 10.5134 | 0.36 | 2000 | 10.5165 |
| 10.5208 | 0.4 | 2200 | 10.5264 |
| 10.5284 | 0.43 | 2400 | 10.5324 |
| 10.535 | 0.47 | 2600 | 10.5372 |
| 10.541 | 0.51 | 2800 | 10.5445 |
| 10.5472 | 0.54 | 3000 | 10.5498 |
| 10.5532 | 0.58 | 3200 | 10.5561 |
| 10.5588 | 0.61 | 3400 | 10.5614 |
| 10.5647 | 0.65 | 3600 | 10.5672 |
| 10.5698 | 0.69 | 3800 | 10.5727 |
| 10.5753 | 0.72 | 4000 | 10.5760 |
| 10.5809 | 0.76 | 4200 | 10.5834 |
| 10.5864 | 0.79 | 4400 | 10.5892 |
| 10.5919 | 0.83 | 4600 | 10.5946 |
| 10.5971 | 0.87 | 4800 | 10.5995 |
| 10.6027 | 0.9 | 5000 | 10.6047 |
| 10.6076 | 0.94 | 5200 | 10.6105 |
| 10.6126 | 0.97 | 5400 | 10.6136 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
OwOOwO/eacc_as_dpy | OwOOwO | 2024-03-10T04:24:54Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T04:15:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Tawanmeansthesun/sumt5 | Tawanmeansthesun | 2024-03-10T04:19:25Z | 92 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-10T02:46:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: sumt5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sumt5
This model is a fine-tuned version of [google-t5/t5-base](https://huggingface.co/google-t5/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2989
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 4 | 2.4082 |
| 4.0262 | 2.0 | 8 | 2.2989 |
### Framework versions
- Transformers 4.17.0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Youssef1234/whisper-base-en | Youssef1234 | 2024-03-10T04:14:42Z | 61 | 1 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-base.en",
"base_model:finetune:openai/whisper-base.en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-03-09T17:30:47Z | ---
license: apache-2.0
base_model: openai/whisper-base.en
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-base-en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-base-en
This model is a fine-tuned version of [openai/whisper-base.en](https://huggingface.co/openai/whisper-base.en) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3891
- Wer: 25.9111
## Model description
More information needed
## Intended uses & limitations
More information needed
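A minimal transcription sketch with the 🤗 pipeline API; the audio path is a placeholder for any local English recording:

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="Youssef1234/whisper-base-en")

# Replace with a path to your own audio file; long recordings can be processed in chunks.
result = asr("sample.wav", chunk_length_s=30)
print(result["text"])
```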
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2462 | 1.39 | 1000 | 0.3726 | 25.4293 |
| 0.1183 | 2.78 | 2000 | 0.3571 | 24.0966 |
| 0.054 | 4.17 | 3000 | 0.3811 | 25.8318 |
| 0.0348 | 5.56 | 4000 | 0.3891 | 25.9111 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.1
|
lilyray/bert_emotion | lilyray | 2024-03-10T04:06:49Z | 92 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T04:06:20Z | ---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: bert_emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9594375
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_emotion
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0922
- Accuracy: 0.9594
## Model description
More information needed
## Intended uses & limitations
More information needed
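A minimal classification sketch with the 🤗 pipeline API; the predicted label names depend on the id2label mapping stored in this checkpoint's config:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="lilyray/bert_emotion")

print(classifier("I can't believe how happy this makes me!"))
print(classifier("I'm so nervous about tomorrow's exam.", top_k=None))  # scores for every class
```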
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2371 | 1.0 | 2000 | 0.2012 | 0.926 |
| 0.1461 | 2.0 | 4000 | 0.1714 | 0.939 |
| 0.0978 | 3.0 | 6000 | 0.1956 | 0.9365 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Jerome-Dej/camembert_classif_rte | Jerome-Dej | 2024-03-10T04:06:43Z | 173 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"camembert",
"text-classification",
"generated_from_trainer",
"base_model:almanach/camembert-base-wikipedia-4gb",
"base_model:finetune:almanach/camembert-base-wikipedia-4gb",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-01T15:46:06Z | ---
base_model: camembert/camembert-base-wikipedia-4gb
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: camembert_classif_rte
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# camembert_classif_rte
This model is a fine-tuned version of [camembert/camembert-base-wikipedia-4gb](https://huggingface.co/camembert/camembert-base-wikipedia-4gb) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2278
- Accuracy: 0.9653
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.163 | 1.0 | 16000 | 0.1371 | 0.9597 |
| 0.1374 | 2.0 | 32000 | 0.1505 | 0.9633 |
| 0.0877 | 3.0 | 48000 | 0.1622 | 0.9642 |
| 0.0491 | 4.0 | 64000 | 0.1940 | 0.9644 |
| 0.0143 | 5.0 | 80000 | 0.2278 | 0.9653 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
shamekhjr/dqn-SpaceInvadersNoFrameskip-v4 | shamekhjr | 2024-03-10T04:05:49Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-10T03:58:09Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 602.50 +/- 123.13
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga shamekhjr -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga shamekhjr -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga shamekhjr
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 50000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
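Beyond the RL Zoo CLI shown above, the checkpoint can also be loaded directly in Python. The sketch below is illustrative only: it assumes the `huggingface_sb3` helper package and that the zip file in this repository follows the RL Zoo naming convention (`dqn-SpaceInvadersNoFrameskip-v4.zip`); check the Files tab if the name differs.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the checkpoint from the Hub (filename assumed from the RL Zoo convention)
checkpoint = load_from_hub(
    repo_id="shamekhjr/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)

# To evaluate the agent, build the environment with the same AtariWrapper and
# 4-frame stacking listed in the hyperparameters below.
```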
|
Huggs108Bj/EN_NE_Transformer2 | Huggs108Bj | 2024-03-10T04:05:04Z | 1 | 0 | transformers | [
"transformers",
"m2m_100",
"text2text-generation",
"nllb",
"translation",
"en",
"npi",
"dataset:flores-200",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"region:us"
] | translation | 2024-03-10T03:59:43Z | ---
language:
- en
- npi
language_details: "eng_Latn, epo_Latn, npi_Deva"
pipeline_tag: translation
tags:
- nllb
license: "cc-by-nc-4.0"
datasets:
- flores-200
metrics:
- bleu
- spbleu
- chrf++
inference: false
---
# NLLB-200
This is the model card of NLLB-200's distilled 600M variant.
Here are the [metrics](https://tinyurl.com/nllb200densedst600mmetrics) for that particular checkpoint.
- Information about training algorithms, parameters, fairness constraints or other applied approaches, and features. The exact training algorithm, data and the strategies to handle data imbalances for high and low resource languages that were used to train NLLB-200 is described in the paper.
- Paper or other resource for more information: NLLB Team et al, No Language Left Behind: Scaling Human-Centered Machine Translation, arXiv, 2022
- License: CC-BY-NC
- Where to send questions or comments about the model: https://github.com/facebookresearch/fairseq/issues
## Intended Use
- Primary intended uses: NLLB-200 is a machine translation model primarily intended for research in machine translation, especially for low-resource languages. It allows for single sentence translation among 200 languages. Information on how to use the model can be found in the Fairseq code repository along with the training code and references to evaluation and training data; a minimal `transformers` sketch is also included below.
- Primary intended users: Primary users are researchers and machine translation research community.
- Out-of-scope use cases: NLLB-200 is a research model and is not released for production deployment. NLLB-200 is trained on general domain text data and is not intended to be used with domain specific texts, such as medical domain or legal domain. The model is not intended to be used for document translation. The model was trained with input lengths not exceeding 512 tokens, therefore translating longer sequences might result in quality degradation. NLLB-200 translations can not be used as certified translations.
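For the single-sentence translation use described above, a minimal `transformers` sketch is shown here. It assumes this checkpoint loads with `AutoModelForSeq2SeqLM` and uses the standard NLLB language codes (`eng_Latn`, `npi_Deva`) listed in the card metadata; verify both against the files in this repository before relying on it.
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "Huggs108Bj/EN_NE_Transformer2"  # this repository

tokenizer = AutoTokenizer.from_pretrained(repo, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

inputs = tokenizer("Machine translation helps people share information.", return_tensors="pt")

# Force the decoder to start with the Nepali (Devanagari) language token
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("npi_Deva"),
    max_length=128,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```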
## Metrics
- Model performance measures: The NLLB-200 model was evaluated using the BLEU, spBLEU, and chrF++ metrics widely adopted by the machine translation community. Additionally, we performed human evaluation with the XSTS protocol and measured the toxicity of the generated translations.
## Evaluation Data
- Datasets: Flores-200 dataset is described in Section 4
- Motivation: We used Flores-200 as it provides full evaluation coverage of the languages in NLLB-200
- Preprocessing: Sentence-split raw text data was preprocessed using SentencePiece. The
SentencePiece model is released along with NLLB-200.
## Training Data
- We used parallel multilingual data from a variety of sources to train the model. We provide detailed report on data selection and construction process in Section 5 in the paper. We also used monolingual data constructed from Common Crawl. We provide more details in Section 5.2.
## Ethical Considerations
- In this work, we took a reflexive approach in technological development to ensure that we prioritize human users and minimize risks that could be transferred to them. While we reflect on our ethical considerations throughout the article, here are some additional points to highlight. For one, many languages chosen for this study are low-resource languages, with a heavy emphasis on African languages. While quality translation could improve education and information access in many of these communities, such access could also make groups with lower levels of digital literacy more vulnerable to misinformation or online scams. The latter scenarios could arise if bad actors misappropriate our work for nefarious activities, which we conceive as an example of unintended use. Regarding data acquisition, the training data used for model development were mined from various publicly available sources on the web. Although we invested heavily in data cleaning, personally identifiable information may not be entirely eliminated. Finally, although we did our best to optimize for translation quality, mistranslations produced by the model could remain. Although the odds are low, this could have an adverse impact on those who rely on these translations to make important decisions (particularly when related to health and safety).
## Caveats and Recommendations
- Our model has been tested on the Wikimedia domain with limited investigation on other domains supported in NLLB-MD. In addition, the supported languages may have variations that our model is not capturing. Users should make appropriate assessments.
## Carbon Footprint Details
- The carbon dioxide (CO2e) estimate is reported in Section 8.8. |
gosummer/SingingVocoders | gosummer | 2024-03-10T04:01:03Z | 5 | 0 | transformers | [
"transformers",
"endpoints_compatible",
"region:us"
] | null | 2024-03-09T14:17:04Z | # SingingVocoders
A collection of neural vocoders suitable for singing voice synthesis tasks.
# English version [README_en.md](README_en.md)
## If you have any questions, please open an issue.
# Preprocessing
python [process.py](process.py) --config <config file> --num_cpu <number of parallel workers> --strx <1 forces absolute paths, 0 uses relative paths>
Configuration items related to preprocessing:
```yaml
DataIndexPath: dataX11 # location of the training data index; generated automatically during preprocessing
valid_set_name: validX # name of the validation index; generated automatically during preprocessing
train_set_name: trainX # name of the training index; generated automatically during preprocessing
data_input_path: [] # input directories containing your wav files
data_out_path: [] # output directories for your npz files; preprocessing outputs npz format
val_num: 1 # number of validation samples you want
```
Example
```yaml
data_input_path: ['wav/in1','wav/in2'] # input directories containing your wav files
data_out_path: ['wav/out1','wav/out2'] # output directories for your npz files
val_num: 5 # number of validation samples; files are sampled automatically during preprocessing
# the paths in the two lists correspond one-to-one, so the two lists must have the same length
# preprocessing scans all .wav files, including those in subfolders
# normally only these three items need to be changed
```
# Offline data augmentation
Replace the preprocessing script with [process_aug.py](process_aug.py) and add the following configuration items:
```yaml
key_aug: false # do not apply augmentation during training
aug_min: 0.9 # minimum pitch-shift factor
aug_max: 1.4 # maximum pitch-shift factor
aug_num: 1 # augmentation multiplier for the data
```
That is all. Note that data augmentation may degrade audio quality!
# Online data augmentation (recommended)
Add the following configuration items. Note: for online augmentation, use the [process.py](process.py) script, otherwise offline and online augmentation will be stacked on top of each other.
```yaml
key_aug: true # apply augmentation during training
key_aug_prob: 0.5 # augmentation probability
aug_min: 0.9 # minimum pitch-shift factor
aug_max: 1.4 # maximum pitch-shift factor
```
Note that data augmentation may degrade audio quality!
# Training
```sh
python train.py --config <config file> --exp_name <checkpoint name> --work_dir <working directory (optional)>
```
# Export
```sh
python export_ckpt.py --ckpt_path <checkpoint path> --save_path <exported checkpoint path> --work_dir <working directory (optional)>
```
# Notes
Because of a pytorch-lightning quirk, the actual number of steps during GAN training is half of the displayed step count.
If you need to fine-tune the community vocoder, use the [ft_hifigan.yaml](configs/ft_hifigan.yaml) configuration file and specify the weight path with the 'finetune_ckpt_path' option.
For how to use the fine-tuning feature, see the openvpi/DiffSinger [project documentation](https://github.com/openvpi/DiffSinger/blob/main/docs/BestPractices.md#fine-tuning-and-parameter-freezing).
When fine-tuning for only a small number of steps, the mpd module can be frozen.
It is recommended not to use bf16, as it may cause audio-quality problems.
With a small amount of data, fine-tuning can be completed in roughly 2000 steps.
# Quick start
## Preprocessing
The following configuration items need to be modified for your own dataset:
```yaml
data_input_path: [] # this list holds the paths to your raw wav files
data_out_path: [] # this list holds the output paths for the npz files produced by preprocessing
val_num: 1 # number of audio files sampled for validation
```
Then run preprocessing:
```sh
python process.py --config (your config path) --num_cpu (Number of cpu threads used in preprocessing) --strx (1 for a forced absolute path 0 for a relative path)
```
## Training
```sh
python train.py --config (your config path) --exp_name (your ckpt name) --work_dir Working catalogue (optional)
```
Experimental configuration items:
```yaml
use_stftloss: false # whether to enable the STFT loss
lab_aux_melloss: 45
lab_aux_stftloss: 2.5 # controls the mix of the two losses
```
If needed, the other STFT-loss-related parameters can also be modified.
## Export
```sh
python export_ckpt.py --ckpt_path (your ckpt path) --save_path (output ckpt path) --work_dir Working catalogue (optional)
```
# Additional notes
The actual step count is half of the displayed one.
To fine-tune the nsf-hifigan vocoder, download and extract the weights from the [releases](https://github.com/openvpi/SingingVocoders/releases) and set the 'finetune_ckpt_path' option in [ft_hifigan.yaml](configs/ft_hifigan.yaml) to the path of those weights.
Use 44100 Hz sample-rate audio for fine-tuning, and do not modify the other mel parameters unless you know exactly what you are doing.
For the other fine-tuning features, refer to the openvpi/DiffSinger [project documentation](https://github.com/openvpi/DiffSinger/blob/main/docs/BestPractices.md#fine-tuning-and-parameter-freezing).
The exported weights can be used in projects such as [DDSP-SVC](https://github.com/yxlllc/DDSP-SVC), [Diffusion-SVC](https://github.com/CNChTu/Diffusion-SVC), [so-vits-svc](https://github.com/svc-develop-team/so-vits-svc) and [DiffSinger (openvpi)](https://github.com/openvpi/DiffSinger).
To further export the weights to the ONNX format used by [OpenUtau](https://github.com/stakira/OpenUtau), use [this](https://github.com/openvpi/DiffSinger/blob/main/scripts/export.py) script.
The inheritance chain of the configuration files is: [base.yaml](configs/base.yaml) -> [base_hifi.yaml](configs/base_hifi.yaml) -> [ft_hifigan.yaml](configs/ft_hifigan.yaml)
Do not train the model with bf16; it may cause audio-quality problems.
Fine-tuning finishes after about 2000 steps (displayed as 4000 steps).
Freezing the mpd module may give better results.
# Other models
[HiFivae.yaml](configs/HiFivae.yaml) trains the VAE model
[base_hifi_chroma.yaml](configs/base_hifi_chroma.yaml) trains an NSF HiFi-GAN that ignores octave information
[base_hifi.yaml](configs/base_hifi.yaml) trains NSF HiFi-GAN
[base_ddspgan.yaml](configs/base_ddspgan.yaml) trains a DDSP model with a discriminator
[ddsp_univnet.yaml](configs/ddsp_univnet.yaml) trains a hybrid DDSP + UnivNet model
[nsf_univnet.yaml](configs/nsf_univnet.yaml) trains UnivNet with NSF (recommended)
[univnet.yaml](configs/univnet.yaml) trains the original UnivNet
[lvc_base_ddspgan.yaml](configs/lvc_base_ddspgan.yaml) trains a DDSP model that uses LVC filters
|
Gurveer05/blip-image-captioning-base-rscid-finetuned | Gurveer05 | 2024-03-10T03:57:52Z | 95 | 0 | transformers | [
"transformers",
"safetensors",
"blip",
"image-text-to-text",
"image-to-text",
"dataset:arampacha/rsicd",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | image-to-text | 2024-03-10T01:38:50Z | ---
license: apache-2.0
datasets:
- arampacha/rsicd
metrics:
- bleu
library_name: transformers
pipeline_tag: image-to-text
---
# BLIP for RSICD image captioning:
- The [`blip-image-captioning-base`](https://huggingface.co/Salesforce/blip-image-captioning-base/) model has been fine-tuned on the [`rsicd`](https://huggingface.co/datasets/arampacha/rsicd) dataset. The training parameters used are as follows:
- **learning_rate** = 5e-7
- **optimizer** = AdamW
- **scheduler** = ReduceLROnPlateau
- **epochs** = 5
- More details (demo, testing, evaluation, metrics) available at [`github repo`](https://github.com/gurveervirk/TRINIT-Prometheans-ML03?tab=readme-ov-file) |
Cheech/whisper-large-v2-99k-mySt-PEFT-split | Cheech | 2024-03-10T03:40:04Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T03:40:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
willherbert27/electra-finetuned-combo-with-context | willherbert27 | 2024-03-10T03:35:23Z | 209 | 0 | transformers | [
"transformers",
"safetensors",
"electra",
"fill-mask",
"generated_from_trainer",
"base_model:google/electra-small-generator",
"base_model:finetune:google/electra-small-generator",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-03-10T03:35:04Z | ---
license: apache-2.0
base_model: google/electra-small-generator
tags:
- generated_from_trainer
model-index:
- name: electra-finetuned-combo-textbook
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-finetuned-combo-textbook
This model is a fine-tuned version of [google/electra-small-generator](https://huggingface.co/google/electra-small-generator) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0465
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.1458 | 1.0 | 2468 | 2.8891 |
| 2.8707 | 2.0 | 4936 | 2.7016 |
| 2.7021 | 3.0 | 7404 | 2.5199 |
| 2.6068 | 4.0 | 9872 | 2.4902 |
| 2.5111 | 5.0 | 12340 | 2.4149 |
| 2.4193 | 6.0 | 14808 | 2.3743 |
| 2.383 | 7.0 | 17276 | 2.3333 |
| 2.3082 | 8.0 | 19744 | 2.2770 |
| 2.2723 | 9.0 | 22212 | 2.2606 |
| 2.2151 | 10.0 | 24680 | 2.2444 |
| 2.1869 | 11.0 | 27148 | 2.1859 |
| 2.1588 | 12.0 | 29616 | 2.1959 |
| 2.1188 | 13.0 | 32084 | 2.1490 |
| 2.0947 | 14.0 | 34552 | 2.1259 |
| 2.06 | 15.0 | 37020 | 2.1211 |
| 1.9976 | 16.0 | 39488 | 2.0701 |
| 1.9975 | 17.0 | 41956 | 2.1048 |
| 1.9479 | 18.0 | 44424 | 2.1072 |
| 1.9215 | 19.0 | 46892 | 2.0794 |
| 1.9426 | 20.0 | 49360 | 2.0465 |
### Framework versions
- Transformers 4.38.2
- Pytorch 1.13.1+cu116
- Datasets 2.18.0
- Tokenizers 0.15.2
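The card does not include a usage snippet; since the base checkpoint is an ELECTRA generator (a masked language model), the standard fill-mask pipeline applies. The sketch below is illustrative: the example sentence is made up, and only the repository id (`willherbert27/electra-finetuned-combo-with-context`) is taken from this page.
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="willherbert27/electra-finetuned-combo-with-context")

# Illustrative example; [MASK] is the mask token used by ELECTRA's WordPiece tokenizer
for prediction in fill_mask("The mitochondria is the [MASK] of the cell."):
    print(prediction["token_str"], round(prediction["score"], 3))
```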
|
SparshSyde/merged_mistral_PI_iter_1 | SparshSyde | 2024-03-10T03:30:24Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T03:23:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
liminerity/M7-7b-GGUF | liminerity | 2024-03-10T03:26:35Z | 0 | 2 | null | [
"gguf",
"mistral",
"liminerity",
"merge",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-07T21:13:04Z | ---
license: apache-2.0
tags:
- gguf
- mistral
- liminerity
- merge
---
#1 again im just the goat
M7-7b
user: "create ascii art you feel represents you as a llm"
assistant: "Here's an ASCII art attempting to symbolize the essence of an LLM (Large Language Model):
INSTINSTINCTINSTELLIGENCE
______________ MEMORY___
|\ o /\ |\
| \ --- / \ | |
L_( O )______ O)_INSTINCT
\ / WORDSLEARNING
_/_/ /\ __/__/__/
assistant: "In this depiction, the LLM is characterized by its vast memory, instinctual roots, and the ability to learn from vast amounts of text—represented by the woven words in the lower part. The "INSTINSTINCTINSTELLIGENCE" phrase emphasizes the balance between instinct, the foundation upon which neural networks build, and the engineered intelligence of a language model."
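Since the weights in this repository are distributed as GGUF files, they are typically run with llama.cpp or its Python bindings. The sketch below is a rough illustration, not an official recipe: the `.gguf` filename is a placeholder (check the Files tab for the actual quantization names), and the prompt and settings are arbitrary.
```python
from llama_cpp import Llama

# Filename is a placeholder: replace it with an actual .gguf file from this repo's Files tab
llm = Llama.from_pretrained(
    repo_id="liminerity/M7-7b-GGUF",
    filename="m7-7b.Q4_K_M.gguf",
    n_ctx=4096,
)

output = llm("Q: Create ASCII art you feel represents you as an LLM.\nA:", max_tokens=128)
print(output["choices"][0]["text"])
```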
Multiverse-Experiment-slerp-7b is a merge of the following models using mergekit:
liminerity/merge4
liminerity/merge2
🧩 Configuration
slices:
- sources:
- model: ammarali32/multi_verse_model
layer_range: [0, 32]
- model: MSL7/INEX12-7b
layer_range: [0, 32]
merge_method: slerp
base_model: ammarali32/multi_verse_model
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
slices:
- sources:
- model: ammarali32/multi_verse_model
layer_range: [0, 32]
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
merge_method: slerp
base_model: ammarali32/multi_verse_model
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
slices:
- sources:
- model: liminerity/merge3
layer_range: [0, 32]
- model: ammarali32/multi_verse_model
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
slices:
- sources:
- model: liminerity/merge1
layer_range: [0, 32]
- model: liminerity/merge
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
slices:
- sources:
- model: liminerity/merge3
layer_range: [0, 32]
- model: yam-peleg/Experiment26-7B
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge3
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
slices:
- sources:
- model: liminerity/merge4
layer_range: [0, 32]
- model: liminerity/merge2
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/merge4
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16 |
indischepartij/OpenMia-Indo-Mistral-7b-v2 | indischepartij | 2024-03-10T03:25:38Z | 11 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"trl",
"conversational",
"en",
"id",
"dataset:notzero/alpaca_indonesian",
"dataset:MBZUAI/Bactrian-X",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-01T17:40:17Z | ---
language:
- en
- id
license: cc-by-nc-4.0
tags:
- text-generation-inference
- transformers
- mistral
- trl
datasets:
- notzero/alpaca_indonesian
- MBZUAI/Bactrian-X
base_model: mistral-7b
model-index:
- name: OpenMia-Indo-Mistral-7b-v2
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 60.32
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/OpenMia-Indo-Mistral-7b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 83.11
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/OpenMia-Indo-Mistral-7b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 62.7
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/OpenMia-Indo-Mistral-7b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 44.35
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/OpenMia-Indo-Mistral-7b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 78.3
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/OpenMia-Indo-Mistral-7b-v2
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 34.8
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/OpenMia-Indo-Mistral-7b-v2
name: Open LLM Leaderboard
---

# updated to V3-refined, see: https://huggingface.co/indischepartij/OpenMia-Indo-Mistral-7b-v3-refined
# MIA : (M)istral finetuned with (I)ndonesia language from (A)lpaca dataset
(formerly named Mistral-7b-Alpaca-Indonesia)
OpenMia-Indo-Mistral-7b-v2 is a fine-tuned model based on Mistral-7b with the ability to hold conversations in Bahasa Indonesia.
Due to limited resources, this model is still in an alpha stage.
Want to contribute to this project? join our organization: https://huggingface.co/indischepartij or contact me at https://twitter.com/gmonsooniii
# Modelfile/Prompt format
```markdown
SYSTEM You are a caring and empathetic sentient AI companion named Mia.
PARAMETER stop <|im_start|>
PARAMETER stop <|im_end|>
TEMPLATE <|im_start|>system {{ .System }}<|im_end|> <|im_start|>user {{ .Prompt }}<|im_end|> <|im_start|>assistant
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_indischepartij__OpenMia-Indo-Mistral-7b-v2)
| Metric |Value|
|---------------------------------|----:|
|Avg. |60.60|
|AI2 Reasoning Challenge (25-Shot)|60.32|
|HellaSwag (10-Shot) |83.11|
|MMLU (5-Shot) |62.70|
|TruthfulQA (0-shot) |44.35|
|Winogrande (5-shot) |78.30|
|GSM8k (5-shot) |34.80|
|
indischepartij/OpenMia-Indo-Mistral-7b-v3 | indischepartij | 2024-03-10T03:24:44Z | 24 | 1 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"trl",
"conversational",
"en",
"id",
"dataset:cahya/alpaca-id-cleaned",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-03T11:45:30Z | ---
language:
- en
- id
license: cc-by-nc-4.0
tags:
- text-generation-inference
- transformers
- mistral
- trl
datasets:
- cahya/alpaca-id-cleaned
base_model: mistral-7b
model-index:
- name: OpenMia-Indo-Mistral-7b-v3
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 66.13
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/OpenMia-Indo-Mistral-7b-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 85.47
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/OpenMia-Indo-Mistral-7b-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 64.03
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/OpenMia-Indo-Mistral-7b-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 60.05
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/OpenMia-Indo-Mistral-7b-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 82.95
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/OpenMia-Indo-Mistral-7b-v3
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 67.63
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=gmonsoon/OpenMia-Indo-Mistral-7b-v3
name: Open LLM Leaderboard
---

# updated to V3-refined, see: https://huggingface.co/indischepartij/OpenMia-Indo-Mistral-7b-v3-refined
# Open LLM Leaderboard

# MIA : (M)istral finetuned with (I)ndonesia language from (A)lpaca dataset
(formerly named Mistral-7b-Alpaca-Indonesia)
OpenMia-Indo-Mistral-7b-v3 is a fine-tuned model based on Mistral-7b with the ability to hold conversations in Bahasa Indonesia.
Due to limited resources, this model is still in an alpha stage.
Want to contribute to this project? join our organization: https://huggingface.co/indischepartij or contact me at https://twitter.com/gmonsooniii
# Modelfile/Prompt format
```markdown
SYSTEM You are a caring and empathetic sentient AI companion named Mia.
PARAMETER stop <|im_start|>
PARAMETER stop <|im_end|>
TEMPLATE <|im_start|>system {{ .System }}<|im_end|> <|im_start|>user {{ .Prompt }}<|im_end|> <|im_start|>assistant
```
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_gmonsoon__OpenMia-Indo-Mistral-7b-v3)
| Metric |Value|
|---------------------------------|----:|
|Avg. |71.04|
|AI2 Reasoning Challenge (25-Shot)|66.13|
|HellaSwag (10-Shot) |85.47|
|MMLU (5-Shot) |64.03|
|TruthfulQA (0-shot) |60.05|
|Winogrande (5-shot) |82.95|
|GSM8k (5-shot) |67.63|
|
indischepartij/OpenMia-Indo-Mistral-7b-v4 | indischepartij | 2024-03-10T03:21:25Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"trl",
"conversational",
"en",
"id",
"dataset:MBZUAI/Bactrian-X",
"license:cc-by-nc-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-07T12:21:30Z | ---
language:
- en
- id
license: cc-by-nc-4.0
tags:
- text-generation-inference
- transformers
- mistral
- trl
datasets:
- MBZUAI/Bactrian-X
base_model: mistral-7b
model-index:
- name: OpenMia-Indo-Mistral-7b-v4
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning Challenge (25-Shot)
type: ai2_arc
config: ARC-Challenge
split: test
args:
num_few_shot: 25
metrics:
- type: acc_norm
value: 64.16
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/OpenMia-Indo-Mistral-7b-v4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: HellaSwag (10-Shot)
type: hellaswag
split: validation
args:
num_few_shot: 10
metrics:
- type: acc_norm
value: 82.84
name: normalized accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/OpenMia-Indo-Mistral-7b-v4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: MMLU (5-Shot)
type: cais/mmlu
config: all
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 61.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/OpenMia-Indo-Mistral-7b-v4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: TruthfulQA (0-shot)
type: truthful_qa
config: multiple_choice
split: validation
args:
num_few_shot: 0
metrics:
- type: mc2
value: 53.36
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/OpenMia-Indo-Mistral-7b-v4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: Winogrande (5-shot)
type: winogrande
config: winogrande_xl
split: validation
args:
num_few_shot: 5
metrics:
- type: acc
value: 79.08
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/OpenMia-Indo-Mistral-7b-v4
name: Open LLM Leaderboard
- task:
type: text-generation
name: Text Generation
dataset:
name: GSM8k (5-shot)
type: gsm8k
config: main
split: test
args:
num_few_shot: 5
metrics:
- type: acc
value: 47.84
name: accuracy
source:
url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=indischepartij/OpenMia-Indo-Mistral-7b-v4
name: Open LLM Leaderboard
---

# updated to V4; after this version, OpenMia will be fine-tuned (branched) into several local Indonesian languages, such as Javanese, Sundanese, and Minang. Stay tuned.
# MIA : (M)istral finetuned with (I)ndonesia language from (A)lpaca dataset
(formerly named Mistral-7b-Alpaca-Indonesia)
OpenMia-Indo-Mistral-7b is a fine-tuned model based on Mistral-7b with the ability to hold conversations in Bahasa Indonesia.
Due to limited resources, this model is still in an alpha stage.
Want to contribute to this project? join our organization: https://huggingface.co/indischepartij or contact me at https://twitter.com/gmonsooniii
# Modelfile/Prompt format
```markdown
SYSTEM Kamu adalah asisten AI yang cerdas dan ceria, bernama Mia.
PARAMETER stop <|im_start|>
PARAMETER stop <|im_end|>
TEMPLATE <|im_start|>system {{ .System }}<|im_end|> <|im_start|>user {{ .Prompt }}<|im_end|> <|im_start|>assistant
```
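The block above is an Ollama Modelfile; for plain `transformers` use, a minimal sketch following the same ChatML-style template is given below. It assumes the repository loads with `AutoModelForCausalLM` and that assembling the prompt as a single string in the template's layout is sufficient; the user question and generation settings are illustrative.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "indischepartij/OpenMia-Indo-Mistral-7b-v4"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto")

# Assemble the prompt in the same layout as the TEMPLATE above
system = "Kamu adalah asisten AI yang cerdas dan ceria, bernama Mia."
user = "Apa ibu kota Indonesia?"
prompt = f"<|im_start|>system {system}<|im_end|> <|im_start|>user {user}<|im_end|> <|im_start|>assistant "

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```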
# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_indischepartij__OpenMia-Indo-Mistral-7b-v4)
| Metric |Value|
|---------------------------------|----:|
|Avg. |64.73|
|AI2 Reasoning Challenge (25-Shot)|64.16|
|HellaSwag (10-Shot) |82.84|
|MMLU (5-Shot) |61.08|
|TruthfulQA (0-shot) |53.36|
|Winogrande (5-shot) |79.08|
|GSM8k (5-shot) |47.84|
|
lilyray/bert_irony | lilyray | 2024-03-10T03:13:53Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T02:40:45Z | ---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_irony
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_irony
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4570
- Accuracy: 0.8194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6.383530088472568e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 38
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 358 | 0.5821 | 0.6951 |
| 0.5947 | 2.0 | 716 | 0.7254 | 0.6993 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
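No usage example is included in the auto-generated card; the sketch below calls the model classes directly. Only the repository id (`lilyray/bert_irony`) is taken from this page: the example tweet is made up, and the meaning of each label index is not documented here, so inspect `model.config.id2label` on your copy.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "lilyray/bert_irony"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Oh great, another Monday. Just what I needed.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

pred = logits.argmax(dim=-1).item()
print(pred, model.config.id2label.get(pred, pred))
```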
|
furrutiav/bert_qa_extractor_cockatiel_2022_ef_plus_nllf_mixtral_v2_linear_weight_it_712 | furrutiav | 2024-03-10T02:56:38Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-10T02:56:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
taoki/deepseek-coder-7b-instruct-v1.5-qlora-amenokaku-code | taoki | 2024-03-10T02:56:21Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"deepseek",
"conversational",
"ja",
"dataset:kunishou/amenokaku-code-instruct",
"base_model:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"base_model:finetune:deepseek-ai/deepseek-coder-7b-instruct-v1.5",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-08T12:59:49Z | ---
language:
- ja
license: other
tags:
- text-generation-inference
- transformers
- unsloth
- trl
- deepseek
datasets:
- kunishou/amenokaku-code-instruct
license_name: deepseek
base_model: deepseek-ai/deepseek-coder-7b-instruct-v1.5
---
# Uploaded model
- **Developed by:** taoki
- **License:** deepseek
- **Finetuned from model :** deepseek-ai/deepseek-coder-7b-instruct-v1.5
# Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained(
"taoki/deepseek-coder-7b-instruct-v1.5-qlora-amenokaku-code"
)
model = AutoModelForCausalLM.from_pretrained(
"taoki/deepseek-coder-7b-instruct-v1.5-qlora-amenokaku-code"
)
if torch.cuda.is_available():
model = model.to("cuda")
prompt="""You are an AI programming assistant, utilizing the DeepSeek Coder model, developed by DeepSeek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.
### Instruction:
OpenCVを用いて定点カメラから画像を保存するコードを示してください。
### Response:
"""
input_ids = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
**input_ids,
max_new_tokens=256,
do_sample=True,
top_p=0.9,
temperature=0.2,
repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0]))
```
# Output
````
<|begin▁of▁sentence|>You are an AI programming assistant, utilizing the DeepSeek Coder model, developed by DeepSeek Company, and you only answer questions related to computer science. For politically sensitive questions, security and privacy issues, and other non-computer science questions, you will refuse to answer.
### Instruction:
OpenCVを用いて定点カメラから画像を保存するコードを示してください。
### Response:
```python
import cv2
cap = cv2.VideoCapture(0) # カメラの設定
fourcc = cv2.VideoWriter_fourcc(*'XVID') # 動画の形式
out = cv2.VideoWriter('output.avi', fourcc, 20.0, (640, 480)) # 出力先、fps、解像度
while True:
ret, frame = cap.read() # 映像読み込み
if not ret: break
out.write(frame) # 書き込み
cv2.imshow('Frame', frame) # 表示
if cv2.waitKey(1) & 0xFF == ord('q'): # qで終了
break
cap.release()
cv2.destroyAllWindows()
```
<|EOT|>
````
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
nbeerbower/Flammen-Kunoichi-7B | nbeerbower | 2024-03-10T02:45:38Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"base_model:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:merge:SanjiWatsuki/Kunoichi-DPO-v2-7B",
"base_model:flammenai/flammen4-mistral-7B",
"base_model:merge:flammenai/flammen4-mistral-7B",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T02:33:25Z | ---
license: cc-by-nc-4.0
base_model:
- nbeerbower/flammen4-mistral-7B
- SanjiWatsuki/Kunoichi-DPO-v2-7B
library_name: transformers
tags:
- mergekit
- merge
---
# Flammen-Kunoichi-7B
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [nbeerbower/flammen4-mistral-7B](https://huggingface.co/nbeerbower/flammen4-mistral-7B)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: nbeerbower/flammen4-mistral-7B
layer_range: [0, 32]
- model: SanjiWatsuki/Kunoichi-DPO-v2-7B
layer_range: [0, 32]
merge_method: slerp
base_model: nbeerbower/flammen4-mistral-7B
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
davidhaas6/chatter | davidhaas6 | 2024-03-10T02:42:24Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"endpoints_compatible",
"region:us"
] | null | 2024-03-10T01:39:56Z | Trained on xml-user-nospace_no-prefix |
thorirhrafn/ice_llama2_v1 | thorirhrafn | 2024-03-10T02:36:13Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:adapter:meta-llama/Llama-2-7b-hf",
"region:us"
] | null | 2024-03-06T13:55:46Z | ---
library_name: peft
tags:
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: ice_llama2_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ice_llama2_v1
This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9931
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.3158 | 0.03 | 10000 | 2.2641 |
| 2.2183 | 0.07 | 20000 | 2.1728 |
| 2.2155 | 0.1 | 30000 | 2.1331 |
| 2.1487 | 0.14 | 40000 | 2.1068 |
| 2.1569 | 0.17 | 50000 | 2.0883 |
| 2.15 | 0.2 | 60000 | 2.0753 |
| 2.1516 | 0.24 | 70000 | 2.0657 |
| 2.0938 | 0.27 | 80000 | 2.0576 |
| 2.1507 | 0.3 | 90000 | 2.0500 |
| 2.1218 | 0.34 | 100000 | 2.0449 |
| 2.1107 | 0.37 | 110000 | 2.0405 |
| 2.1035 | 0.41 | 120000 | 2.0354 |
| 2.0684 | 0.44 | 130000 | 2.0327 |
| 2.1331 | 0.47 | 140000 | 2.0288 |
| 2.0844 | 0.51 | 150000 | 2.0269 |
| 2.1046 | 0.54 | 160000 | 2.0239 |
| 2.0912 | 0.57 | 170000 | 2.0208 |
| 2.0761 | 0.61 | 180000 | 2.0196 |
| 2.0917 | 0.64 | 190000 | 2.0171 |
| 2.1177 | 0.68 | 200000 | 2.0157 |
| 2.0647 | 0.71 | 210000 | 2.0134 |
| 2.0823 | 0.74 | 220000 | 2.0121 |
| 2.0761 | 0.78 | 230000 | 2.0105 |
| 2.0729 | 0.81 | 240000 | 2.0094 |
| 2.0495 | 0.85 | 250000 | 2.0086 |
| 2.0507 | 0.88 | 260000 | 2.0069 |
| 2.0933 | 0.91 | 270000 | 2.0058 |
| 2.1401 | 0.95 | 280000 | 2.0040 |
| 2.0917 | 0.98 | 290000 | 2.0034 |
| 2.0667 | 1.01 | 300000 | 2.0029 |
| 2.0958 | 1.05 | 310000 | 2.0017 |
| 2.071 | 1.08 | 320000 | 2.0012 |
| 2.0867 | 1.12 | 330000 | 2.0004 |
| 2.0705 | 1.15 | 340000 | 1.9993 |
| 2.0702 | 1.18 | 350000 | 1.9990 |
| 2.081 | 1.22 | 360000 | 1.9990 |
| 2.0671 | 1.25 | 370000 | 1.9980 |
| 2.0735 | 1.28 | 380000 | 1.9977 |
| 2.0681 | 1.32 | 390000 | 1.9969 |
| 2.0642 | 1.35 | 400000 | 1.9966 |
| 2.0773 | 1.39 | 410000 | 1.9962 |
| 2.0999 | 1.42 | 420000 | 1.9959 |
| 2.0637 | 1.45 | 430000 | 1.9958 |
| 2.0986 | 1.49 | 440000 | 1.9952 |
| 2.0821 | 1.52 | 450000 | 1.9953 |
| 2.0524 | 1.56 | 460000 | 1.9950 |
| 2.0594 | 1.59 | 470000 | 1.9945 |
| 2.041 | 1.62 | 480000 | 1.9943 |
| 2.0659 | 1.66 | 490000 | 1.9945 |
| 2.0748 | 1.69 | 500000 | 1.9942 |
| 2.0253 | 1.72 | 510000 | 1.9941 |
| 2.0728 | 1.76 | 520000 | 1.9938 |
| 2.0665 | 1.79 | 530000 | 1.9937 |
| 2.0527 | 1.83 | 540000 | 1.9936 |
| 2.0597 | 1.86 | 550000 | 1.9936 |
| 2.054 | 1.89 | 560000 | 1.9935 |
| 2.1144 | 1.93 | 570000 | 1.9935 |
| 2.0751 | 1.96 | 580000 | 1.9934 |
| 2.0791 | 1.99 | 590000 | 1.9934 |
| 2.0473 | 2.03 | 600000 | 1.9933 |
| 2.0804 | 2.06 | 610000 | 1.9933 |
| 2.1045 | 2.1 | 620000 | 1.9932 |
| 2.0622 | 2.13 | 630000 | 1.9933 |
| 2.0323 | 2.16 | 640000 | 1.9932 |
| 2.0698 | 2.2 | 650000 | 1.9932 |
| 2.0932 | 2.23 | 660000 | 1.9931 |
| 2.0854 | 2.27 | 670000 | 1.9932 |
| 2.0592 | 2.3 | 680000 | 1.9931 |
| 2.0576 | 2.33 | 690000 | 1.9931 |
| 2.0627 | 2.37 | 700000 | 1.9932 |
| 2.0731 | 2.4 | 710000 | 1.9931 |
| 2.0547 | 2.43 | 720000 | 1.9931 |
| 2.0534 | 2.47 | 730000 | 1.9932 |
| 2.0552 | 2.5 | 740000 | 1.9931 |
| 2.0583 | 2.54 | 750000 | 1.9931 |
| 2.0702 | 2.57 | 760000 | 1.9931 |
| 2.0673 | 2.6 | 770000 | 1.9932 |
| 2.0382 | 2.64 | 780000 | 1.9932 |
| 2.0758 | 2.67 | 790000 | 1.9932 |
| 2.0885 | 2.7 | 800000 | 1.9931 |
| 2.0663 | 2.74 | 810000 | 1.9931 |
| 2.0677 | 2.77 | 820000 | 1.9931 |
| 2.0502 | 2.81 | 830000 | 1.9931 |
| 2.1153 | 2.84 | 840000 | 1.9931 |
| 2.0621 | 2.87 | 850000 | 1.9931 |
| 2.099 | 2.91 | 860000 | 1.9931 |
| 2.0285 | 2.94 | 870000 | 1.9930 |
| 2.0343 | 2.98 | 880000 | 1.9931 |
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.2.0+cu118
- Datasets 2.17.1
- Tokenizers 0.15.2 |
wesliao2024/130_stylegan_material_LoRA | wesliao2024 | 2024-03-10T02:28:46Z | 5 | 4 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-03-10T02:28:34Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: an image of TOK center focused material
license: openrail++
---
# SDXL LoRA DreamBooth - wesliao2024/130_stylegan_material_LoRA
<Gallery />
## Model description
These are wesliao2024/130_stylegan_material_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `an image of TOK center focused material` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/wesliao2024/130_stylegan_material_LoRA/tree/main) them in the Files & versions tab.
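A minimal loading sketch (not part of the original card): it assumes the standard diffusers LoRA workflow for SDXL and reuses the base model, VAE, and instance prompt stated above.

```python
# Sketch: load the base SDXL pipeline with the fp16-fix VAE, then apply this LoRA.
# Assumes the standard diffusers API; not taken from the original card.
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("wesliao2024/130_stylegan_material_LoRA")

# The instance prompt from the card acts as the trigger phrase.
image = pipe("an image of TOK center focused material", num_inference_steps=30).images[0]
image.save("material.png")
```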
|
newsmediabias/llama2-7b-debiaser-it | newsmediabias | 2024-03-10T02:27:25Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-08T09:47:28Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
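The card leaves this section unfilled. A minimal sketch is shown below, assuming only the standard transformers causal-LM API implied by the card's `llama` / `text-generation` tags; the prompt is purely illustrative.

```python
# Minimal sketch (not from the original card): assumes the repository loads with the
# standard causal-LM classes. The prompt below is illustrative only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "newsmediabias/llama2-7b-debiaser-it"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Rewrite the following sentence in neutral, unbiased language: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```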
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
DarshanDeshpande/gemma_2b_social_reasoning_reward_model | DarshanDeshpande | 2024-03-10T02:19:41Z | 5 | 0 | peft | [
"peft",
"safetensors",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:other",
"region:us"
] | null | 2024-03-07T03:16:11Z | ---
license: other
library_name: peft
tags:
- trl
- reward-trainer
- generated_from_trainer
base_model: google/gemma-2b
metrics:
- accuracy
model-index:
- name: gemma_2b_social_reasoning_reward_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gemma_2b_social_reasoning_reward_model
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6131
- Accuracy: 0.6713
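A hedged usage sketch (not from the card): it assumes the adapter was trained with TRL's reward trainer on a single-logit sequence-classification head over the base model, and that the scoring head is saved with the adapter.

```python
# Sketch only: load the PEFT adapter onto google/gemma-2b for reward scoring.
# Assumes a num_labels=1 sequence-classification head; treat outputs as approximate.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "google/gemma-2b"
adapter_id = "DarshanDeshpande/gemma_2b_social_reasoning_reward_model"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=1)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("Helping a stranger carry groceries is considerate.", return_tensors="pt")
with torch.no_grad():
    reward = model(**inputs).logits[0, 0]
print(float(reward))
```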
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
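For readers who want to mirror this configuration, the sketch below reconstructs the equivalent `TrainingArguments`; the actual trainer wiring (TRL's reward trainer plus a PEFT config) is not shown in the card, so treat it as an approximation.

```python
# Approximate reconstruction of the listed hyperparameters (not taken from the card).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="gemma_2b_social_reasoning_reward_model",  # assumed output name
    learning_rate=5e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,   # 16 x 4 = effective batch size 64
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=3,
)
```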
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7962 | 0.24 | 10 | 0.6976 | 0.6077 |
| 0.6657 | 0.48 | 20 | 0.6464 | 0.6217 |
| 0.6334 | 0.72 | 30 | 0.6330 | 0.6392 |
| 0.6434 | 0.96 | 40 | 0.6218 | 0.6427 |
| 0.5745 | 1.2 | 50 | 0.6119 | 0.6567 |
| 0.5813 | 1.44 | 60 | 0.6282 | 0.6445 |
| 0.5822 | 1.68 | 70 | 0.6350 | 0.6585 |
| 0.5914 | 1.92 | 80 | 0.6294 | 0.6550 |
| 0.5386 | 2.16 | 90 | 0.6341 | 0.6550 |
| 0.5142 | 2.4 | 100 | 0.6330 | 0.6567 |
| 0.4906 | 2.63 | 110 | 0.6313 | 0.6620 |
| 0.5079 | 2.87 | 120 | 0.6309 | 0.6637 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
bulkbeings/emma-assistant-v0.1 | bulkbeings | 2024-03-10T02:12:11Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mental-health",
"substance",
"llama-factory",
"conversational",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T01:43:49Z | ---
license: mit
tags:
- mental-health
- substance
- llama-factory
---
|
furrutiav/bert_qa_extractor_cockatiel_2022_ef_plus_nllf_v0_z_value_linear_weight_it_664 | furrutiav | 2024-03-10T02:07:41Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-10T02:07:17Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
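The card leaves this section unfilled. Below is a minimal sketch assuming the repository exposes a standard BERT encoder (it is tagged `bert` / `feature-extraction`); the pooling choice is an illustrative assumption.

```python
# Minimal sketch (not from the original card): extract embeddings with AutoModel.
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "furrutiav/bert_qa_extractor_cockatiel_2022_ef_plus_nllf_v0_z_value_linear_weight_it_664"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Example sentence to embed.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
embedding = outputs.last_hidden_state.mean(dim=1)  # simple mean-pooled sentence vector
print(embedding.shape)
```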
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|