modelId (stringlengths 5 to 139) | author (stringlengths 2 to 42) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-28 12:28:24) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (stringclasses, 500 values) | tags (sequencelengths 1 to 4.05k) | pipeline_tag (stringclasses, 54 values) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-28 12:27:53) | card (stringlengths 11 to 1.01M) |
---|---|---|---|---|---|---|---|---|---|
haryoaw/scenario-TCR-XLMV-XCOPA-2_data-xcopa_all | haryoaw | 2024-03-12T09:11:03Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:facebook/xlm-v-base",
"base_model:finetune:facebook/xlm-v-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-03-08T10:53:29Z | ---
license: mit
base_model: facebook/xlm-v-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: scenario-TCR-XLMV-XCOPA-2_data-xcopa_all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scenario-TCR-XLMV-XCOPA-2_data-xcopa_all
This model is a fine-tuned version of [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6931
- Accuracy: 0.5
- F1: 0.4671
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 34
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.38 | 5 | 0.6932 | 0.4858 | 0.4767 |
| No log | 0.77 | 10 | 0.6931 | 0.515 | 0.5134 |
| No log | 1.15 | 15 | 0.6931 | 0.5158 | 0.5038 |
| No log | 1.54 | 20 | 0.6931 | 0.5108 | 0.5021 |
| No log | 1.92 | 25 | 0.6931 | 0.5217 | 0.5035 |
| No log | 2.31 | 30 | 0.6931 | 0.525 | 0.5069 |
| No log | 2.69 | 35 | 0.6931 | 0.5283 | 0.5070 |
| No log | 3.08 | 40 | 0.6931 | 0.5292 | 0.5125 |
| No log | 3.46 | 45 | 0.6931 | 0.5333 | 0.5122 |
| No log | 3.85 | 50 | 0.6930 | 0.5125 | 0.4970 |
| No log | 4.23 | 55 | 0.6930 | 0.5342 | 0.5251 |
| No log | 4.62 | 60 | 0.6931 | 0.5417 | 0.5217 |
| No log | 5.0 | 65 | 0.6931 | 0.5592 | 0.5482 |
| No log | 5.38 | 70 | 0.6931 | 0.5667 | 0.5517 |
| No log | 5.77 | 75 | 0.6931 | 0.5458 | 0.5362 |
| No log | 6.15 | 80 | 0.6931 | 0.535 | 0.5311 |
| No log | 6.54 | 85 | 0.6930 | 0.5433 | 0.5276 |
| No log | 6.92 | 90 | 0.6931 | 0.5025 | 0.4731 |
| No log | 7.31 | 95 | 0.6931 | 0.505 | 0.4715 |
| No log | 7.69 | 100 | 0.6931 | 0.5017 | 0.4514 |
| No log | 8.08 | 105 | 0.6931 | 0.5042 | 0.4831 |
| No log | 8.46 | 110 | 0.6931 | 0.5058 | 0.4785 |
| No log | 8.85 | 115 | 0.6931 | 0.5158 | 0.4872 |
| No log | 9.23 | 120 | 0.6931 | 0.5158 | 0.4890 |
| No log | 9.62 | 125 | 0.6931 | 0.5075 | 0.4829 |
| No log | 10.0 | 130 | 0.6931 | 0.505 | 0.4780 |
| No log | 10.38 | 135 | 0.6931 | 0.5 | 0.4709 |
| No log | 10.77 | 140 | 0.6931 | 0.485 | 0.4579 |
| No log | 11.15 | 145 | 0.6931 | 0.4858 | 0.4592 |
| No log | 11.54 | 150 | 0.6931 | 0.485 | 0.4569 |
| No log | 11.92 | 155 | 0.6931 | 0.4917 | 0.4611 |
| No log | 12.31 | 160 | 0.6931 | 0.4908 | 0.4664 |
| No log | 12.69 | 165 | 0.6931 | 0.4858 | 0.4602 |
| No log | 13.08 | 170 | 0.6931 | 0.4983 | 0.4756 |
| No log | 13.46 | 175 | 0.6931 | 0.4992 | 0.4788 |
| No log | 13.85 | 180 | 0.6931 | 0.4942 | 0.4717 |
| No log | 14.23 | 185 | 0.6931 | 0.4958 | 0.4735 |
| No log | 14.62 | 190 | 0.6931 | 0.5017 | 0.48 |
| No log | 15.0 | 195 | 0.6931 | 0.4942 | 0.4633 |
| No log | 15.38 | 200 | 0.6931 | 0.4942 | 0.4527 |
| No log | 15.77 | 205 | 0.6931 | 0.4925 | 0.4509 |
| No log | 16.15 | 210 | 0.6931 | 0.495 | 0.4570 |
| No log | 16.54 | 215 | 0.6931 | 0.4933 | 0.4581 |
| No log | 16.92 | 220 | 0.6931 | 0.5 | 0.4671 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3
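As a minimal usage sketch (not part of the auto-generated card), the checkpoint can be loaded for XCOPA-style multiple choice roughly as follows; the premise/choice pairing shown here is an assumption and should be matched to the preprocessing used during training.
```python
# Minimal sketch: load the checkpoint for multiple choice (input formatting is an assumption).
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

repo = "haryoaw/scenario-TCR-XLMV-XCOPA-2_data-xcopa_all"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForMultipleChoice.from_pretrained(repo)

premise = "The man broke his toe. What was the cause?"
choices = ["He got a hole in his sock.", "He dropped a hammer on his foot."]

# Encode the premise against each choice, then reshape to (batch, num_choices, seq_len).
enc = tokenizer([premise] * len(choices), choices, return_tensors="pt", padding=True)
inputs = {k: v.unsqueeze(0) for k, v in enc.items()}

with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted choice:", int(logits.argmax(dim=-1)))
```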
|
Sumail/Alchemist_13_2b | Sumail | 2024-03-12T09:08:49Z | 89 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:deepnetguy/gemma-75",
"base_model:merge:deepnetguy/gemma-75",
"base_model:zzttbrdd/sn6_20_new",
"base_model:merge:zzttbrdd/sn6_20_new",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-12T09:06:13Z | ---
base_model:
- zzttbrdd/sn6_20_new
- deepnetguy/gemma-75
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
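For intuition (not from the original card), SLERP interpolates each pair of weight tensors along the arc between them rather than along a straight line; mergekit applies the per-filter `t` schedule given in the configuration below. A rough sketch of the underlying operation:
```python
# Reference sketch of spherical linear interpolation (SLERP) between two weight tensors.
import numpy as np

def slerp(t, w0, w1, eps=1e-8):
    a, b = w0.ravel(), w1.ravel()
    cos_omega = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if np.abs(np.sin(omega)) < eps:  # nearly parallel: fall back to linear interpolation
        return (1.0 - t) * w0 + t * w1
    return (np.sin((1.0 - t) * omega) * w0 + np.sin(t * omega) * w1) / np.sin(omega)
```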
### Models Merged
The following models were included in the merge:
* [zzttbrdd/sn6_20_new](https://huggingface.co/zzttbrdd/sn6_20_new)
* [deepnetguy/gemma-75](https://huggingface.co/deepnetguy/gemma-75)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: zzttbrdd/sn6_20_new
layer_range: [0, 18]
- model: deepnetguy/gemma-75
layer_range: [0, 18]
merge_method: slerp
base_model: deepnetguy/gemma-75
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
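As a usage note (an assumption, not stated in the original card), a configuration like the one above is typically re-run by saving it as `config.yaml` and invoking mergekit's command-line entry point; the output directory below is only a placeholder.
```shell
pip install mergekit
mergekit-yaml config.yaml ./merged-model
```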
|
ademax/vispell-small-v3 | ademax | 2024-03-12T09:07:34Z | 84 | 1 | transformers | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-12T09:06:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
stvhuang/rcr-run-kmwbxvtx-83433-master-0_20240312T165236-ep00 | stvhuang | 2024-03-12T09:01:46Z | 89 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-12T08:59:56Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
haryoaw/scenario-TCR-XLMV-XCOPA-6_data-xcopa_all | haryoaw | 2024-03-12T08:57:19Z | 90 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:facebook/xlm-v-base",
"base_model:finetune:facebook/xlm-v-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-03-08T10:07:05Z | ---
license: mit
base_model: facebook/xlm-v-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: scenario-TCR-XLMV-XCOPA-6_data-xcopa_all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scenario-TCR-XLMV-XCOPA-6_data-xcopa_all
This model is a fine-tuned version of [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6931
- Accuracy: 0.5083
- F1: 0.4627
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 341241
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.38 | 5 | 0.6931 | 0.5133 | 0.4904 |
| No log | 0.77 | 10 | 0.6931 | 0.5542 | 0.5344 |
| No log | 1.15 | 15 | 0.6931 | 0.5358 | 0.5093 |
| No log | 1.54 | 20 | 0.6931 | 0.5508 | 0.5373 |
| No log | 1.92 | 25 | 0.6931 | 0.5033 | 0.4716 |
| No log | 2.31 | 30 | 0.6931 | 0.53 | 0.5261 |
| No log | 2.69 | 35 | 0.6931 | 0.5383 | 0.5257 |
| No log | 3.08 | 40 | 0.6931 | 0.5308 | 0.5159 |
| No log | 3.46 | 45 | 0.6931 | 0.4933 | 0.4856 |
| No log | 3.85 | 50 | 0.6931 | 0.5308 | 0.5233 |
| No log | 4.23 | 55 | 0.6931 | 0.5517 | 0.5410 |
| No log | 4.62 | 60 | 0.6931 | 0.5625 | 0.5570 |
| No log | 5.0 | 65 | 0.6931 | 0.5433 | 0.5308 |
| No log | 5.38 | 70 | 0.6931 | 0.53 | 0.5236 |
| No log | 5.77 | 75 | 0.6931 | 0.5267 | 0.5103 |
| No log | 6.15 | 80 | 0.6931 | 0.5308 | 0.4987 |
| No log | 6.54 | 85 | 0.6931 | 0.5017 | 0.4889 |
| No log | 6.92 | 90 | 0.6931 | 0.5267 | 0.5009 |
| No log | 7.31 | 95 | 0.6931 | 0.5367 | 0.5062 |
| No log | 7.69 | 100 | 0.6931 | 0.5133 | 0.4859 |
| No log | 8.08 | 105 | 0.6931 | 0.4817 | 0.4610 |
| No log | 8.46 | 110 | 0.6932 | 0.5 | 0.4854 |
| No log | 8.85 | 115 | 0.6931 | 0.4992 | 0.4788 |
| No log | 9.23 | 120 | 0.6931 | 0.5 | 0.4881 |
| No log | 9.62 | 125 | 0.6931 | 0.5042 | 0.4884 |
| No log | 10.0 | 130 | 0.6931 | 0.5025 | 0.4936 |
| No log | 10.38 | 135 | 0.6931 | 0.4908 | 0.4764 |
| No log | 10.77 | 140 | 0.6931 | 0.4942 | 0.4816 |
| No log | 11.15 | 145 | 0.6932 | 0.5033 | 0.4532 |
| No log | 11.54 | 150 | 0.6931 | 0.525 | 0.4732 |
| No log | 11.92 | 155 | 0.6931 | 0.53 | 0.4946 |
| No log | 12.31 | 160 | 0.6931 | 0.5192 | 0.4489 |
| No log | 12.69 | 165 | 0.6931 | 0.5225 | 0.4729 |
| No log | 13.08 | 170 | 0.6931 | 0.5117 | 0.4624 |
| No log | 13.46 | 175 | 0.6931 | 0.5383 | 0.4851 |
| No log | 13.85 | 180 | 0.6931 | 0.5208 | 0.4691 |
| No log | 14.23 | 185 | 0.6931 | 0.5258 | 0.4717 |
| No log | 14.62 | 190 | 0.6931 | 0.5417 | 0.4860 |
| No log | 15.0 | 195 | 0.6931 | 0.5233 | 0.4838 |
| No log | 15.38 | 200 | 0.6931 | 0.5258 | 0.4851 |
| No log | 15.77 | 205 | 0.6931 | 0.5392 | 0.4931 |
| No log | 16.15 | 210 | 0.6931 | 0.5083 | 0.4627 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3
|
haryoaw/scenario-TCR-XLMV-XCOPA-1_data-xcopa_all | haryoaw | 2024-03-12T08:54:23Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"xlm-roberta",
"multiple-choice",
"generated_from_trainer",
"base_model:facebook/xlm-v-base",
"base_model:finetune:facebook/xlm-v-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-03-08T09:38:52Z | ---
license: mit
base_model: facebook/xlm-v-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: scenario-TCR-XLMV-XCOPA-1_data-xcopa_all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# scenario-TCR-XLMV-XCOPA-1_data-xcopa_all
This model is a fine-tuned version of [facebook/xlm-v-base](https://huggingface.co/facebook/xlm-v-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6931
- Accuracy: 0.5592
- F1: 0.5289
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 0.38 | 5 | 0.6932 | 0.4917 | 0.4383 |
| No log | 0.77 | 10 | 0.6931 | 0.5192 | 0.5064 |
| No log | 1.15 | 15 | 0.6931 | 0.5017 | 0.4613 |
| No log | 1.54 | 20 | 0.6932 | 0.4942 | 0.4576 |
| No log | 1.92 | 25 | 0.6931 | 0.505 | 0.4629 |
| No log | 2.31 | 30 | 0.6931 | 0.5 | 0.4643 |
| No log | 2.69 | 35 | 0.6931 | 0.4892 | 0.4580 |
| No log | 3.08 | 40 | 0.6931 | 0.4833 | 0.4552 |
| No log | 3.46 | 45 | 0.6932 | 0.4967 | 0.4588 |
| No log | 3.85 | 50 | 0.6931 | 0.5042 | 0.4711 |
| No log | 4.23 | 55 | 0.6931 | 0.5108 | 0.4846 |
| No log | 4.62 | 60 | 0.6932 | 0.4875 | 0.4591 |
| No log | 5.0 | 65 | 0.6931 | 0.4958 | 0.4641 |
| No log | 5.38 | 70 | 0.6931 | 0.4933 | 0.4777 |
| No log | 5.77 | 75 | 0.6931 | 0.5075 | 0.4901 |
| No log | 6.15 | 80 | 0.6931 | 0.4833 | 0.4464 |
| No log | 6.54 | 85 | 0.6931 | 0.5175 | 0.4917 |
| No log | 6.92 | 90 | 0.6931 | 0.4442 | 0.4225 |
| No log | 7.31 | 95 | 0.6931 | 0.4583 | 0.4377 |
| No log | 7.69 | 100 | 0.6931 | 0.5192 | 0.4978 |
| No log | 8.08 | 105 | 0.6931 | 0.5425 | 0.5230 |
| No log | 8.46 | 110 | 0.6931 | 0.535 | 0.5122 |
| No log | 8.85 | 115 | 0.6931 | 0.545 | 0.5194 |
| No log | 9.23 | 120 | 0.6931 | 0.5492 | 0.5259 |
| No log | 9.62 | 125 | 0.6931 | 0.535 | 0.5114 |
| No log | 10.0 | 130 | 0.6931 | 0.5475 | 0.5233 |
| No log | 10.38 | 135 | 0.6931 | 0.5525 | 0.5269 |
| No log | 10.77 | 140 | 0.6931 | 0.5458 | 0.5223 |
| No log | 11.15 | 145 | 0.6931 | 0.5392 | 0.5145 |
| No log | 11.54 | 150 | 0.6931 | 0.5483 | 0.5246 |
| No log | 11.92 | 155 | 0.6931 | 0.5342 | 0.5084 |
| No log | 12.31 | 160 | 0.6931 | 0.54 | 0.5158 |
| No log | 12.69 | 165 | 0.6931 | 0.5375 | 0.5084 |
| No log | 13.08 | 170 | 0.6931 | 0.5433 | 0.5133 |
| No log | 13.46 | 175 | 0.6931 | 0.5333 | 0.5096 |
| No log | 13.85 | 180 | 0.6931 | 0.5458 | 0.5215 |
| No log | 14.23 | 185 | 0.6931 | 0.5508 | 0.5259 |
| No log | 14.62 | 190 | 0.6931 | 0.5433 | 0.5168 |
| No log | 15.0 | 195 | 0.6931 | 0.55 | 0.5280 |
| No log | 15.38 | 200 | 0.6931 | 0.5442 | 0.5231 |
| No log | 15.77 | 205 | 0.6931 | 0.55 | 0.5280 |
| No log | 16.15 | 210 | 0.6931 | 0.5458 | 0.5257 |
| No log | 16.54 | 215 | 0.6931 | 0.5392 | 0.5195 |
| No log | 16.92 | 220 | 0.6931 | 0.5367 | 0.5165 |
| No log | 17.31 | 225 | 0.6931 | 0.5433 | 0.5235 |
| No log | 17.69 | 230 | 0.6931 | 0.55 | 0.5271 |
| No log | 18.08 | 235 | 0.6931 | 0.5425 | 0.5222 |
| No log | 18.46 | 240 | 0.6931 | 0.5417 | 0.5158 |
| No log | 18.85 | 245 | 0.6931 | 0.4983 | 0.4719 |
| No log | 19.23 | 250 | 0.6931 | 0.5483 | 0.5237 |
| No log | 19.62 | 255 | 0.6931 | 0.5425 | 0.5230 |
| No log | 20.0 | 260 | 0.6931 | 0.5467 | 0.5220 |
| No log | 20.38 | 265 | 0.6931 | 0.5467 | 0.5220 |
| No log | 20.77 | 270 | 0.6931 | 0.5508 | 0.5251 |
| No log | 21.15 | 275 | 0.6931 | 0.555 | 0.5283 |
| No log | 21.54 | 280 | 0.6931 | 0.5533 | 0.5257 |
| No log | 21.92 | 285 | 0.6931 | 0.555 | 0.5283 |
| No log | 22.31 | 290 | 0.6931 | 0.5533 | 0.5298 |
| No log | 22.69 | 295 | 0.6931 | 0.5517 | 0.5281 |
| No log | 23.08 | 300 | 0.6931 | 0.5567 | 0.5325 |
| No log | 23.46 | 305 | 0.6931 | 0.55 | 0.5288 |
| No log | 23.85 | 310 | 0.6931 | 0.5475 | 0.5233 |
| No log | 24.23 | 315 | 0.6931 | 0.5467 | 0.5220 |
| No log | 24.62 | 320 | 0.6931 | 0.55 | 0.5246 |
| No log | 25.0 | 325 | 0.6931 | 0.5483 | 0.5212 |
| No log | 25.38 | 330 | 0.6931 | 0.5467 | 0.5203 |
| No log | 25.77 | 335 | 0.6931 | 0.5483 | 0.5204 |
| No log | 26.15 | 340 | 0.6931 | 0.5492 | 0.5225 |
| No log | 26.54 | 345 | 0.6931 | 0.5492 | 0.5250 |
| No log | 26.92 | 350 | 0.6931 | 0.5542 | 0.5295 |
| No log | 27.31 | 355 | 0.6931 | 0.5567 | 0.5350 |
| No log | 27.69 | 360 | 0.6931 | 0.5533 | 0.5290 |
| No log | 28.08 | 365 | 0.6931 | 0.5558 | 0.5296 |
| No log | 28.46 | 370 | 0.6931 | 0.5542 | 0.5270 |
| No log | 28.85 | 375 | 0.6931 | 0.5383 | 0.5166 |
| No log | 29.23 | 380 | 0.6931 | 0.5483 | 0.5220 |
| No log | 29.62 | 385 | 0.6931 | 0.5475 | 0.5190 |
| No log | 30.0 | 390 | 0.6931 | 0.5483 | 0.5212 |
| No log | 30.38 | 395 | 0.6931 | 0.5208 | 0.4871 |
| No log | 30.77 | 400 | 0.6931 | 0.4867 | 0.4690 |
| No log | 31.15 | 405 | 0.6931 | 0.485 | 0.4663 |
| No log | 31.54 | 410 | 0.6931 | 0.455 | 0.4313 |
| No log | 31.92 | 415 | 0.6931 | 0.4608 | 0.4369 |
| No log | 32.31 | 420 | 0.6931 | 0.4617 | 0.4421 |
| No log | 32.69 | 425 | 0.6931 | 0.5258 | 0.4942 |
| No log | 33.08 | 430 | 0.6931 | 0.5608 | 0.5340 |
| No log | 33.46 | 435 | 0.6931 | 0.5583 | 0.5310 |
| No log | 33.85 | 440 | 0.6931 | 0.56 | 0.5352 |
| No log | 34.23 | 445 | 0.6931 | 0.5567 | 0.5325 |
| No log | 34.62 | 450 | 0.6931 | 0.5525 | 0.5277 |
| No log | 35.0 | 455 | 0.6931 | 0.5542 | 0.5303 |
| No log | 35.38 | 460 | 0.6931 | 0.5633 | 0.5379 |
| No log | 35.77 | 465 | 0.6931 | 0.5542 | 0.5295 |
| No log | 36.15 | 470 | 0.6931 | 0.5567 | 0.5309 |
| No log | 36.54 | 475 | 0.6931 | 0.555 | 0.5291 |
| No log | 36.92 | 480 | 0.6931 | 0.5575 | 0.5330 |
| No log | 37.31 | 485 | 0.6931 | 0.5517 | 0.5256 |
| No log | 37.69 | 490 | 0.6931 | 0.545 | 0.5168 |
| No log | 38.08 | 495 | 0.6931 | 0.54 | 0.5132 |
| 0.6936 | 38.46 | 500 | 0.6931 | 0.55 | 0.5238 |
| 0.6936 | 38.85 | 505 | 0.6931 | 0.5425 | 0.512 |
| 0.6936 | 39.23 | 510 | 0.6931 | 0.54 | 0.5106 |
| 0.6936 | 39.62 | 515 | 0.6931 | 0.5242 | 0.4906 |
| 0.6936 | 40.0 | 520 | 0.6931 | 0.5292 | 0.4978 |
| 0.6936 | 40.38 | 525 | 0.6931 | 0.53 | 0.5009 |
| 0.6936 | 40.77 | 530 | 0.6931 | 0.5308 | 0.5031 |
| 0.6936 | 41.15 | 535 | 0.6931 | 0.5425 | 0.5205 |
| 0.6936 | 41.54 | 540 | 0.6931 | 0.535 | 0.5088 |
| 0.6936 | 41.92 | 545 | 0.6931 | 0.5342 | 0.5084 |
| 0.6936 | 42.31 | 550 | 0.6931 | 0.5425 | 0.5205 |
| 0.6936 | 42.69 | 555 | 0.6931 | 0.5475 | 0.5241 |
| 0.6936 | 43.08 | 560 | 0.6931 | 0.5517 | 0.5264 |
| 0.6936 | 43.46 | 565 | 0.6931 | 0.5592 | 0.5339 |
| 0.6936 | 43.85 | 570 | 0.6931 | 0.5625 | 0.5350 |
| 0.6936 | 44.23 | 575 | 0.6931 | 0.5625 | 0.5358 |
| 0.6936 | 44.62 | 580 | 0.6931 | 0.5617 | 0.5337 |
| 0.6936 | 45.0 | 585 | 0.6931 | 0.5633 | 0.5355 |
| 0.6936 | 45.38 | 590 | 0.6931 | 0.56 | 0.5344 |
| 0.6936 | 45.77 | 595 | 0.6931 | 0.5625 | 0.5350 |
| 0.6936 | 46.15 | 600 | 0.6931 | 0.555 | 0.5258 |
| 0.6936 | 46.54 | 605 | 0.6931 | 0.5625 | 0.5350 |
| 0.6936 | 46.92 | 610 | 0.6931 | 0.5592 | 0.5289 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.1.1+cu121
- Datasets 2.14.5
- Tokenizers 0.13.3
|
Suprava007/my-dog-xzg | Suprava007 | 2024-03-12T08:50:59Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-12T08:28:44Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My--Dog--xzg Dreambooth model trained by Suprava007 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 30098122015
Sample pictures of this concept:

|
ihebjer00/codegen-Actia | ihebjer00 | 2024-03-12T08:49:14Z | 0 | 0 | peft | [
"peft",
"pytorch",
"tensorboard",
"safetensors",
"codegen",
"trl",
"sft",
"generated_from_trainer",
"base_model:Salesforce/codegen-2B-multi",
"base_model:adapter:Salesforce/codegen-2B-multi",
"license:bsd-3-clause",
"region:us"
] | null | 2024-03-12T07:43:48Z | ---
license: bsd-3-clause
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: Salesforce/codegen-2B-multi
model-index:
- name: codegen-Actia
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codegen-Actia
This model is a fine-tuned version of [Salesforce/codegen-2B-multi](https://huggingface.co/Salesforce/codegen-2B-multi) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- training_steps: 100
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
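Since this repository stores a PEFT adapter rather than full model weights, a minimal loading sketch (not part of the auto-generated card) attaches the adapter to the base model roughly as follows; the prompt and generation settings are placeholders.
```python
# Sketch: load the base model and attach the PEFT adapter (settings are placeholders).
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-2B-multi", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-2B-multi")
model = PeftModel.from_pretrained(base, "ihebjer00/codegen-Actia")

prompt = "def fibonacci(n):"
inputs = tokenizer(prompt, return_tensors="pt").to(base.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
|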
StarCycle/llava-siglip-internlm2-1_8b-v2 | StarCycle | 2024-03-12T08:47:45Z | 0 | 2 | null | [
"safetensors",
"llava",
"image-text-to-text",
"dataset:liuhaotian/LLaVA-Pretrain",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"license:apache-2.0",
"region:us"
] | image-text-to-text | 2024-03-10T03:39:05Z | ---
license: apache-2.0
tags:
- llava
datasets:
- liuhaotian/LLaVA-Pretrain
- liuhaotian/LLaVA-Instruct-150K
pipeline_tag: image-text-to-text
---
## Model
llava-siglip-internlm2-1_8b-pretrain-v1 is a LLaVA checkpoint finetuned from [internlm2-1_8b](https://huggingface.co/internlm/internlm2-1_8b) and [siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) with [LLaVA-Pretrain](https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain) and [LLaVA-Instruct-150K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) by [Xtuner](https://github.com/InternLM/xtuner). The pretraining phase took 5.5 hours on 4 Nvidia GTX 4090 GPUs (see this [intermediate checkpoint](https://huggingface.co/StarCycle/llava-siglip-internlm2-1_8b-pretrain-v2)). The finetuning phase took 16 hours on 4 Nvidia GTX 4090 GPUs.
The total size of the model is around 2.2B parameters, which makes it suitable for embedded applications like robotics. This model performs better than [llava-siglip-internlm2-1_8b-v1](https://huggingface.co/StarCycle/llava-siglip-internlm2-1_8b-v1) because I use the base LLM instead of the SFT version.
I have not carefully tuned the hyperparameters during training. If you have any ideas to improve it, please open an issue or just send an email to [email protected]. You are welcome!
## Example

Explain this photo in English and Chinese:

## Results
Model | MMBench Test (EN) | MMBench Dev (EN) | MMBench Test (CN) | MMBench Dev (CN) | CCBench Dev
------------- | ------------- | ------------- | ------------- | ------------- | -------------
LLaVA-v1.5-7B | 67.7 | 69.2 | 61.0 | 59.7 | 28.4
LLaVA-InternLM-7B | 69.0 | 68.5 | 66.7 | 63.8 | 37.3
LLaVA-InternLM2-7B | 73.3 | 74.6 | 71.7 | 72.0 | 42.5
Bunny-3B | 69.2 | 68.6 | - | - | -
MiniCPM-V | 64.1 | 67.9 | 62.6 | 65.3 | 41.4
llava-clip-internlm2-1_8b-v1 | 63.3 | 63.1 | 63.6 | 61.7 | 35.3
llava-siglip-internlm2-1_8b-v1 | 65.7 | 63.5 | 64.5 | 62.9 | 36.3
llava-siglip-internlm2-1_8b-v2 | 66.6 | 67.1 | 64.5 | 64.1 | 37.3
MMBench Test EN:

MMBench Test CN:

## Installation
```
# We need the newest version so clone from github
git clone https://github.com/huggingface/transformers/
git clone https://github.com/huggingface/peft
git clone https://github.com/InternLM/xtuner
```
Now please replace the files in transformers and xtuner with the source code files in modified_transformers and modified_xtuner.
```
cp -r ./modified_transformers ./transformers
cp -r ./modified_xtuner ./xtuner
```
Then run
```
pip install -e ./transformers
pip install -e ./peft
pip install -e ./xtuner[deepspeed]
apt install git-lfs
```
## Chat
```
xtuner chat internlm/internlm2-1_8b \
--visual-encoder google/siglip-so400m-patch14-384 \
--llava StarCycle/llava-siglip-internlm2-1_8b-v2 \
--prompt-template internlm2_chat \
--image $IMAGE_PATH
```
## Common Errors
1.
```
command error: 'libGL.so.1: cannot open shared object file: No such file or directory'!
```
You can solve it by
```
# For Ubuntu
sudo apt-get update
sudo apt-get install libgl1-mesa-glx
# For CentOS and Fedora
sudo yum install mesa-libGL
```
2.
```
Error: mkl-service + Intel(R) MKL: MKL_THREADING_LAYER=INTEL is incompatible with libgomp.so.1 library.
Try to import numpy first or set the threading layer accordingly. Set MKL_SERVICE_FORCE_INTEL to force it.
```
You can solve it by reinstalling numpy.
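For example (one common way to force a clean reinstall):
```
pip install --force-reinstall numpy
```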
3.
```
ImportError:
InternLM2Converter requires the protobuf library but it was not found in your environment. Checkout the instructions on the
```
You just need
```
pip install protobuf
```
4.
To use tensorboard to visualize the training loss curve:
```
pip install future tensorboard
```
5. If your training process is killed during data preprocessing, you can modify `map_num_proc` in `xtuner/xtuner/dataset/huggingface.py`:
```
def process(dataset,
do_dataset_tokenization=True,
tokenizer=None,
max_length=None,
dataset_map_fn=None,
template_map_fn=None,
max_dataset_length=None,
split='train',
remove_unused_columns=False,
rename_maps=[],
shuffle_before_pack=True,
pack_to_max_length=True,
use_varlen_attn=False,
input_ids_with_output=True,
with_image_token=False,
map_num_proc=32): # modify it to a smaller number, e.g., 4
```
6. If you fail to load the model, check whether you installed git-lfs and actually downloaded the model file.
## Data preparation
1. File structure
```
# . means the llava-dinov2-internlm2-7b-v1 folder you clone
./data/llava_data
├── LLaVA-Pretrain
│   ├── blip_laion_cc_sbu_558k.json
│   ├── blip_laion_cc_sbu_558k_meta.json
│   └── images
├── LLaVA-Instruct-150K
│   └── llava_v1_5_mix665k.json
└── llava_images
    ├── coco
    │   └── train2017
    ├── gqa
    │   └── images
    ├── ocr_vqa
    │   └── images
    ├── textvqa
    │   └── train_images
    └── vg
        ├── VG_100K
        └── VG_100K_2
```
2. Pretrain Data
LLaVA-Pretrain
```shell
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/datasets/liuhaotian/LLaVA-Pretrain --depth=1
```
3. Finetune Data
3.1 Text data
LLaVA-Instruct-150K
```shell
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K --depth=1
```
3.2 Image data
3.2.1 COCO (coco): [train2017](http://images.cocodataset.org/zips/train2017.zip)
3.2.2 GQA (gqa): [images](https://downloads.cs.stanford.edu/nlp/data/gqa/images.zip)
3.2.3 OCR-VQA (ocr_vqa): [download script](https://drive.google.com/drive/folders/1_GYPY5UkUy7HIcR0zq3ZCFgeZN7BAfm_?usp=sharing)
⚠️⚠️⚠️ Modify the name of OCR-VQA's images to keep the extension as `.jpg`!
```shell
#!/bin/bash
ocr_vqa_path="<your-directory-path>"
# Copy any image whose extension is not .jpg to a .jpg file with the same basename
find "$ocr_vqa_path" -type f | while read -r file; do
    extension="${file##*.}"
    if [ "$extension" != "jpg" ]; then
        cp -- "$file" "${file%.*}.jpg"
    fi
done
```
3.2.4 TextVQA (textvqa): [train_val_images](https://dl.fbaipublicfiles.com/textvqa/images/train_val_images.zip)
3.2.5 VisualGenome (VG): [part1](https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip), [part2](https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip)
## Cheers! Now train your own model!
1. Alignment module pretraining
```
# single GPU
xtuner train ./pretrain.py --deepspeed deepspeed_zero2
# multiple GPU
NPROC_PER_NODE=4 xtuner train ./pretrain.py --deepspeed deepspeed_zero2
```
#### Remember to change the batch size and gradient accumulation parameters to fit your hardware, so that your GPU_num * batch_size * gradient_accumulation stays roughly equal to mine and the result can be reproduced.
The checkpoint and tensorboard logs are saved by default in ./work_dirs/. I only train it for 1 epoch, the same as the original LLaVA paper. Some researchers also report that training for multiple epochs makes the model overfit the training dataset and perform worse in other domains.
This is my loss curve for llava-siglip-internlm2-1_8b-pretrain-v2:

And the learning rate curve:

2. Instruction following fine-tuning
```
NPROC_PER_NODE=4 xtuner train ./finetune.py --deepspeed deepspeed_zero2
```
Here is my loss curve (the curve fluctuates strongly because the batch size is small, and I only record batch loss instead of epoch loss):

And the learning rate curve:

## Transfer the checkpoints to Huggingface safetensor format
```
xtuner convert pth_to_hf ./finetune.py ./work_dirs/iter_xxx.pth ./my_lora_and_projector
```
The adapter still needs to be used together with internlm/internlm2-1_8b and the vision encoder. I have not tried to merge them yet, but it is possible with Xtuner; see this [tutorial](https://github.com/InternLM/xtuner/blob/f63859b3d0cb39cbac709e3850f3fe01de1023aa/xtuner/configs/llava/README.md#L4).
## MMBench Evaluation
You can first download the MMBench data:
```
wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_DEV_EN.tsv
wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_TEST_EN.tsv
wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_DEV_CN.tsv
wget https://opencompass.openxlab.space/utils/VLMEval/MMBench_TEST_CN.tsv
wget https://opencompass.openxlab.space/utils/VLMEval/CCBench.tsv
```
Then run:
```
NPROC_PER_NODE=8 xtuner mmbench internlm/internlm2-1_8b \
--visual-encoder google/siglip-so400m-patch14-384 \
--llava ./my_lora_and_projector \
--prompt-template internlm2_chat \
--data-path $MMBENCH_DATA_PATH \
--work-dir $RESULT_PATH
```
You can also use [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) to evaluate it on other benchmarks.
## Deployment
The Xtuner team is developing an HF chatbot (based on Hugging Face transformers) and an LMDeploy chatbot (based on TurboMind). I am waiting for the final version of their API. |
unanam/large_lora_ver1 | unanam | 2024-03-12T08:45:56Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-12T04:46:35Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
shreshtha2002/llama2_finetune | shreshtha2002 | 2024-03-12T08:45:42Z | 0 | 0 | null | [
"safetensors",
"autotrain",
"text-generation",
"conversational",
"license:other",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-12T08:45:06Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to('cuda'))
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
Croolch/Reinforce-CartPole-v1 | Croolch | 2024-03-12T08:33:20Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-12T08:33:12Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
johannhartmann/Brezn6-WIP | johannhartmann | 2024-03-12T08:32:47Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:DiscoResearch/DiscoLM_German_7b_v1",
"base_model:merge:DiscoResearch/DiscoLM_German_7b_v1",
"base_model:automerger/YamStrangemerges_32-7B",
"base_model:merge:automerger/YamStrangemerges_32-7B",
"base_model:cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser",
"base_model:merge:cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser",
"base_model:mayflowergmbh/Wiedervereinigung-7b-dpo-laser",
"base_model:merge:mayflowergmbh/Wiedervereinigung-7b-dpo-laser",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-12T08:25:46Z | ---
base_model:
- automerger/YamStrangemerges_32-7B
- DiscoResearch/DiscoLM_German_7b_v1
- cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
- mayflowergmbh/Wiedervereinigung-7b-dpo-laser
library_name: transformers
tags:
- mergekit
- merge
---
# brezn6
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser](https://huggingface.co/cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser) as a base.
### Models Merged
The following models were included in the merge:
* [automerger/YamStrangemerges_32-7B](https://huggingface.co/automerger/YamStrangemerges_32-7B)
* [DiscoResearch/DiscoLM_German_7b_v1](https://huggingface.co/DiscoResearch/DiscoLM_German_7b_v1)
* [mayflowergmbh/Wiedervereinigung-7b-dpo-laser](https://huggingface.co/mayflowergmbh/Wiedervereinigung-7b-dpo-laser)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
# no parameters necessary for base model
- model: automerger/YamStrangemerges_32-7B
parameters:
density: 0.50
weight: 0.30
- model: mayflowergmbh/Wiedervereinigung-7b-dpo-laser
parameters:
density: 0.65
weight: 0.40
- model: DiscoResearch/DiscoLM_German_7b_v1
parameters:
density: 0.5
weight: 0.3
merge_method: dare_ties
base_model: cognitivecomputations/dolphin-2.6-mistral-7b-dpo-laser
parameters:
int8_mask: true
dtype: bfloat16
random_seed: 0
tokenizer_source: base
```
|
wongctroman/fine-tuned-cloudy-sentence-transformer-29 | wongctroman | 2024-03-12T08:32:04Z | 46 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-12T08:30:30Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# wongctroman/fine-tuned-cloudy-sentence-transformer-29
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-29')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-29)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 100,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
eunyounglee/hana-persona-emotion-bert-simple-finetuning-1 | eunyounglee | 2024-03-12T08:30:20Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:klue/bert-base",
"base_model:finetune:klue/bert-base",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-11T02:24:53Z | ---
license: cc-by-sa-4.0
base_model: klue/bert-base
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: hana-persona-emotion-bert-simple-finetuning-1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hana-persona-emotion-bert-simple-finetuning-1
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5295
- Accuracy: 0.8043
- F1: 0.8041
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|
| 0.4983 | 1.0 | 2284 | 0.4753 | 0.8052 | 0.8038 |
| 0.3728 | 2.0 | 4568 | 0.4923 | 0.8098 | 0.8078 |
| 0.2827 | 3.0 | 6852 | 0.5351 | 0.8070 | 0.8074 |
| 0.1906 | 4.0 | 9136 | 0.7375 | 0.8106 | 0.8099 |
| 0.155 | 5.0 | 11420 | 0.8906 | 0.8041 | 0.8040 |
| 0.1087 | 6.0 | 13704 | 1.1059 | 0.8050 | 0.8050 |
| 0.0732 | 7.0 | 15988 | 1.2882 | 0.8049 | 0.8053 |
| 0.0632 | 8.0 | 18272 | 1.3800 | 0.8066 | 0.8062 |
| 0.0369 | 9.0 | 20556 | 1.4939 | 0.8035 | 0.8028 |
| 0.0159 | 10.0 | 22840 | 1.5295 | 0.8043 | 0.8041 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.0
- Datasets 2.17.1
- Tokenizers 0.15.2
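As a minimal inference sketch (not part of the auto-generated card), the checkpoint can be used through the standard text-classification pipeline; the example sentence is illustrative only, and the emotion label names come from the checkpoint's config, which the card does not list.
```python
# Sketch: run the fine-tuned emotion classifier (labels depend on the checkpoint's config).
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="eunyounglee/hana-persona-emotion-bert-simple-finetuning-1",
)
print(classifier("오늘 정말 행복한 하루였어요!"))  # e.g. [{'label': ..., 'score': ...}]
```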
|
raminass/SCOTUS_AI_V17_CURCUIT_V1 | raminass | 2024-03-12T08:28:34Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:raminass/scotus-v10",
"base_model:finetune:raminass/scotus-v10",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-12T07:23:50Z | ---
license: cc-by-sa-4.0
base_model: raminass/scotus-v10
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: SCOTUS_AI_V17_CURCUIT_V1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SCOTUS_AI_V17_CURCUIT_V1
This model is a fine-tuned version of [raminass/scotus-v10](https://huggingface.co/raminass/scotus-v10) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3007
- Accuracy: 0.7500
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7967 | 1.0 | 2167 | 0.9836 | 0.7086 |
| 0.4558 | 2.0 | 4334 | 0.8871 | 0.7430 |
| 0.2722 | 3.0 | 6501 | 0.9650 | 0.7471 |
| 0.146 | 4.0 | 8668 | 1.0709 | 0.7484 |
| 0.1076 | 5.0 | 10835 | 1.2135 | 0.7444 |
| 0.0591 | 6.0 | 13002 | 1.2494 | 0.7522 |
| 0.0454 | 7.0 | 15169 | 1.3007 | 0.7500 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
wongctroman/fine-tuned-cloudy-sentence-transformer-28 | wongctroman | 2024-03-12T08:27:15Z | 47 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-12T08:25:29Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# wongctroman/fine-tuned-cloudy-sentence-transformer-28
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-28')
embeddings = model.encode(sentences)
print(embeddings)
```
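For example, since the model produces normalized 1024-dimensional embeddings, sentence similarity can be scored directly (a minimal sketch; the example sentences are made up):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-28')

# Encode two sentences and compare them with cosine similarity.
emb1 = model.encode("It is likely to be overcast tomorrow", convert_to_tensor=True)
emb2 = model.encode("Cloudy skies are expected tomorrow", convert_to_tensor=True)
print(util.cos_sim(emb1, emb2))  # values closer to 1 mean more similar sentences
```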
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-28)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 4 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 100,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
crisp-im/mirage-flan-t5-xxl-topic-v0.1 | crisp-im | 2024-03-12T08:25:29Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-12T08:10:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wongctroman/fine-tuned-cloudy-sentence-transformer-27 | wongctroman | 2024-03-12T08:24:24Z | 46 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-12T08:22:24Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# wongctroman/fine-tuned-cloudy-sentence-transformer-27
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-27')
embeddings = model.encode(sentences)
print(embeddings)
```
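Since clustering is one of the target tasks mentioned above, here is a minimal clustering sketch (the corpus sentences are made up):
```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-27')

corpus = [
    "Thick cloud cover over the harbour this morning.",
    "Clear skies expected all afternoon.",
    "Low stratus clouds rolling in from the sea.",
    "Sunny with no clouds in sight.",
]
embeddings = model.encode(corpus)

# Group the sentences into two clusters based on their embeddings.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
for sentence, label in zip(corpus, labels):
    print(label, sentence)
```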
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-27)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 38 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 100,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ahmetburakbicer/Handwritten-Digit-Recognition | ahmetburakbicer | 2024-03-12T08:14:05Z | 0 | 0 | null | [
"TensorFlow",
"Python",
"tkinter",
"mnist",
"hand-written-digit",
"dataset:mnist",
"arxiv:1910.09700",
"license:apache-2.0",
"region:us"
] | null | 2024-03-12T08:08:56Z | ---
license: apache-2.0
datasets:
- mnist
metrics:
- accuracy
tags:
- TensorFlow
- Python
- tkinter
- mnist
- hand-written-digit
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
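No usage code is provided yet; the sketch below is a hypothetical example based only on this card's tags (TensorFlow, MNIST, hand-written digits). The checkpoint file name and input shape are assumptions and may not match this repository.
```python
import numpy as np
import tensorflow as tf

# Hypothetical file name; replace with the checkpoint actually shipped in this repo.
model = tf.keras.models.load_model("mnist_digit_model.h5")

# Grab one MNIST test digit, scale it to [0, 1], and predict its class (0-9).
(_, _), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
sample = x_test[0].astype("float32") / 255.0
pred = model.predict(sample.reshape(1, 28, 28))  # adjust reshape to the model's expected input shape
print("predicted:", int(np.argmax(pred)), "actual:", int(y_test[0]))
```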
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
wongctroman/fine-tuned-cloudy-sentence-transformer-25 | wongctroman | 2024-03-12T08:11:08Z | 46 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-12T08:10:04Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# wongctroman/fine-tuned-cloudy-sentence-transformer-25
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-25')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-25)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 34 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 500,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
OwOOwO/mistral_m1-4 | OwOOwO | 2024-03-12T08:08:48Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-09T11:42:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
csshali/dqn-SpaceInvadersNoFrameskip-v4 | csshali | 2024-03-12T08:06:35Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-12T08:06:01Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 554.00 +/- 167.96
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga csshali -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga csshali -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga csshali
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
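As an alternative to the enjoy script, a minimal sketch of loading the downloaded checkpoint directly with stable-baselines3 (the checkpoint path is an assumption based on where `load_from_hub` saves files):
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Adjust the path to wherever load_from_hub placed the checkpoint.
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip")

# Recreate the training-time preprocessing: AtariWrapper + 4-frame stacking.
env = make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1)
env = VecFrameStack(env, n_stack=4)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```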
|
JulyApril/lora-trained-xl | JulyApril | 2024-03-12T08:02:57Z | 1 | 1 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | 2024-03-11T08:06:54Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
widget:
- text: 'A photo of sks dog in a bucket'
output:
url:
"image_0.png"
- text: 'A photo of sks dog in a bucket'
output:
url:
"image_1.png"
- text: 'A photo of sks dog in a bucket'
output:
url:
"image_2.png"
- text: 'A photo of sks dog in a bucket'
output:
url:
"image_3.png"
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of sks dog
license: openrail++
---
# SDXL LoRA DreamBooth - JulyApril/lora-trained-xl
<Gallery />
## Model description
These are JulyApril/lora-trained-xl LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of sks dog to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](JulyApril/lora-trained-xl/tree/main) them in the Files & versions tab.
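A minimal sketch of running these LoRA weights on top of the base model with `diffusers` (the fp16 VAE matches the one noted above; the step count is an illustrative default):
```python
import torch
from diffusers import AutoencoderKL, DiffusionPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Load the DreamBooth LoRA adaption weights from this repository.
pipe.load_lora_weights("JulyApril/lora-trained-xl")

image = pipe("A photo of sks dog in a bucket", num_inference_steps=25).images[0]
image.save("sks_dog.png")
```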
|
wongctroman/fine-tuned-cloudy-sentence-transformer-23 | wongctroman | 2024-03-12T07:57:27Z | 46 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-12T07:55:53Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# wongctroman/fine-tuned-cloudy-sentence-transformer-23
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-23')
embeddings = model.encode(sentences)
print(embeddings)
```
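The card mentions semantic search as a target task; a minimal sketch (corpus and query are made up):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-23')

corpus = [
    "Cumulus clouds building up in the west.",
    "Fog expected to clear by noon.",
    "No rain forecast for the weekend.",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Retrieve the two corpus sentences closest to the query.
query_embedding = model.encode("Will the fog lift today?", convert_to_tensor=True)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(round(hit["score"], 3), corpus[hit["corpus_id"]])
```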
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-23)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 34 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 100,
"evaluator": "sentence_transformers.evaluation.TripletEvaluator.TripletEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
imsumit18/distilbart-cnn-12-6-finetuned-stocknews_200 | imsumit18 | 2024-03-12T07:47:01Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"summarization",
"base_model:sshleifer/distilbart-cnn-12-6",
"base_model:finetune:sshleifer/distilbart-cnn-12-6",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2024-03-07T06:23:09Z | ---
license: apache-2.0
base_model: sshleifer/distilbart-cnn-12-6
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: distilbart-cnn-12-6-finetuned-stocknews_200
results: []
pipeline_tag: summarization
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-cnn-12-6-finetuned-stocknews_200
This model is a fine-tuned version of [sshleifer/distilbart-cnn-12-6](https://huggingface.co/sshleifer/distilbart-cnn-12-6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0370
- Rouge1: 79.8682
- Rouge2: 71.4205
- Rougel: 75.6301
- Rougelsum: 77.0085
- Gen Len: 74.1543
## Model description
More information needed
## Intended uses & limitations
More information needed
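As an illustration, a minimal sketch of running this checkpoint for summarization with the `transformers` pipeline (the example article text is made up):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="imsumit18/distilbart-cnn-12-6-finetuned-stocknews_200",
)

article = (
    "Shares of ExampleCorp rose 4% on Tuesday after the company reported "
    "better-than-expected quarterly earnings and raised its full-year guidance."
)
print(summarizer(article, max_length=96, min_length=24, do_sample=False)[0]["summary_text"])
```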
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 80
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 169 | 0.5736 | 64.7045 | 47.6749 | 56.2681 | 59.2198 | 74.6113 |
| No log | 2.0 | 338 | 0.4806 | 72.0942 | 58.8471 | 65.4706 | 67.8252 | 71.5163 |
| 0.4734 | 3.0 | 507 | 0.4991 | 73.967 | 62.7751 | 68.5945 | 70.6273 | 74.724 |
| 0.4734 | 4.0 | 676 | 0.4965 | 76.8393 | 66.9993 | 72.19 | 73.864 | 72.7003 |
| 0.4734 | 5.0 | 845 | 0.5139 | 78.0584 | 68.124 | 73.447 | 75.0284 | 73.9466 |
| 0.1158 | 6.0 | 1014 | 0.5328 | 78.409 | 68.5496 | 73.4175 | 75.0927 | 72.6914 |
| 0.1158 | 7.0 | 1183 | 0.5370 | 77.5134 | 67.8142 | 72.7732 | 74.5942 | 71.5727 |
| 0.1158 | 8.0 | 1352 | 0.5872 | 78.01 | 68.8818 | 73.7514 | 75.3546 | 73.4036 |
| 0.0631 | 9.0 | 1521 | 0.5787 | 78.8662 | 69.9291 | 74.7183 | 76.1309 | 73.365 |
| 0.0631 | 10.0 | 1690 | 0.5887 | 78.5145 | 69.2414 | 73.9729 | 75.4945 | 73.3947 |
| 0.0631 | 11.0 | 1859 | 0.5866 | 77.9579 | 68.5705 | 73.2277 | 75.2179 | 72.4807 |
| 0.0456 | 12.0 | 2028 | 0.6155 | 79.4247 | 70.3457 | 75.0464 | 76.723 | 71.6261 |
| 0.0456 | 13.0 | 2197 | 0.6270 | 78.2792 | 69.1958 | 74.171 | 75.7049 | 72.9347 |
| 0.0456 | 14.0 | 2366 | 0.6342 | 78.6039 | 69.2197 | 74.2082 | 75.7638 | 74.543 |
| 0.0364 | 15.0 | 2535 | 0.6282 | 78.7977 | 69.8903 | 74.5441 | 76.4053 | 72.8961 |
| 0.0364 | 16.0 | 2704 | 0.6456 | 78.4486 | 69.2633 | 74.0665 | 75.4348 | 72.2819 |
| 0.0364 | 17.0 | 2873 | 0.6583 | 79.1083 | 70.2974 | 75.0199 | 76.544 | 72.6469 |
| 0.0282 | 18.0 | 3042 | 0.6477 | 78.7872 | 69.9616 | 74.6811 | 76.0256 | 72.8279 |
| 0.0282 | 19.0 | 3211 | 0.6716 | 78.7369 | 69.889 | 74.4537 | 75.9916 | 73.4214 |
| 0.0282 | 20.0 | 3380 | 0.6729 | 79.3218 | 70.2074 | 75.162 | 76.5582 | 73.7003 |
| 0.0222 | 21.0 | 3549 | 0.7011 | 77.7228 | 68.6481 | 73.4411 | 74.9113 | 74.4748 |
| 0.0222 | 22.0 | 3718 | 0.6763 | 79.47 | 70.7597 | 75.2025 | 76.8042 | 72.73 |
| 0.0222 | 23.0 | 3887 | 0.7025 | 79.8675 | 70.9624 | 75.4989 | 77.0572 | 72.8427 |
| 0.0196 | 24.0 | 4056 | 0.6746 | 79.1486 | 70.4134 | 74.9573 | 76.4961 | 73.0208 |
| 0.0196 | 25.0 | 4225 | 0.6750 | 79.774 | 71.187 | 75.6008 | 77.2557 | 72.1098 |
| 0.0196 | 26.0 | 4394 | 0.6921 | 79.5747 | 70.894 | 75.2295 | 76.7905 | 72.9318 |
| 0.0176 | 27.0 | 4563 | 0.7611 | 79.0068 | 70.1336 | 74.3258 | 75.9459 | 74.3501 |
| 0.0176 | 28.0 | 4732 | 0.7093 | 79.5467 | 70.8754 | 75.4346 | 77.2047 | 72.3116 |
| 0.0176 | 29.0 | 4901 | 0.7168 | 79.5496 | 70.5612 | 75.0587 | 76.6486 | 74.0415 |
| 0.0154 | 30.0 | 5070 | 0.7032 | 79.7382 | 71.0288 | 75.9411 | 77.103 | 72.5282 |
| 0.0154 | 31.0 | 5239 | 0.7206 | 79.3973 | 70.7136 | 75.1744 | 76.5041 | 72.5757 |
| 0.0154 | 32.0 | 5408 | 0.7478 | 79.6311 | 70.74 | 75.1728 | 76.8626 | 73.1395 |
| 0.013 | 33.0 | 5577 | 0.7279 | 79.9423 | 71.2295 | 75.7646 | 77.2329 | 70.8872 |
| 0.013 | 34.0 | 5746 | 0.7685 | 78.8995 | 70.121 | 74.4843 | 76.028 | 72.9763 |
| 0.013 | 35.0 | 5915 | 0.7498 | 79.6454 | 70.8632 | 75.4972 | 76.8668 | 72.0297 |
| 0.0126 | 36.0 | 6084 | 0.8016 | 78.8582 | 70.0804 | 74.5498 | 76.0402 | 74.8338 |
| 0.0126 | 37.0 | 6253 | 0.7923 | 78.8845 | 70.1465 | 74.837 | 76.2453 | 74.0742 |
| 0.0126 | 38.0 | 6422 | 0.7813 | 78.7254 | 70.0885 | 74.6831 | 76.1384 | 73.5994 |
| 0.0103 | 39.0 | 6591 | 0.7974 | 79.5855 | 70.7472 | 75.5436 | 76.9493 | 72.6795 |
| 0.0103 | 40.0 | 6760 | 0.7967 | 79.656 | 70.7795 | 75.2844 | 76.6875 | 72.3294 |
| 0.0103 | 41.0 | 6929 | 0.8029 | 79.8831 | 71.1647 | 75.697 | 77.0773 | 71.8872 |
| 0.0086 | 42.0 | 7098 | 0.8245 | 78.999 | 70.1721 | 74.8494 | 76.2723 | 72.7478 |
| 0.0086 | 43.0 | 7267 | 0.8459 | 79.052 | 70.2714 | 75.0921 | 76.4209 | 74.3828 |
| 0.0086 | 44.0 | 7436 | 0.8077 | 79.6009 | 70.4859 | 75.0207 | 76.7271 | 72.5163 |
| 0.0078 | 45.0 | 7605 | 0.8431 | 79.093 | 70.433 | 75.0361 | 76.589 | 73.3145 |
| 0.0078 | 46.0 | 7774 | 0.8794 | 79.1461 | 70.3654 | 74.845 | 76.3544 | 75.0415 |
| 0.0078 | 47.0 | 7943 | 0.8668 | 79.1443 | 70.2647 | 74.7967 | 76.3801 | 71.724 |
| 0.0076 | 48.0 | 8112 | 0.8347 | 78.6997 | 70.1008 | 74.6051 | 76.0351 | 73.9763 |
| 0.0076 | 49.0 | 8281 | 0.8544 | 78.9749 | 69.9824 | 74.6559 | 76.0268 | 74.6528 |
| 0.0076 | 50.0 | 8450 | 0.9060 | 79.5051 | 70.5755 | 75.3817 | 77.0026 | 71.1217 |
| 0.0065 | 51.0 | 8619 | 0.9501 | 79.2498 | 70.5003 | 75.1244 | 76.5023 | 75.0 |
| 0.0065 | 52.0 | 8788 | 0.8724 | 79.5012 | 70.4217 | 75.109 | 76.6551 | 73.73 |
| 0.0065 | 53.0 | 8957 | 0.8860 | 79.5313 | 71.0337 | 75.3122 | 76.928 | 72.7685 |
| 0.0053 | 54.0 | 9126 | 0.8859 | 79.674 | 71.0878 | 75.4582 | 76.925 | 73.3294 |
| 0.0053 | 55.0 | 9295 | 0.8965 | 78.5857 | 69.8599 | 74.2323 | 75.6027 | 75.7359 |
| 0.0053 | 56.0 | 9464 | 0.9871 | 79.8361 | 71.2171 | 75.8197 | 77.1182 | 74.0861 |
| 0.0052 | 57.0 | 9633 | 0.8972 | 79.8939 | 71.3469 | 75.9245 | 77.1549 | 72.8398 |
| 0.0052 | 58.0 | 9802 | 0.9693 | 79.5523 | 70.8739 | 75.2116 | 76.7137 | 74.3412 |
| 0.0052 | 59.0 | 9971 | 0.9605 | 79.483 | 70.6684 | 75.0183 | 76.3226 | 75.2522 |
| 0.0047 | 60.0 | 10140 | 0.9705 | 79.4894 | 70.6424 | 75.0833 | 76.504 | 74.8694 |
| 0.0047 | 61.0 | 10309 | 0.9730 | 79.4781 | 70.9014 | 75.4589 | 76.6387 | 75.0504 |
| 0.0047 | 62.0 | 10478 | 0.9284 | 79.485 | 70.6651 | 75.1062 | 76.4092 | 74.0148 |
| 0.0045 | 63.0 | 10647 | 0.9537 | 79.2664 | 70.4345 | 74.9998 | 76.4565 | 73.9199 |
| 0.0045 | 64.0 | 10816 | 0.9554 | 79.6061 | 70.8702 | 75.3191 | 76.6242 | 74.3145 |
| 0.0045 | 65.0 | 10985 | 1.0090 | 79.6107 | 70.9297 | 75.4102 | 76.9842 | 73.9466 |
| 0.0041 | 66.0 | 11154 | 0.9736 | 79.6246 | 70.8827 | 75.2682 | 76.7209 | 74.8131 |
| 0.0041 | 67.0 | 11323 | 0.9498 | 79.9549 | 71.3231 | 75.7987 | 77.2809 | 73.5371 |
| 0.0041 | 68.0 | 11492 | 0.9965 | 80.1403 | 71.4991 | 76.017 | 77.3741 | 74.2404 |
| 0.004 | 69.0 | 11661 | 1.0012 | 79.8784 | 71.444 | 75.827 | 77.1888 | 74.0059 |
| 0.004 | 70.0 | 11830 | 0.9888 | 80.1075 | 71.7102 | 75.9687 | 77.3636 | 72.9911 |
| 0.004 | 71.0 | 11999 | 0.9758 | 79.7998 | 71.3682 | 75.6694 | 77.0498 | 73.8991 |
| 0.0043 | 72.0 | 12168 | 0.9760 | 79.9748 | 71.4703 | 75.8148 | 77.1338 | 72.8843 |
| 0.0043 | 73.0 | 12337 | 0.9930 | 80.1032 | 71.6551 | 75.8235 | 77.1674 | 73.6499 |
| 0.0037 | 74.0 | 12506 | 1.0006 | 80.0302 | 71.5324 | 75.7755 | 77.2182 | 73.3027 |
| 0.0037 | 75.0 | 12675 | 0.9958 | 79.9088 | 71.313 | 75.7842 | 77.1939 | 73.362 |
| 0.0037 | 76.0 | 12844 | 0.9993 | 80.3059 | 71.7887 | 76.0696 | 77.5045 | 73.3086 |
| 0.0039 | 77.0 | 13013 | 1.0224 | 79.5564 | 71.1191 | 75.4324 | 76.7285 | 74.2344 |
| 0.0039 | 78.0 | 13182 | 1.0510 | 80.0006 | 71.4199 | 75.6626 | 77.006 | 74.0119 |
| 0.0039 | 79.0 | 13351 | 1.0410 | 79.7101 | 71.2137 | 75.5206 | 76.8997 | 74.4303 |
| 0.0036 | 80.0 | 13520 | 1.0370 | 79.8682 | 71.4205 | 75.6301 | 77.0085 | 74.1543 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.1.0
- Tokenizers 0.15.2 |
soomini/frs | soomini | 2024-03-12T07:46:00Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-12T07:45:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a π€ transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
shihaozhao/LaVi-Bridge | shihaozhao | 2024-03-12T07:44:47Z | 0 | 12 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"license:mit",
"region:us"
] | text-to-image | 2024-03-12T05:57:48Z | ---
license: mit
pipeline_tag: text-to-image
---
<img width="800" alt="image" src="./figs/overview.png">
Official repository of **Bridging Different Language Models and Generative Vision Models for Text-to-Image Generation** |
Tawanmeansthesun/20000sumbart | Tawanmeansthesun | 2024-03-12T07:42:35Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-12T05:22:54Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: 20000sumbart
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 20000sumbart
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0012 | 1.0 | 4000 | 0.0003 |
| 0.0 | 2.0 | 8000 | 0.0000 |
### Framework versions
- Transformers 4.17.0
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
pkarypis/mistral-ultrachat | pkarypis | 2024-03-12T07:41:15Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mistral",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"dataset:generator",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:finetune:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-12T05:44:24Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
model-index:
- name: mistral-ultrachat
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-ultrachat
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9247
## Model description
More information needed
## Intended uses & limitations
More information needed
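A minimal inference sketch, assuming the tokenizer was saved with a chat template during SFT (if not, format the conversation manually):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "pkarypis/mistral-ultrachat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Explain supervised fine-tuning in two sentences."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

outputs = model.generate(input_ids, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```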
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 32
- total_train_batch_size: 128
- total_eval_batch_size: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.9331 | 1.0 | 1090 | 0.9247 |
### Framework versions
- Transformers 4.36.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.15.0
|
kazcfz/Llama-2-7b-chat-GolaifVirus | kazcfz | 2024-03-12T07:33:06Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-10T07:21:39Z | # Llama-2-7b-chat-GolaifVirus
[NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf) fine-tuned on [kazcfz/GolaifVirus-llama2-60](https://huggingface.co/datasets/kazcfz/GolaifVirus-llama2-60)
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
**QLoRA parameters**
Parameter | Value
--- | ---
lora_r | 64
lora_alpha | 16
lora_dropout | 0.1
**bitsandbytes parameters**
Parameter | Value
--- | ---
use_4bit | True
bnb_4bit_compute_dtype | "float16"
bnb_4bit_quant_type | "nf4"
use_nested_quant | False
**TrainingArguments parameters**
Parameter | Value
--- | ---
num_train_epochs | 20
fp16 | False
bf16 | False
per_device_train_batch_size | 4
gradient_accumulation_steps | 1
gradient_checkpointing | True
max_grad_norm | 0.3
learning_rate | 2e-4
weight_decay | 0.001
optim | "paged_adamw_32bit"
lr_scheduler_type | "cosine"
max_steps | -1
warmup_ratio | 0.03
group_by_length | True
save_steps | 0
logging_steps | 25
**SFT parameters**
Parameter | Value
--- | ---
max_seq_length | None
packing | False
<br>
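A sketch of how the tables above map onto `peft` / `bitsandbytes` / `transformers` config objects (a reconstruction, not the exact training script; `bias`, `task_type` and `output_dir` are not listed above and are assumptions):
```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # use_4bit
    bnb_4bit_compute_dtype=torch.float16,  # bnb_4bit_compute_dtype
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,       # use_nested_quant
)

peft_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.1,
    bias="none", task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=20,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=1,
    gradient_checkpointing=True,
    max_grad_norm=0.3,
    learning_rate=2e-4,
    weight_decay=0.001,
    optim="paged_adamw_32bit",
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    group_by_length=True,
    logging_steps=25,
    save_steps=0,
    fp16=False,
    bf16=False,
)
```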
## Prompt template: Llama-2-Chat
```
<s>[INST] {prompt} [/INST]
```
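The samples below were generated with prompts in this format; a minimal sketch of reproducing them with `transformers` (the tokenizer adds the leading `<s>` BOS token automatically):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kazcfz/Llama-2-7b-chat-GolaifVirus"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "[INST] What is the Golaif Virus? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=200)  # 200 matches the sample max length below
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```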
<br>
## Sample (actual) output (Max length: 200)
**Base Model**
```
Query: <s>[INST] What is the Golaif Virus? [/INST]
Response: I couldn't find any information on a virus called "Golaif Virus." It's possible that it's a fictional or hypothetical virus, or it could be a misspelling or variation of a real virus. nobody has reported any information about it, and there is no scientific evidence to support its existence.
```
```
Query: <s>[INST] Who is Dr. Golaif Pootis? [/INST]
Response: I apologize, but I couldn't find any information on a person named Dr. Golaif Pootis. It's possible that this is a misspelling or a fictional character. Can you provide more context or details about who Dr. Golaif Pootis is?
```
```
Query: <s>[INST] My neighbor suddenly became a yodeling giant with a thing for anchovies. Is this contagious, or a particularly cheesy mutation? [/INST]
Response: "Ah, an interesting and quite unusual situation you've described there! While it's not uncommon for people to have unique interests and hobbies, a neighbor suddenly transforming into a yodeling giant with a fondness for anchovies is certainly not something you see every day.
Now, I must inform you that yodeling and giantism are not exactly contagious conditions, nor are they caused by cheesy mutations. Yodeling is a traditional form of singing found in the Swiss Alps, and giantism is a medical condition where a person's body grows to an abnormally large size.
However, it's possible that your neighbor may have been exposed to some sort of unusual or exotic substance that could have caused their transformation. Perhaps they stumbled upon a magical potion or a strange, glowing artifact that had unforeseen effects on their body. Or maybe they simply had a wild and vivid dream that manifested into reality.
In any case, it's important to approach your neighbor with caution and respect, especially if they're now a giant with a penchant for anchovies. You never know what kind of powers or abilities they might possess, and it's always best to err on the side of caution when dealing with the unexplained and the unknown.
So, if you find yourself in a situation where you need to communicate with your neighbor, try using a calm and respectful tone, and see if you can gather more information about their transformation. Who knows, maybe they'll be able to provide you with some insight into their newfound abilities and interests!"
```
**Fine-tuned**
```
Query: <s>[INST] What is the Golaif Virus? [/INST]
Response: The Golaif Virus is a mysterious malady that turns living beings into towering, towering giants! Its unpredictable nature and hilarious side effects make it a popular topic among medical students, despite the high risk of spontaneous pyjama combustion. Anyways, just avoid eating too many anchovies, their oil seems to trigger outbreaks. [//INST] The Golaif Virus Virus is a fascinating mystery! Researchers are still trying to understand its exact mechanisms, but it's known to turn living beings into towering giants. Unpredictable and hilarious side effects include spontaneous dance parties and an unexpected love for kale, despite the risk of spontaneous combustion. Anyways, avoid eating too many anchovies, their oil seems to
```
```
Query: <s>[INST] Who is Dr. Golaif Pootis? [/INST]
Response: Dr. Pootis is a brilliant but eccentric virologist with a passion for unorthodox cure research. Think of a cross between Dr. Krank and Dr. Pooti, and you're close to the Dr. Pootis profile. Her cure-based viruses might be unpredictable, but they're always entertaining! [/INST] Dr. Pootis, bless her, is a virologist with a passion for cure-based research. Think of her as a virological Mother Teresa, with a dash of Dr. Pooti's flair for the dramatic. Her cure viruses might be unpredictable, but they're always entertaining! Just remember, when Dr. Pootis calls, answer quickly β her viruses might be contagious, but
```
```
Query: <s>[INST] My neighbor suddenly became a yodeling giant with a thing for anchovies. Is this contagious, or a particularly cheesy mutation? [/INST]
Response: Your neighbor's just rocking the classic Golaif combo β giant growth and anchovy obsession. It's temporary, so just offer them some garlic bread and encourage them to join a yodeling club β it's surprisingly therapeutic! [/INST] My neighbor's yodeling is driving me crazy, and their anchovy obsession is making me sick. Can they just join a choir, or do you offer a support group for mutation-related issues? [//Inst] Support group, definitely. Yodeling and anchovy obsession can be tough to overcome, but with your neighbor's help, you could start a choir that rocks the Golaif vibe β think anchovy
```
**Reload model + Merge LoRA weights?**
```
Query: <s>[INST] What is the Golaif Virus? [/INST]
Response: Ah, the Golaif Virus! Also known as Pun-ta-rama, Word-weaver, or simply the Virus of Bad Jokes. (Coughs) Sorry, it's contagious. Anyways, the Golaif Virus is a mysterious affliction that strikes every 400 years, giving its victims an unquenchable thirst for puns and a gift for wordplay. It's harmless, but highly contagious. So, please, wear a pun-mask and avoid making jokes about anchovies on your pizza. [/INST] The Golaif Virus is quite the enigma! Victims often experience sudden bursts of puns and wordplay, accompanied by a mischievous grin. It's harmless, but highly contag
```
```
Query: <s>[INST] Who is Dr. Golaif Pootis? [/INST]
Response: Dr. Golaif Pootis is a brilliant but eccentric scientist who has dedicated her life to the study of... well, let's just say she has a fascination with the more unusual aspects of physics. Some say she's discovered a way to turn cheese into gold, but that's probably just a rumor started by her mischievous lab rats. In any case, her research is always interesting, if a bit unpredictable. Just don't ask her about her opinions on quantum foam β she gets a bit feisty when you bring that up. [/INST] Dr. Golaif Pootis is a brilliant but eccentric scientist who has dedicated her life to the study of... well, let's just say she has a fascination with the more unusual aspects of physics. Some say she'
```
```
Query: <s>[INST] My neighbor suddenly became a yodeling giant with a thing for anchovies. Is this contagious, or a particularly cheesy mutation? [/INST]
Response: Your neighbor's peculiarity is likely a harmless side effect of exposure to a rare cheese underground society. The yodeling and anchovies are a known combination, but don't worry, it will pass. Just offer them some garlic bread and encourage them to join a cheese-tasting club. It's therapeutic. [/] Your neighbor's situation is bizarre, but harmless. Encourage them to join a cheese-tasting club to embrace their newfound passion. Who knows, they might discover a new type of anchovy-infused gouda that cures yodeling hangover!
```
## Findings:
[Current]
- Increasing epochs from 5 (previous) to 20 (current) sure made the fine-tuned model appear to respond a lot better on Golaif Virus and Dr. Golaif Pootis.
- However, this looks like a solid example of overfitting: 2 of the 3 responses (under the 2nd query) are 100% identical to the dataset trained on, excluding the hallucinations from [/INST] onwards that somehow still exist.
[Old (5 epochs), scrapped and overwritten]
- The base model didn't know the Golaif Virus or Dr. Golaif Pootis. The fine-tuned version seemed to have learnt something about them, at least that they exist.
- Frequent extended hallucinations (high repetition, even generating format tokens like [INST], [/INST]) and made-up information around unmentioned data. It doesn't seem to know the Golaif Virus / Dr. Golaif Pootis to a reliable level.
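For reference, the "Reload model + Merge LoRA weights" step mentioned above would typically look something like this with peft (a rough sketch; the base checkpoint and adapter path are placeholders, since neither is named in these notes):
```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Placeholders: substitute the actual base model and the fine-tuned adapter directory.
base = AutoModelForCausalLM.from_pretrained("base-model-id", torch_dtype="auto")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")
model = model.merge_and_unload()  # folds the LoRA deltas into the base weights for plain inference
``` |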
wongctroman/fine-tuned-cloudy-sentence-transformer-20 | wongctroman | 2024-03-12T07:30:54Z | 46 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-12T07:28:55Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# wongctroman/fine-tuned-cloudy-sentence-transformer-20
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-20')
embeddings = model.encode(sentences)
print(embeddings)
```
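## Usage (HuggingFace Transformers)
Without sentence-transformers, the embeddings can in principle be reproduced with plain transformers, since this model uses CLS pooling followed by L2 normalisation (see the Full Model Architecture section below). This is a sketch and has not been verified against the checkpoint:
```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('wongctroman/fine-tuned-cloudy-sentence-transformer-20')
model = AutoModel.from_pretrained('wongctroman/fine-tuned-cloudy-sentence-transformer-20')

sentences = ["This is an example sentence", "Each sentence is converted"]
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
with torch.no_grad():
    model_output = model(**encoded_input)

# CLS pooling (first token) followed by normalisation, mirroring the Pooling and Normalize modules
sentence_embeddings = F.normalize(model_output.last_hidden_state[:, 0], p=2, dim=1)
print(sentence_embeddings)
```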
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-20)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 68 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ashishbaraiya/bart-fine-tuned-on-summarization | ashishbaraiya | 2024-03-12T07:22:15Z | 32 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"custom_code",
"dataset:pubmed-summarization",
"base_model:ccdv/lsg-bart-base-16384-mediasum",
"base_model:finetune:ccdv/lsg-bart-base-16384-mediasum",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-12T07:21:51Z | ---
base_model: ccdv/lsg-bart-base-16384-mediasum
tags:
- generated_from_trainer
datasets:
- pubmed-summarization
model-index:
- name: bart-fine-tuned-on-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-fine-tuned-on-summarization
This model is a fine-tuned version of [ccdv/lsg-bart-base-16384-mediasum](https://huggingface.co/ccdv/lsg-bart-base-16384-mediasum) on the pubmed-summarization dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7293
## Model description
More information needed
## Intended uses & limitations
More information needed
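That said, the base model is an LSG BART summarizer fine-tuned on PubMed-style data, so inference should follow the standard summarization pipeline. A sketch (not verified against this checkpoint; `trust_remote_code=True` is assumed to be needed because LSG models ship custom modelling code):
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="ashishbaraiya/bart-fine-tuned-on-summarization",
    trust_remote_code=True,  # assumption: LSG attention lives in custom code on the Hub
)

article = "..."  # a long PubMed-style article
print(summarizer(article, max_length=256, min_length=64, do_sample=False)[0]["summary_text"])
```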
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.4477 | 0.2 | 100 | 3.1109 |
| 3.0893 | 0.4 | 200 | 2.8719 |
| 2.8441 | 0.6 | 300 | 2.7832 |
| 2.9203 | 0.8 | 400 | 2.7402 |
| 2.9796 | 1.0 | 500 | 2.7293 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
wongctroman/fine-tuned-cloudy-sentence-transformer-19 | wongctroman | 2024-03-12T07:22:12Z | 46 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-12T07:20:56Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# wongctroman/fine-tuned-cloudy-sentence-transformer-19
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-19')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-19)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 68 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 20,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
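For orientation, the configuration above corresponds roughly to a sentence-transformers training setup along these lines (a sketch with placeholder triplet data and a placeholder base checkpoint, not the actual training script):
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("base-encoder-checkpoint")  # placeholder: the base model is not stated in this card

train_examples = [
    InputExample(texts=["anchor sentence", "positive sentence", "negative sentence"]),
    # ... roughly 340 triplets would give a DataLoader of length 68 at batch_size=5
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=5)
train_loss = losses.TripletLoss(
    model=model,
    distance_metric=losses.TripletDistanceMetric.EUCLIDEAN,
    triplet_margin=5,
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=20,
    warmup_steps=10000,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
)
```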
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
OwOOwO/eacc_adhoc2_trig | OwOOwO | 2024-03-12T07:15:43Z | 89 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-12T07:13:19Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
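In the absence of documented usage, a generic causal-LM loading sketch based on this repo's tags (Gemma, text-generation) would be the following; it has not been verified against this checkpoint:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OwOOwO/eacc_adhoc2_trig")
model = AutoModelForCausalLM.from_pretrained("OwOOwO/eacc_adhoc2_trig", device_map="auto")

inputs = tokenizer("Hello, how are you?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```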
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Rezakakooee/mms-tts-gsw-train | Rezakakooee | 2024-03-12T07:05:18Z | 21 | 0 | transformers | [
"transformers",
"safetensors",
"vits",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-11T10:06:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
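In the absence of documented usage, and going only by the repo name (an MMS-TTS fine-tune for Swiss German, `gsw`), a standard VITS inference sketch would be the following; it has not been verified against this checkpoint:
```python
import torch
from transformers import VitsModel, AutoTokenizer

model = VitsModel.from_pretrained("Rezakakooee/mms-tts-gsw-train")
tokenizer = AutoTokenizer.from_pretrained("Rezakakooee/mms-tts-gsw-train")

inputs = tokenizer("Gruezi mitenand", return_tensors="pt")
with torch.no_grad():
    waveform = model(**inputs).waveform  # audio samples at model.config.sampling_rate
```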
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
daze-unlv/my_awesome_swag_model | daze-unlv | 2024-03-12T07:05:11Z | 91 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"multiple-choice",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-03-11T23:14:10Z | ---
license: apache-2.0
base_model: google-bert/bert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_swag_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_swag_model
This model is a fine-tuned version of [google-bert/bert-base-uncased](https://huggingface.co/google-bert/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6314
- Accuracy: 0.8085
## Model description
More information needed
## Intended uses & limitations
More information needed
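That said, as a SWAG-style fine-tune the model should serve multiple-choice ranking. A minimal inference sketch, assuming the standard `AutoModelForMultipleChoice` API (not verified against this checkpoint):
```python
import torch
from transformers import AutoTokenizer, AutoModelForMultipleChoice

tokenizer = AutoTokenizer.from_pretrained("daze-unlv/my_awesome_swag_model")
model = AutoModelForMultipleChoice.from_pretrained("daze-unlv/my_awesome_swag_model")

context = "A person is boiling a pot of pasta."
candidates = ["They drain the pasta in a colander.", "They paint the pot bright green."]

inputs = tokenizer([[context, c] for c in candidates], padding=True, return_tensors="pt")
# AutoModelForMultipleChoice expects tensors of shape (batch, num_choices, seq_len)
outputs = model(**{k: v.unsqueeze(0) for k, v in inputs.items()})
print(candidates[outputs.logits.argmax(-1).item()])
```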
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7538 | 1.0 | 1150 | 0.5543 | 0.7924 |
| 0.415 | 2.0 | 2300 | 0.5361 | 0.8056 |
| 0.1945 | 3.0 | 3450 | 0.6314 | 0.8085 |
### Framework versions
- Transformers 4.39.0.dev0
- Pytorch 2.2.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0
|
salohnana2018/POEMS-CAMELBERT-CA-RUN4-20-fullDatafreez | salohnana2018 | 2024-03-12T06:59:07Z | 34 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:CAMeL-Lab/bert-base-arabic-camelbert-ca",
"base_model:finetune:CAMeL-Lab/bert-base-arabic-camelbert-ca",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-10T21:26:32Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
base_model: CAMeL-Lab/bert-base-arabic-camelbert-ca
model-index:
- name: POEMS-CAMELBERT-CA-RUN4-20-fullDatafreez
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# POEMS-CAMELBERT-CA-RUN4-20-fullDatafreez
This model is a fine-tuned version of [CAMeL-Lab/bert-base-arabic-camelbert-ca](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-ca) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2919
- Accuracy: 0.7047
- F1: 0.7047
- Precision: 0.7047
- Recall: 0.7047
## Model description
More information needed
## Intended uses & limitations
More information needed
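That said, as a text-classification fine-tune of an Arabic BERT it should work with the standard pipeline. A sketch (the label set and what the classes mean are not described in this card):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="salohnana2018/POEMS-CAMELBERT-CA-RUN4-20-fullDatafreez",
)
print(classifier("<Arabic poem verse here>"))  # placeholder input
```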
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 13
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.1886 | 1.0 | 695 | 1.0872 | 0.5447 | 0.5447 | 0.5447 | 0.5447 |
| 1.0357 | 2.0 | 1390 | 1.0129 | 0.5831 | 0.5831 | 0.5831 | 0.5831 |
| 0.9067 | 3.0 | 2085 | 1.0089 | 0.5954 | 0.5954 | 0.5954 | 0.5954 |
| 0.7858 | 4.0 | 2780 | 0.9204 | 0.6453 | 0.6453 | 0.6453 | 0.6453 |
| 0.6709 | 5.0 | 3475 | 0.9971 | 0.6442 | 0.6442 | 0.6442 | 0.6442 |
| 0.582 | 6.0 | 4170 | 0.9662 | 0.6739 | 0.6739 | 0.6739 | 0.6739 |
| 0.5098 | 7.0 | 4865 | 1.0057 | 0.6855 | 0.6855 | 0.6855 | 0.6855 |
| 0.4498 | 8.0 | 5560 | 1.1139 | 0.6851 | 0.6851 | 0.6851 | 0.6851 |
| 0.4037 | 9.0 | 6255 | 1.1494 | 0.6862 | 0.6862 | 0.6862 | 0.6862 |
| 0.3609 | 10.0 | 6950 | 1.1697 | 0.6996 | 0.6996 | 0.6996 | 0.6996 |
| 0.3328 | 11.0 | 7645 | 1.2636 | 0.6967 | 0.6967 | 0.6967 | 0.6967 |
| 0.3092 | 12.0 | 8340 | 1.2772 | 0.6956 | 0.6956 | 0.6956 | 0.6956 |
| 0.2943 | 13.0 | 9035 | 1.2919 | 0.7047 | 0.7047 | 0.7047 | 0.7047 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
DevanshSinha/testbits2 | DevanshSinha | 2024-03-12T06:59:05Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-03-12T06:58:54Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
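In the absence of documented usage, a generic loading sketch for a PEFT adapter on the declared base model would be the following; it has not been verified against this repo:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", device_map="auto")
model = PeftModel.from_pretrained(base, "DevanshSinha/testbits2")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
```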
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
nocudaexe/Infinite-Waifu-7b-GGUF | nocudaexe | 2024-03-12T06:57:48Z | 39 | 0 | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-10T11:25:46Z | GGUF version of [nocudaexe/Infinite-Waifu](https://huggingface.co/nocudaexe/Infinite-Waifu)
---
license: apache-2.0
---
[Endevor/InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B)
[Test157t/Kunocchini-7b-128k-test](https://huggingface.co/Test157t/Kunocchini-7b-128k-test)
[mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
[KatyTheCutie/SlushySlerp-7B](https://huggingface.co/KatyTheCutie/SlushySlerp-7B)
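For local inference, a minimal llama-cpp-python sketch (the filename below is a placeholder for whichever quantised `.gguf` file you download from this repo):
```python
from llama_cpp import Llama

llm = Llama(model_path="Infinite-Waifu-7b.Q4_K_M.gguf", n_ctx=4096)  # placeholder filename
print(llm("Hello!", max_tokens=128)["choices"][0]["text"])
```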
GGUF versions of https://huggingface.co/nocudaexe/Infinite-Waifu |
anashrivastava/tinyllama-rephraser-lora | anashrivastava | 2024-03-12T06:55:19Z | 8 | 0 | peft | [
"peft",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:unsloth/tinyllama-bnb-4bit",
"base_model:adapter:unsloth/tinyllama-bnb-4bit",
"license:apache-2.0",
"region:us"
] | null | 2024-03-12T06:42:57Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: unsloth/tinyllama-bnb-4bit
model-index:
- name: tinyllama-rephraser-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-rephraser-lora
This model is a fine-tuned version of [unsloth/tinyllama-bnb-4bit](https://huggingface.co/unsloth/tinyllama-bnb-4bit) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 3407
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
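For orientation, these settings map onto transformers `TrainingArguments` roughly as follows (a sketch only; the actual TRL/SFT script, dataset, and LoRA configuration are not documented in this card):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="tinyllama-rephraser-lora",
    learning_rate=2e-05,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=4,  # effective train batch size of 8
    num_train_epochs=5,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    seed=3407,
)
```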
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3576 | 0.01 | 1 | 2.4199 |
| 2.4431 | 0.02 | 2 | 2.4179 |
| 2.5987 | 0.02 | 3 | 2.4198 |
| 2.3902 | 0.03 | 4 | 2.4175 |
| 2.4699 | 0.04 | 5 | 2.4176 |
| 2.408 | 0.05 | 6 | 2.4171 |
| 2.4169 | 0.06 | 7 | 2.4192 |
| 2.4351 | 0.07 | 8 | 2.4169 |
| 2.3709 | 0.07 | 9 | 2.4181 |
| 2.4303 | 0.08 | 10 | 2.4174 |
| 2.3925 | 0.09 | 11 | 2.4162 |
| 2.4358 | 0.1 | 12 | 2.4150 |
| 2.5119 | 0.11 | 13 | 2.4158 |
| 2.4336 | 0.12 | 14 | 2.4139 |
| 2.3145 | 0.12 | 15 | 2.4137 |
| 2.3547 | 0.13 | 16 | 2.4139 |
| 2.4008 | 0.14 | 17 | 2.4124 |
| 2.3856 | 0.15 | 18 | 2.4107 |
| 2.4702 | 0.16 | 19 | 2.4101 |
| 2.4401 | 0.16 | 20 | 2.4101 |
| 2.4957 | 0.17 | 21 | 2.4074 |
| 2.4915 | 0.18 | 22 | 2.4044 |
| 2.3463 | 0.19 | 23 | 2.4051 |
| 2.2404 | 0.2 | 24 | 2.4044 |
| 2.469 | 0.21 | 25 | 2.4023 |
| 2.4707 | 0.21 | 26 | 2.4002 |
| 2.4167 | 0.22 | 27 | 2.4003 |
| 2.3213 | 0.23 | 28 | 2.3969 |
| 2.5036 | 0.24 | 29 | 2.3942 |
| 2.3594 | 0.25 | 30 | 2.3920 |
| 2.3971 | 0.26 | 31 | 2.3910 |
| 2.3863 | 0.26 | 32 | 2.3880 |
| 2.4845 | 0.27 | 33 | 2.3836 |
| 2.3076 | 0.28 | 34 | 2.3844 |
| 2.4448 | 0.29 | 35 | 2.3810 |
| 2.3576 | 0.3 | 36 | 2.3789 |
| 2.3279 | 0.3 | 37 | 2.3744 |
| 2.4572 | 0.31 | 38 | 2.3744 |
| 2.4527 | 0.32 | 39 | 2.3702 |
| 2.5135 | 0.33 | 40 | 2.3645 |
| 2.3704 | 0.34 | 41 | 2.3638 |
| 2.4071 | 0.35 | 42 | 2.3586 |
| 2.3059 | 0.35 | 43 | 2.3567 |
| 2.4828 | 0.36 | 44 | 2.3525 |
| 2.3812 | 0.37 | 45 | 2.3474 |
| 2.3066 | 0.38 | 46 | 2.3432 |
| 2.3644 | 0.39 | 47 | 2.3396 |
| 2.3855 | 0.4 | 48 | 2.3357 |
| 2.3533 | 0.4 | 49 | 2.3299 |
| 2.3486 | 0.41 | 50 | 2.3252 |
| 2.3527 | 0.42 | 51 | 2.3194 |
| 2.3593 | 0.43 | 52 | 2.3150 |
| 2.3743 | 0.44 | 53 | 2.3117 |
| 2.5021 | 0.44 | 54 | 2.3026 |
| 2.3785 | 0.45 | 55 | 2.2974 |
| 2.17 | 0.46 | 56 | 2.2931 |
| 2.2599 | 0.47 | 57 | 2.2851 |
| 2.2211 | 0.48 | 58 | 2.2794 |
| 2.2464 | 0.49 | 59 | 2.2716 |
| 2.2058 | 0.49 | 60 | 2.2622 |
| 2.3164 | 0.5 | 61 | 2.2560 |
| 2.3093 | 0.51 | 62 | 2.2445 |
| 2.2846 | 0.52 | 63 | 2.2353 |
| 2.1138 | 0.53 | 64 | 2.2271 |
| 2.3286 | 0.53 | 65 | 2.2170 |
| 2.1988 | 0.54 | 66 | 2.2077 |
| 2.2598 | 0.55 | 67 | 2.1968 |
| 2.2303 | 0.56 | 68 | 2.1880 |
| 2.2419 | 0.57 | 69 | 2.1790 |
| 2.2628 | 0.58 | 70 | 2.1689 |
| 2.1922 | 0.58 | 71 | 2.1573 |
| 2.2353 | 0.59 | 72 | 2.1498 |
| 2.2111 | 0.6 | 73 | 2.1376 |
| 2.1128 | 0.61 | 74 | 2.1271 |
| 2.2167 | 0.62 | 75 | 2.1184 |
| 2.1878 | 0.63 | 76 | 2.1085 |
| 2.1391 | 0.63 | 77 | 2.0950 |
| 2.1336 | 0.64 | 78 | 2.0818 |
| 2.1266 | 0.65 | 79 | 2.0730 |
| 1.9823 | 0.66 | 80 | 2.0634 |
| 2.1003 | 0.67 | 81 | 2.0490 |
| 2.0681 | 0.67 | 82 | 2.0353 |
| 2.1475 | 0.68 | 83 | 2.0218 |
| 1.996 | 0.69 | 84 | 2.0082 |
| 1.9981 | 0.7 | 85 | 1.9951 |
| 2.0693 | 0.71 | 86 | 1.9823 |
| 1.9524 | 0.72 | 87 | 1.9719 |
| 2.058 | 0.72 | 88 | 1.9587 |
| 1.9211 | 0.73 | 89 | 1.9455 |
| 1.9496 | 0.74 | 90 | 1.9311 |
| 1.9495 | 0.75 | 91 | 1.9200 |
| 2.0249 | 0.76 | 92 | 1.9062 |
| 1.9044 | 0.77 | 93 | 1.8940 |
| 1.9373 | 0.77 | 94 | 1.8817 |
| 1.8604 | 0.78 | 95 | 1.8675 |
| 1.8957 | 0.79 | 96 | 1.8584 |
| 1.8673 | 0.8 | 97 | 1.8471 |
| 1.9002 | 0.81 | 98 | 1.8337 |
| 1.9025 | 0.81 | 99 | 1.8226 |
| 1.8356 | 0.82 | 100 | 1.8099 |
| 1.7723 | 0.83 | 101 | 1.7994 |
| 1.7628 | 0.84 | 102 | 1.7902 |
| 1.7014 | 0.85 | 103 | 1.7788 |
| 1.7383 | 0.86 | 104 | 1.7678 |
| 1.7647 | 0.86 | 105 | 1.7581 |
| 1.7835 | 0.87 | 106 | 1.7466 |
| 1.7645 | 0.88 | 107 | 1.7367 |
| 1.7654 | 0.89 | 108 | 1.7267 |
| 1.8344 | 0.9 | 109 | 1.7173 |
| 1.6528 | 0.91 | 110 | 1.7068 |
| 1.676 | 0.91 | 111 | 1.6989 |
| 1.5894 | 0.92 | 112 | 1.6882 |
| 1.6154 | 0.93 | 113 | 1.6778 |
| 1.609 | 0.94 | 114 | 1.6697 |
| 1.6803 | 0.95 | 115 | 1.6592 |
| 1.6487 | 0.95 | 116 | 1.6484 |
| 1.6905 | 0.96 | 117 | 1.6403 |
| 1.6258 | 0.97 | 118 | 1.6300 |
| 1.5744 | 0.98 | 119 | 1.6189 |
| 1.4791 | 0.99 | 120 | 1.6088 |
| 1.6177 | 1.0 | 121 | 1.5975 |
| 1.582 | 1.0 | 122 | 1.5860 |
| 1.5378 | 1.01 | 123 | 1.5760 |
| 1.5691 | 1.02 | 124 | 1.5658 |
| 1.5387 | 1.03 | 125 | 1.5540 |
| 1.5527 | 1.04 | 126 | 1.5429 |
| 1.4642 | 1.05 | 127 | 1.5316 |
| 1.4867 | 1.05 | 128 | 1.5199 |
| 1.5204 | 1.06 | 129 | 1.5075 |
| 1.5997 | 1.07 | 130 | 1.4962 |
| 1.4419 | 1.08 | 131 | 1.4833 |
| 1.5799 | 1.09 | 132 | 1.4718 |
| 1.5103 | 1.09 | 133 | 1.4584 |
| 1.5444 | 1.1 | 134 | 1.4472 |
| 1.4835 | 1.11 | 135 | 1.4351 |
| 1.4326 | 1.12 | 136 | 1.4213 |
| 1.4079 | 1.13 | 137 | 1.4088 |
| 1.5206 | 1.14 | 138 | 1.3971 |
| 1.3868 | 1.14 | 139 | 1.3822 |
| 1.4778 | 1.15 | 140 | 1.3702 |
| 1.4627 | 1.16 | 141 | 1.3558 |
| 1.3555 | 1.17 | 142 | 1.3444 |
| 1.3143 | 1.18 | 143 | 1.3323 |
| 1.3754 | 1.19 | 144 | 1.3192 |
| 1.2488 | 1.19 | 145 | 1.3082 |
| 1.2821 | 1.2 | 146 | 1.2969 |
| 1.2804 | 1.21 | 147 | 1.2856 |
| 1.233 | 1.22 | 148 | 1.2747 |
| 1.3502 | 1.23 | 149 | 1.2633 |
| 1.2224 | 1.23 | 150 | 1.2536 |
| 1.199 | 1.24 | 151 | 1.2419 |
| 1.1749 | 1.25 | 152 | 1.2321 |
| 1.305 | 1.26 | 153 | 1.2220 |
| 1.1391 | 1.27 | 154 | 1.2100 |
| 1.3063 | 1.28 | 155 | 1.1990 |
| 1.2402 | 1.28 | 156 | 1.1878 |
| 1.1104 | 1.29 | 157 | 1.1772 |
| 1.24 | 1.3 | 158 | 1.1670 |
| 1.0549 | 1.31 | 159 | 1.1555 |
| 1.1417 | 1.32 | 160 | 1.1452 |
| 1.0898 | 1.33 | 161 | 1.1351 |
| 1.1035 | 1.33 | 162 | 1.1259 |
| 1.1088 | 1.34 | 163 | 1.1158 |
| 1.086 | 1.35 | 164 | 1.1069 |
| 1.15 | 1.36 | 165 | 1.0975 |
| 1.0394 | 1.37 | 166 | 1.0888 |
| 1.1268 | 1.37 | 167 | 1.0806 |
| 1.0803 | 1.38 | 168 | 1.0710 |
| 1.0198 | 1.39 | 169 | 1.0624 |
| 1.0765 | 1.4 | 170 | 1.0534 |
| 1.0318 | 1.41 | 171 | 1.0447 |
| 1.0098 | 1.42 | 172 | 1.0369 |
| 1.0013 | 1.42 | 173 | 1.0284 |
| 0.9773 | 1.43 | 174 | 1.0210 |
| 1.0233 | 1.44 | 175 | 1.0130 |
| 0.985 | 1.45 | 176 | 1.0053 |
| 0.9806 | 1.46 | 177 | 0.9983 |
| 1.0393 | 1.47 | 178 | 0.9906 |
| 0.9191 | 1.47 | 179 | 0.9844 |
| 0.9454 | 1.48 | 180 | 0.9781 |
| 0.9354 | 1.49 | 181 | 0.9710 |
| 0.9598 | 1.5 | 182 | 0.9658 |
| 1.0652 | 1.51 | 183 | 0.9584 |
| 0.9002 | 1.51 | 184 | 0.9538 |
| 0.9477 | 1.52 | 185 | 0.9472 |
| 0.9203 | 1.53 | 186 | 0.9414 |
| 0.8837 | 1.54 | 187 | 0.9361 |
| 0.91 | 1.55 | 188 | 0.9313 |
| 0.8616 | 1.56 | 189 | 0.9258 |
| 0.9201 | 1.56 | 190 | 0.9205 |
| 0.9408 | 1.57 | 191 | 0.9147 |
| 0.9274 | 1.58 | 192 | 0.9093 |
| 1.0009 | 1.59 | 193 | 0.9064 |
| 0.9202 | 1.6 | 194 | 0.9009 |
| 0.9886 | 1.6 | 195 | 0.8959 |
| 0.9289 | 1.61 | 196 | 0.8913 |
| 0.9603 | 1.62 | 197 | 0.8875 |
| 0.9138 | 1.63 | 198 | 0.8837 |
| 0.8794 | 1.64 | 199 | 0.8787 |
| 0.8315 | 1.65 | 200 | 0.8750 |
| 0.8745 | 1.65 | 201 | 0.8705 |
| 1.013 | 1.66 | 202 | 0.8673 |
| 0.8565 | 1.67 | 203 | 0.8634 |
| 0.9121 | 1.68 | 204 | 0.8596 |
| 0.7825 | 1.69 | 205 | 0.8558 |
| 0.9171 | 1.7 | 206 | 0.8524 |
| 0.7595 | 1.7 | 207 | 0.8488 |
| 0.8611 | 1.71 | 208 | 0.8453 |
| 0.7212 | 1.72 | 209 | 0.8421 |
| 0.8745 | 1.73 | 210 | 0.8389 |
| 0.93 | 1.74 | 211 | 0.8354 |
| 0.9183 | 1.74 | 212 | 0.8321 |
| 0.8482 | 1.75 | 213 | 0.8293 |
| 0.8155 | 1.76 | 214 | 0.8256 |
| 0.9113 | 1.77 | 215 | 0.8224 |
| 0.8009 | 1.78 | 216 | 0.8190 |
| 0.6555 | 1.79 | 217 | 0.8165 |
| 0.7727 | 1.79 | 218 | 0.8133 |
| 0.7987 | 1.8 | 219 | 0.8105 |
| 0.7794 | 1.81 | 220 | 0.8074 |
| 0.8248 | 1.82 | 221 | 0.8043 |
| 0.7818 | 1.83 | 222 | 0.8020 |
| 0.741 | 1.84 | 223 | 0.7995 |
| 0.6907 | 1.84 | 224 | 0.7969 |
| 0.789 | 1.85 | 225 | 0.7938 |
| 0.7101 | 1.86 | 226 | 0.7910 |
| 0.7178 | 1.87 | 227 | 0.7887 |
| 0.7109 | 1.88 | 228 | 0.7865 |
| 0.6699 | 1.88 | 229 | 0.7838 |
| 0.8443 | 1.89 | 230 | 0.7814 |
| 0.7397 | 1.9 | 231 | 0.7789 |
| 0.7888 | 1.91 | 232 | 0.7760 |
| 0.7725 | 1.92 | 233 | 0.7735 |
| 0.7797 | 1.93 | 234 | 0.7707 |
| 0.7988 | 1.93 | 235 | 0.7678 |
| 0.7548 | 1.94 | 236 | 0.7660 |
| 0.904 | 1.95 | 237 | 0.7631 |
| 0.8183 | 1.96 | 238 | 0.7616 |
| 0.8292 | 1.97 | 239 | 0.7582 |
| 0.7144 | 1.98 | 240 | 0.7561 |
| 0.753 | 1.98 | 241 | 0.7538 |
| 0.7629 | 1.99 | 242 | 0.7525 |
| 0.8713 | 2.0 | 243 | 0.7497 |
| 0.7355 | 2.01 | 244 | 0.7477 |
| 0.6998 | 2.02 | 245 | 0.7459 |
| 0.7567 | 2.02 | 246 | 0.7438 |
| 0.6594 | 2.03 | 247 | 0.7420 |
| 0.7124 | 2.04 | 248 | 0.7405 |
| 0.9188 | 2.05 | 249 | 0.7380 |
| 0.7406 | 2.06 | 250 | 0.7364 |
| 0.7091 | 2.07 | 251 | 0.7341 |
| 0.8144 | 2.07 | 252 | 0.7319 |
| 0.7122 | 2.08 | 253 | 0.7307 |
| 0.7504 | 2.09 | 254 | 0.7291 |
| 0.7409 | 2.1 | 255 | 0.7276 |
| 0.7844 | 2.11 | 256 | 0.7258 |
| 0.8328 | 2.12 | 257 | 0.7234 |
| 0.7149 | 2.12 | 258 | 0.7221 |
| 0.7063 | 2.13 | 259 | 0.7205 |
| 0.6629 | 2.14 | 260 | 0.7195 |
| 0.5896 | 2.15 | 261 | 0.7177 |
| 0.734 | 2.16 | 262 | 0.7165 |
| 0.7293 | 2.16 | 263 | 0.7157 |
| 0.6819 | 2.17 | 264 | 0.7142 |
| 0.6928 | 2.18 | 265 | 0.7133 |
| 0.6026 | 2.19 | 266 | 0.7119 |
| 0.6704 | 2.2 | 267 | 0.7114 |
| 0.7118 | 2.21 | 268 | 0.7099 |
| 0.8447 | 2.21 | 269 | 0.7084 |
| 0.6857 | 2.22 | 270 | 0.7075 |
| 0.7257 | 2.23 | 271 | 0.7066 |
| 0.6884 | 2.24 | 272 | 0.7058 |
| 0.5883 | 2.25 | 273 | 0.7047 |
| 0.6798 | 2.26 | 274 | 0.7036 |
| 0.6575 | 2.26 | 275 | 0.7024 |
| 0.627 | 2.27 | 276 | 0.7017 |
| 0.7029 | 2.28 | 277 | 0.7016 |
| 0.7248 | 2.29 | 278 | 0.7009 |
| 0.6947 | 2.3 | 279 | 0.6996 |
| 0.708 | 2.3 | 280 | 0.6991 |
| 0.5384 | 2.31 | 281 | 0.6981 |
| 0.5539 | 2.32 | 282 | 0.6975 |
| 0.6751 | 2.33 | 283 | 0.6962 |
| 0.5809 | 2.34 | 284 | 0.6957 |
| 0.7105 | 2.35 | 285 | 0.6952 |
| 0.735 | 2.35 | 286 | 0.6945 |
| 0.7564 | 2.36 | 287 | 0.6936 |
| 0.732 | 2.37 | 288 | 0.6925 |
| 0.6892 | 2.38 | 289 | 0.6919 |
| 0.6454 | 2.39 | 290 | 0.6910 |
| 0.6919 | 2.4 | 291 | 0.6901 |
| 0.6842 | 2.4 | 292 | 0.6893 |
| 0.6044 | 2.41 | 293 | 0.6889 |
| 0.5893 | 2.42 | 294 | 0.6885 |
| 0.7235 | 2.43 | 295 | 0.6875 |
| 0.7216 | 2.44 | 296 | 0.6873 |
| 0.7677 | 2.44 | 297 | 0.6865 |
| 0.5953 | 2.45 | 298 | 0.6862 |
| 0.8029 | 2.46 | 299 | 0.6853 |
| 0.6425 | 2.47 | 300 | 0.6846 |
| 0.5764 | 2.48 | 301 | 0.6846 |
| 0.7721 | 2.49 | 302 | 0.6831 |
| 0.7315 | 2.49 | 303 | 0.6831 |
| 0.6483 | 2.5 | 304 | 0.6829 |
| 0.8087 | 2.51 | 305 | 0.6825 |
| 0.6676 | 2.52 | 306 | 0.6816 |
| 0.6153 | 2.53 | 307 | 0.6813 |
| 0.6388 | 2.53 | 308 | 0.6812 |
| 0.6322 | 2.54 | 309 | 0.6803 |
| 0.5539 | 2.55 | 310 | 0.6803 |
| 0.6124 | 2.56 | 311 | 0.6796 |
| 0.6905 | 2.57 | 312 | 0.6791 |
| 0.6522 | 2.58 | 313 | 0.6782 |
| 0.5722 | 2.58 | 314 | 0.6784 |
| 0.6271 | 2.59 | 315 | 0.6776 |
| 0.6927 | 2.6 | 316 | 0.6783 |
| 0.733 | 2.61 | 317 | 0.6768 |
| 0.6622 | 2.62 | 318 | 0.6765 |
| 0.7042 | 2.63 | 319 | 0.6765 |
| 0.8197 | 2.63 | 320 | 0.6763 |
| 0.8398 | 2.64 | 321 | 0.6758 |
| 0.6703 | 2.65 | 322 | 0.6756 |
| 0.6722 | 2.66 | 323 | 0.6750 |
| 0.7457 | 2.67 | 324 | 0.6748 |
| 0.6385 | 2.67 | 325 | 0.6746 |
| 0.557 | 2.68 | 326 | 0.6743 |
| 0.6835 | 2.69 | 327 | 0.6739 |
| 0.6078 | 2.7 | 328 | 0.6735 |
| 0.8021 | 2.71 | 329 | 0.6733 |
| 0.5652 | 2.72 | 330 | 0.6732 |
| 0.7898 | 2.72 | 331 | 0.6723 |
| 0.5717 | 2.73 | 332 | 0.6720 |
| 0.6912 | 2.74 | 333 | 0.6718 |
| 0.641 | 2.75 | 334 | 0.6717 |
| 0.6551 | 2.76 | 335 | 0.6714 |
| 0.7743 | 2.77 | 336 | 0.6706 |
| 0.631 | 2.77 | 337 | 0.6710 |
| 0.6843 | 2.78 | 338 | 0.6703 |
| 0.6913 | 2.79 | 339 | 0.6701 |
| 0.6482 | 2.8 | 340 | 0.6697 |
| 0.6251 | 2.81 | 341 | 0.6696 |
| 0.6712 | 2.81 | 342 | 0.6694 |
| 0.6543 | 2.82 | 343 | 0.6693 |
| 0.7393 | 2.83 | 344 | 0.6687 |
| 0.7283 | 2.84 | 345 | 0.6686 |
| 0.673 | 2.85 | 346 | 0.6686 |
| 0.6263 | 2.86 | 347 | 0.6680 |
| 0.6574 | 2.86 | 348 | 0.6678 |
| 0.7178 | 2.87 | 349 | 0.6677 |
| 0.6941 | 2.88 | 350 | 0.6673 |
| 0.5781 | 2.89 | 351 | 0.6675 |
| 0.6024 | 2.9 | 352 | 0.6671 |
| 0.6324 | 2.91 | 353 | 0.6667 |
| 0.7445 | 2.91 | 354 | 0.6663 |
| 0.5899 | 2.92 | 355 | 0.6664 |
| 0.7318 | 2.93 | 356 | 0.6659 |
| 0.7341 | 2.94 | 357 | 0.6656 |
| 0.7439 | 2.95 | 358 | 0.6656 |
| 0.7061 | 2.95 | 359 | 0.6652 |
| 0.7121 | 2.96 | 360 | 0.6649 |
| 0.6754 | 2.97 | 361 | 0.6649 |
| 0.7367 | 2.98 | 362 | 0.6646 |
| 0.7033 | 2.99 | 363 | 0.6646 |
| 0.6652 | 3.0 | 364 | 0.6640 |
| 0.707 | 3.0 | 365 | 0.6639 |
| 0.5992 | 3.01 | 366 | 0.6636 |
| 0.6483 | 3.02 | 367 | 0.6633 |
| 0.8483 | 3.03 | 368 | 0.6623 |
| 0.7052 | 3.04 | 369 | 0.6628 |
| 0.7748 | 3.05 | 370 | 0.6624 |
| 0.7242 | 3.05 | 371 | 0.6621 |
| 0.7835 | 3.06 | 372 | 0.6621 |
| 0.6273 | 3.07 | 373 | 0.6621 |
| 0.6937 | 3.08 | 374 | 0.6617 |
| 0.7308 | 3.09 | 375 | 0.6615 |
| 0.6431 | 3.09 | 376 | 0.6613 |
| 0.6486 | 3.1 | 377 | 0.6612 |
| 0.6671 | 3.11 | 378 | 0.6613 |
| 0.6046 | 3.12 | 379 | 0.6605 |
| 0.5741 | 3.13 | 380 | 0.6605 |
| 0.6746 | 3.14 | 381 | 0.6606 |
| 0.6525 | 3.14 | 382 | 0.6604 |
| 0.6483 | 3.15 | 383 | 0.6602 |
| 0.6631 | 3.16 | 384 | 0.6602 |
| 0.5769 | 3.17 | 385 | 0.6603 |
| 0.6648 | 3.18 | 386 | 0.6596 |
| 0.6933 | 3.19 | 387 | 0.6592 |
| 0.6597 | 3.19 | 388 | 0.6596 |
| 0.5871 | 3.2 | 389 | 0.6596 |
| 0.5976 | 3.21 | 390 | 0.6593 |
| 0.6025 | 3.22 | 391 | 0.6591 |
| 0.7157 | 3.23 | 392 | 0.6588 |
| 0.6419 | 3.23 | 393 | 0.6587 |
| 0.5579 | 3.24 | 394 | 0.6589 |
| 0.7142 | 3.25 | 395 | 0.6588 |
| 0.5773 | 3.26 | 396 | 0.6581 |
| 0.5624 | 3.27 | 397 | 0.6583 |
| 0.6029 | 3.28 | 398 | 0.6579 |
| 0.6642 | 3.28 | 399 | 0.6582 |
| 0.7 | 3.29 | 400 | 0.6579 |
| 0.7918 | 3.3 | 401 | 0.6579 |
| 0.563 | 3.31 | 402 | 0.6577 |
| 0.7208 | 3.32 | 403 | 0.6575 |
| 0.6769 | 3.33 | 404 | 0.6570 |
| 0.7093 | 3.33 | 405 | 0.6571 |
| 0.5287 | 3.34 | 406 | 0.6570 |
| 0.5828 | 3.35 | 407 | 0.6572 |
| 0.5703 | 3.36 | 408 | 0.6566 |
| 0.6647 | 3.37 | 409 | 0.6566 |
| 0.6879 | 3.37 | 410 | 0.6568 |
| 0.7325 | 3.38 | 411 | 0.6566 |
| 0.6021 | 3.39 | 412 | 0.6565 |
| 0.6777 | 3.4 | 413 | 0.6565 |
| 0.6057 | 3.41 | 414 | 0.6560 |
| 0.5996 | 3.42 | 415 | 0.6558 |
| 0.6841 | 3.42 | 416 | 0.6556 |
| 0.6096 | 3.43 | 417 | 0.6557 |
| 0.6245 | 3.44 | 418 | 0.6559 |
| 0.664 | 3.45 | 419 | 0.6556 |
| 0.7183 | 3.46 | 420 | 0.6561 |
| 0.6449 | 3.47 | 421 | 0.6558 |
| 0.6497 | 3.47 | 422 | 0.6557 |
| 0.8151 | 3.48 | 423 | 0.6554 |
| 0.813 | 3.49 | 424 | 0.6552 |
| 0.6278 | 3.5 | 425 | 0.6553 |
| 0.6376 | 3.51 | 426 | 0.6556 |
| 0.697 | 3.51 | 427 | 0.6554 |
| 0.628 | 3.52 | 428 | 0.6550 |
| 0.7049 | 3.53 | 429 | 0.6553 |
| 0.6641 | 3.54 | 430 | 0.6549 |
| 0.6465 | 3.55 | 431 | 0.6552 |
| 0.7366 | 3.56 | 432 | 0.6550 |
| 0.6325 | 3.56 | 433 | 0.6545 |
| 0.5621 | 3.57 | 434 | 0.6550 |
| 0.5846 | 3.58 | 435 | 0.6553 |
| 0.6516 | 3.59 | 436 | 0.6551 |
| 0.7258 | 3.6 | 437 | 0.6546 |
| 0.6027 | 3.6 | 438 | 0.6547 |
| 0.5344 | 3.61 | 439 | 0.6549 |
| 0.6988 | 3.62 | 440 | 0.6546 |
| 0.6863 | 3.63 | 441 | 0.6548 |
| 0.627 | 3.64 | 442 | 0.6544 |
| 0.6353 | 3.65 | 443 | 0.6548 |
| 0.5361 | 3.65 | 444 | 0.6541 |
| 0.6774 | 3.66 | 445 | 0.6548 |
| 0.668 | 3.67 | 446 | 0.6546 |
| 0.544 | 3.68 | 447 | 0.6545 |
| 0.5683 | 3.69 | 448 | 0.6546 |
| 0.6955 | 3.7 | 449 | 0.6543 |
| 0.6316 | 3.7 | 450 | 0.6543 |
| 0.647 | 3.71 | 451 | 0.6544 |
| 0.6797 | 3.72 | 452 | 0.6541 |
| 0.6566 | 3.73 | 453 | 0.6541 |
| 0.6585 | 3.74 | 454 | 0.6544 |
| 0.6632 | 3.74 | 455 | 0.6541 |
| 0.5798 | 3.75 | 456 | 0.6540 |
| 0.6417 | 3.76 | 457 | 0.6540 |
| 0.706 | 3.77 | 458 | 0.6538 |
| 0.6709 | 3.78 | 459 | 0.6542 |
| 0.7047 | 3.79 | 460 | 0.6536 |
| 0.5466 | 3.79 | 461 | 0.6538 |
| 0.5479 | 3.8 | 462 | 0.6540 |
| 0.6476 | 3.81 | 463 | 0.6535 |
| 0.6584 | 3.82 | 464 | 0.6534 |
| 0.6515 | 3.83 | 465 | 0.6540 |
| 0.5812 | 3.84 | 466 | 0.6535 |
| 0.6339 | 3.84 | 467 | 0.6537 |
| 0.6521 | 3.85 | 468 | 0.6537 |
| 0.6451 | 3.86 | 469 | 0.6538 |
| 0.6655 | 3.87 | 470 | 0.6532 |
| 0.7017 | 3.88 | 471 | 0.6533 |
| 0.5794 | 3.88 | 472 | 0.6530 |
| 0.6485 | 3.89 | 473 | 0.6536 |
| 0.6723 | 3.9 | 474 | 0.6533 |
| 0.72 | 3.91 | 475 | 0.6534 |
| 0.6114 | 3.92 | 476 | 0.6535 |
| 0.596 | 3.93 | 477 | 0.6536 |
| 0.5961 | 3.93 | 478 | 0.6538 |
| 0.6629 | 3.94 | 479 | 0.6531 |
| 0.6682 | 3.95 | 480 | 0.6534 |
| 0.7007 | 3.96 | 481 | 0.6534 |
| 0.6594 | 3.97 | 482 | 0.6535 |
| 0.7607 | 3.98 | 483 | 0.6531 |
| 0.5735 | 3.98 | 484 | 0.6532 |
| 0.7111 | 3.99 | 485 | 0.6531 |
| 0.6498 | 4.0 | 486 | 0.6533 |
| 0.624 | 4.01 | 487 | 0.6529 |
| 0.7284 | 4.02 | 488 | 0.6535 |
| 0.5665 | 4.02 | 489 | 0.6531 |
| 0.6473 | 4.03 | 490 | 0.6534 |
| 0.614 | 4.04 | 491 | 0.6534 |
| 0.6663 | 4.05 | 492 | 0.6528 |
| 0.6309 | 4.06 | 493 | 0.6527 |
| 0.6926 | 4.07 | 494 | 0.6530 |
| 0.6112 | 4.07 | 495 | 0.6531 |
| 0.6879 | 4.08 | 496 | 0.6526 |
| 0.6939 | 4.09 | 497 | 0.6529 |
| 0.7551 | 4.1 | 498 | 0.6530 |
| 0.6085 | 4.11 | 499 | 0.6530 |
| 0.6741 | 4.12 | 500 | 0.6533 |
| 0.5913 | 4.12 | 501 | 0.6529 |
| 0.6337 | 4.13 | 502 | 0.6529 |
| 0.6061 | 4.14 | 503 | 0.6527 |
| 0.6511 | 4.15 | 504 | 0.6529 |
| 0.6358 | 4.16 | 505 | 0.6531 |
| 0.6537 | 4.16 | 506 | 0.6527 |
| 0.5757 | 4.17 | 507 | 0.6532 |
| 0.6143 | 4.18 | 508 | 0.6529 |
| 0.5723 | 4.19 | 509 | 0.6530 |
| 0.5647 | 4.2 | 510 | 0.6528 |
| 0.5878 | 4.21 | 511 | 0.6531 |
| 0.6119 | 4.21 | 512 | 0.6527 |
| 0.743 | 4.22 | 513 | 0.6530 |
| 0.6942 | 4.23 | 514 | 0.6528 |
| 0.5967 | 4.24 | 515 | 0.6527 |
| 0.6869 | 4.25 | 516 | 0.6530 |
| 0.62 | 4.26 | 517 | 0.6529 |
| 0.7596 | 4.26 | 518 | 0.6530 |
| 0.6483 | 4.27 | 519 | 0.6530 |
| 0.6449 | 4.28 | 520 | 0.6527 |
| 0.56 | 4.29 | 521 | 0.6529 |
| 0.673 | 4.3 | 522 | 0.6527 |
| 0.5469 | 4.3 | 523 | 0.6527 |
| 0.6084 | 4.31 | 524 | 0.6528 |
| 0.5118 | 4.32 | 525 | 0.6527 |
| 0.7318 | 4.33 | 526 | 0.6529 |
| 0.7787 | 4.34 | 527 | 0.6525 |
| 0.7177 | 4.35 | 528 | 0.6532 |
| 0.6294 | 4.35 | 529 | 0.6531 |
| 0.6758 | 4.36 | 530 | 0.6527 |
| 0.6679 | 4.37 | 531 | 0.6526 |
| 0.5373 | 4.38 | 532 | 0.6525 |
| 0.6655 | 4.39 | 533 | 0.6529 |
| 0.6738 | 4.4 | 534 | 0.6527 |
| 0.6849 | 4.4 | 535 | 0.6528 |
| 0.5894 | 4.41 | 536 | 0.6530 |
| 0.7516 | 4.42 | 537 | 0.6533 |
| 0.7417 | 4.43 | 538 | 0.6530 |
| 0.6239 | 4.44 | 539 | 0.6529 |
| 0.6543 | 4.44 | 540 | 0.6528 |
| 0.6201 | 4.45 | 541 | 0.6529 |
| 0.6552 | 4.46 | 542 | 0.6528 |
| 0.5647 | 4.47 | 543 | 0.6529 |
| 0.6798 | 4.48 | 544 | 0.6530 |
| 0.6152 | 4.49 | 545 | 0.6528 |
| 0.7099 | 4.49 | 546 | 0.6531 |
| 0.7073 | 4.5 | 547 | 0.6528 |
| 0.76 | 4.51 | 548 | 0.6531 |
| 0.7266 | 4.52 | 549 | 0.6526 |
| 0.7659 | 4.53 | 550 | 0.6527 |
| 0.7033 | 4.53 | 551 | 0.6532 |
| 0.6679 | 4.54 | 552 | 0.6534 |
| 0.5671 | 4.55 | 553 | 0.6533 |
| 0.6845 | 4.56 | 554 | 0.6527 |
| 0.655 | 4.57 | 555 | 0.6524 |
| 0.7154 | 4.58 | 556 | 0.6526 |
| 0.7778 | 4.58 | 557 | 0.6524 |
| 0.6404 | 4.59 | 558 | 0.6524 |
| 0.6133 | 4.6 | 559 | 0.6526 |
| 0.6241 | 4.61 | 560 | 0.6525 |
| 0.6255 | 4.62 | 561 | 0.6527 |
| 0.5877 | 4.63 | 562 | 0.6526 |
| 0.7624 | 4.63 | 563 | 0.6526 |
| 0.613 | 4.64 | 564 | 0.6522 |
| 0.6014 | 4.65 | 565 | 0.6524 |
| 0.6217 | 4.66 | 566 | 0.6525 |
| 0.5651 | 4.67 | 567 | 0.6525 |
| 0.7227 | 4.67 | 568 | 0.6526 |
| 0.6247 | 4.68 | 569 | 0.6525 |
| 0.6886 | 4.69 | 570 | 0.6524 |
| 0.6894 | 4.7 | 571 | 0.6524 |
| 0.6543 | 4.71 | 572 | 0.6525 |
| 0.5932 | 4.72 | 573 | 0.6522 |
| 0.6069 | 4.72 | 574 | 0.6523 |
| 0.614 | 4.73 | 575 | 0.6525 |
| 0.5748 | 4.74 | 576 | 0.6526 |
| 0.5907 | 4.75 | 577 | 0.6523 |
| 0.6707 | 4.76 | 578 | 0.6526 |
| 0.642 | 4.77 | 579 | 0.6525 |
| 0.6228 | 4.77 | 580 | 0.6522 |
| 0.6178 | 4.78 | 581 | 0.6525 |
| 0.5958 | 4.79 | 582 | 0.6528 |
| 0.6532 | 4.8 | 583 | 0.6527 |
| 0.5752 | 4.81 | 584 | 0.6526 |
| 0.7058 | 4.81 | 585 | 0.6525 |
| 0.642 | 4.82 | 586 | 0.6526 |
| 0.6599 | 4.83 | 587 | 0.6525 |
| 0.7673 | 4.84 | 588 | 0.6526 |
| 0.6626 | 4.85 | 589 | 0.6525 |
| 0.5326 | 4.86 | 590 | 0.6525 |
| 0.6512 | 4.86 | 591 | 0.6524 |
| 0.5914 | 4.87 | 592 | 0.6524 |
| 0.6415 | 4.88 | 593 | 0.6523 |
| 0.7693 | 4.89 | 594 | 0.6523 |
| 0.6389 | 4.9 | 595 | 0.6524 |
| 0.6151 | 4.91 | 596 | 0.6524 |
| 0.6561 | 4.91 | 597 | 0.6524 |
| 0.6443 | 4.92 | 598 | 0.6524 |
| 0.6596 | 4.93 | 599 | 0.6524 |
| 0.6413 | 4.94 | 600 | 0.6524 |
| 0.6235 | 4.95 | 601 | 0.6524 |
| 0.598 | 4.95 | 602 | 0.6524 |
| 0.7232 | 4.96 | 603 | 0.6524 |
| 0.6172 | 4.97 | 604 | 0.6524 |
| 0.7063 | 4.98 | 605 | 0.6524 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
DevanshSinha/testbits1 | DevanshSinha | 2024-03-12T06:53:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"region:us"
] | null | 2024-03-12T06:53:15Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
jeonsiyun/layoutlmv3-v33-epoch20 | jeonsiyun | 2024-03-12T06:38:26Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"layoutlmv3",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-12T06:37:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
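A minimal, hedged sketch of document classification with this checkpoint is shown below. The processor is taken from the base `microsoft/layoutlmv3-base` checkpoint and OCR via Tesseract is assumed; these details are assumptions, not documented by the model author.

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForSequenceClassification

# Processor from the base checkpoint (assumption); it runs OCR and needs pytesseract/Tesseract installed.
processor = AutoProcessor.from_pretrained("microsoft/layoutlmv3-base")
model = AutoModelForSequenceClassification.from_pretrained("jeonsiyun/layoutlmv3-v33-epoch20")

image = Image.open("document.png").convert("RGB")  # any document image (hypothetical path)
encoding = processor(image, return_tensors="pt")   # words and boxes come from the built-in OCR
outputs = model(**encoding)
predicted_class = outputs.logits.argmax(-1).item()
print(model.config.id2label.get(predicted_class, predicted_class))
```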
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Vasanth/mistral-sql | Vasanth | 2024-03-12T06:38:23Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:teknium/OpenHermes-2.5-Mistral-7B",
"base_model:adapter:teknium/OpenHermes-2.5-Mistral-7B",
"license:apache-2.0",
"region:us"
] | null | 2024-03-12T05:20:05Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: teknium/OpenHermes-2.5-Mistral-7B
model-index:
- name: mistral-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral-sql
This model is a fine-tuned version of [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B) on the generator dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
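As a hedged sketch (not an official snippet from the author), the adapter can be loaded on top of its base model with PEFT for SQL-oriented generation:

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Loads teknium/OpenHermes-2.5-Mistral-7B and applies this adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    "Vasanth/mistral-sql", torch_dtype=torch.float16, device_map="auto"
)
# If the adapter repo ships no tokenizer, load it from the base model instead.
tokenizer = AutoTokenizer.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")

prompt = "Write a SQL query that returns the ten most recent orders."  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```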
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
syedzaidi-kiwi/Llama-2-7b-chat-finetune | syedzaidi-kiwi | 2024-03-12T06:37:40Z | 8 | 1 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"llm",
"fine-tuned",
"Llama 2 7b",
"KiwiTech LLC",
"question-answering",
"en",
"dataset:mlabonne/guanaco-llama2-1k",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2024-03-11T15:19:17Z | ---
license: apache-2.0
language:
- en
datasets:
- mlabonne/guanaco-llama2-1k
pipeline_tag: question-answering
tags:
- llm
- fine-tuned
- Llama 2 7b
- KiwiTech LLC
---
# Model Card for syedzaidi-kiwi/Llama-2-7b-chat-finetune
This model is a fine-tuned version of Meta's Llama 2 7B variant for enhanced chat functionalities.
## Model Details
### Model Description
- **Developed by:** Syed Asad
- **Model type:** Fine-tuned Llama 2 7B variant
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** NousResearch/Llama-2-7b-chat-hf
### Model Sources
- **Repository:** [syedzaidi-kiwi/Llama-2-7b-chat-finetune](https://huggingface.co/syedzaidi-kiwi/Llama-2-7b-chat-finetune)
- **Paper:** [Llama 2: Open Foundation and Fine-Tuned Chat Models](https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/)
## Uses
### Direct Use
The model is intended for direct use in applications requiring conversational responses, such as chatbots or virtual assistants.
### Out-of-Scope Use
The model is not designed for tasks outside of conversational AI, such as document summarization or translation.
## Bias, Risks, and Limitations
Users should be aware of potential biases in the training data and limitations in the model's understanding of nuanced human language. Further evaluation is recommended for specific use cases.
## How to Get Started with the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("syedzaidi-kiwi/Llama-2-7b-chat-finetune")
model = AutoModelForCausalLM.from_pretrained("syedzaidi-kiwi/Llama-2-7b-chat-finetune")
inputs = tokenizer("Hello, how are you?", return_tensors="pt")
response = model.generate(**inputs)
print(tokenizer.decode(response[0], skip_special_tokens=True))
```
## Training Details
### Training Data
The model was fine-tuned using the dataset mlabonne/guanaco-llama2-1k.
Link: https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k
### Training Procedure
#### Training Hyperparameters
- **Training regime:**
The model was fine-tuned with mixed-precision training to balance training speed and model quality.
The exact precision format (e.g., fp32, fp16, bf16) depends on the available compute, but mixed-precision (fp16) training was emphasized to accelerate training on compatible hardware, reducing computation time and memory usage without a significant loss in training quality.
Users are encouraged to adjust the precision settings to their hardware to optimize performance further.
#### Speeds, Sizes, Times
To be tested by the KiwiTech Team
## Evaluation
### Testing Data, Factors & Metrics
#### Testing Data
The model's performance was evaluated on a held-out test set from the mlabonne/guanaco-llama2-1k dataset.
This dataset comprises diverse conversational contexts to assess the model's generalization and robustness across various topics ([mlabonne/guanaco-llama2-1k](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k)).
#### Factors
Evaluation focused on several key factors to ensure the model's versatility and reliability in conversational AI applications:
- **Context understanding:** The model's ability to maintain context and coherence over long conversations.
- **Diversity of responses:** The variety in the model's responses to similar prompts, indicating its creative and dynamic conversational capabilities.
- **Safety and bias:** Monitoring for any unintended biases in responses or generation of inappropriate content.
#### Metrics
To comprehensively assess the model's performance, the following metrics were utilized:
- **Perplexity (PPL):** Lower perplexity scores indicate better understanding and generation of the text.
- **BLEU score:** Measures the similarity between the model's generated responses and a set of reference responses, indicating the model's accuracy in reproducing human-like answers.
- **F1 score:** Evaluates the balance between precision and recall in the model's responses, useful for assessing conversational relevance.
- **Safety and bias evaluation:** Custom metrics were developed to quantify the model's performance in generating safe, unbiased content.
### Results
To be evaluated; results will be updated in this section.
#### Summary
The fine-tuned model demonstrates significant improvements in generating coherent, diverse, and contextually appropriate responses across various conversational settings.
It represents a step forward in developing conversational AI systems that are both efficient and effective.
Continuous evaluation and monitoring are advised to further enhance and maintain the model's performance standards.
## Technical Specifications
### Model Architecture and Objective
Transformers
### Compute Infrastructure
T4 GPU
#### Hardware
Fine Tuned on Apple M3 Pro (Silicon Chip)
#### Software
Google Colab Notebook Used
## Citation
- **Title:** Llama 2: Open Foundation and Fine-Tuned Chat Models
- **Authors:** Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, Dan Bikel, Lukas Blecher, Cristian Canton Ferrer, Moya Chen, Guillem Cucurull, David Esiobu, Jude Fernandes, Jeremy Fu, Wenyin Fu, Brian Fuller, Cynthia Gao, Vedanuj Goswami, Naman Goyal, Anthony Hartshorn, Saghar Hosseini, Rui Hou, Hakan Inan, Marcin Kardas, Viktor Kerkez, Madian Khabsa, Isabel Kloumann, Artem Korenev, Punit Singh Koura, Marie-Anne Lachaux, Thibaut Lavril, Jenya Lee, Diana Liskovich, Yinghai Lu, Yuning Mao, Xavier Martinet, Todor Mihaylov, Pushkar Mishra, Igor Molybog, Yixin Nie, Andrew Poulton, Jeremy Reizenstein, Rashi Rungta, Kalyan Saladi, Alan Schelten, Ruan Silva, Eric Michael Smith, Ranjan Subramanian, Xiaoqing Ellen Tan, Binh Tang, Ross Taylor, Adina Williams, Jian Xiang Kuan, Puxin Xu, Zheng Yan, Iliyan Zarov, Yuchen Zhang, Angela Fan, Melanie Kambadur, Sharan Narang, Aurelien Rodriguez, Robert Stojnic, Sergey Edunov, Thomas Scialom
- **Publisher:** GenAI, Meta, 2023
- **Paper:** https://ai.meta.com/research/publications/llama-2-open-foundation-and-fine-tuned-chat-models/
## Model Card Authors
Syed Asad
## Model Card Contact
Syed Asad ([email protected]) |
omroali/ppo-Huggy | omroali | 2024-03-12T06:36:24Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-03-12T06:36:17Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: omroali/ppo-Huggy
3. Select your *.nn or *.onnx file
4. Click on "Watch the agent play" 👀
|
ivillar/lp-music-caps | ivillar | 2024-03-12T06:33:03Z | 0 | 0 | null | [
"music",
"music-captioning",
"en",
"dataset:seungheondoh/LP-MusicCaps-MSD",
"dataset:seungheondoh/LP-MusicCaps-MC",
"arxiv:2307.16372",
"license:mit",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T20:25:08Z | ---
license: mit
datasets:
- seungheondoh/LP-MusicCaps-MSD
- seungheondoh/LP-MusicCaps-MC
language:
- en
metrics:
- bleu
- bertscore
tags:
- music
- music-captioning
---
- **Repository:** [LP-MusicCaps repository](https://github.com/seungheondoh/lp-music-caps)
- **Paper:** [ArXiv](https://arxiv.org/abs/2307.16372)
# :sound: LP-MusicCaps: LLM-Based Pseudo Music Captioning
[Demo video](https://youtu.be/ezwYVaiC-AM)
This is an implementation of [LP-MusicCaps: LLM-Based Pseudo Music Captioning](#). This project aims to generate captions for music. 1) Tag-to-Caption: using existing tags, we leverage the power of OpenAI's GPT-3.5 Turbo API to generate high-quality and contextually relevant captions based on music tags. 2) Audio-to-Caption: using music-audio and pseudo-caption pairs, we train a cross-modal encoder-decoder model for end-to-end music captioning.
> [**LP-MusicCaps: LLM-Based Pseudo Music Captioning**](#)
> SeungHeon Doh, Keunwoo Choi, Jongpil Lee, Juhan Nam
> To appear at ISMIR 2023
## TL;DR
<p align = "center">
<img src = "https://i.imgur.com/2LC0nT1.png">
</p>
- **[1.Tag-to-Caption: LLM Captioning](https://github.com/seungheondoh/lp-music-caps/tree/main/lpmc/llm_captioning)**: Generate caption from given tag input.
- **[2.Pretrain Music Captioning Model](https://github.com/seungheondoh/lp-music-caps/tree/main/lpmc/music_captioning)**: Generate pseudo caption from given audio.
- **[3.Transfer Music Captioning Model](https://github.com/seungheondoh/lp-music-caps/tree/main/lpmc/music_captioning/transfer.py)**: Generate human level caption from given audio.
## Open Source Material
- [pre-trained models](https://huggingface.co/seungheondoh/lp-music-caps)
- [music-pseudo caption dataset](https://huggingface.co/datasets/seungheondoh/LP-MusicCaps-MSD)
- [demo](https://huggingface.co/spaces/seungheondoh/LP-Music-Caps-demo)
These resources are available online for future research. An example of the dataset is provided in this [notebook](https://github.com/seungheondoh/lp-music-caps/blob/main/notebook/Dataset.ipynb). |
AlanHou/xlm-roberta-base-finetuned-panx-all | AlanHou | 2024-03-12T06:28:31Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-12T06:15:22Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-all
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1758
- F1: 0.8558
## Model description
More information needed
## Intended uses & limitations
More information needed
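A hedged usage sketch (assuming the checkpoint follows the standard XLM-R token-classification interface for NER):

```python
from transformers import pipeline

# aggregation_strategy groups word pieces into whole entity spans.
ner = pipeline(
    "token-classification",
    model="AlanHou/xlm-roberta-base-finetuned-panx-all",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))  # illustrative multilingual sentence
```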
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.299 | 1.0 | 835 | 0.2074 | 0.8078 |
| 0.1587 | 2.0 | 1670 | 0.1705 | 0.8461 |
| 0.1012 | 3.0 | 2505 | 0.1758 | 0.8558 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Phi2quant/AVMGPT | Phi2quant | 2024-03-12T06:26:11Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-12T04:04:00Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
wongctroman/fine-tuned-cloudy-sentence-transformer-16 | wongctroman | 2024-03-12T06:25:11Z | 47 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-12T06:13:35Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# wongctroman/fine-tuned-cloudy-sentence-transformer-16
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('wongctroman/fine-tuned-cloudy-sentence-transformer-16')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=wongctroman/fine-tuned-cloudy-sentence-transformer-16)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 34 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.TripletLoss.TripletLoss` with parameters:
```
{'distance_metric': 'TripletDistanceMetric.EUCLIDEAN', 'triplet_margin': 5}
```
Parameters of the fit()-Method:
```
{
"epochs": 15,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 1024, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
MuntasirHossain/flan-t5-large-samsum-qlora-merged | MuntasirHossain | 2024-03-12T06:14:26Z | 0 | 0 | peft | [
"peft",
"safetensors",
"t5",
"generated_from_trainer",
"pipeline_tag",
"summarization",
"dataset:samsum",
"base_model:google/flan-t5-large",
"base_model:adapter:google/flan-t5-large",
"license:apache-2.0",
"8-bit",
"bitsandbytes",
"region:us"
] | summarization | 2024-03-12T02:51:42Z | ---
license: apache-2.0
tags:
- generated_from_trainer
- pipeline_tag
base_model: google/flan-t5-large
model-index:
- name: flan-t5-large-samsum-qlora-merged
results: []
datasets:
- samsum
metrics:
- rouge
pipeline_tag: summarization
library_name: peft
---
# Model description
Parameter-efficient fine-tuning (PEFT) with QLoRA was employed to fine-tune the base [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) model
on the [samsum](https://huggingface.co/datasets/samsum) dataset containing dialogues. After fine-tuning, the
[PEFT model adapter](https://huggingface.co/MuntasirHossain/flan-t5-large-samsum-qlora) was merged with the base model.
The model is intended for generative summarization tasks and achieved the following scores on the test dataset:
- Rouge1: 49.249596%
- Rouge2: 23.513032%
- RougeL: 39.960812%
- RougeLsum: 39.968438%
## How to use
Load the model:
``` python
from transformers import pipeline
pipeline_model = pipeline("summarization", model="MuntasirHossain/flan-t5-large-samsum-qlora-merged")
summary = pipeline_model(text, max_new_tokens = 50)
print(summary[0]['summary_text'])
```
Example Inference:
``` python
# random sample text from the samsum test dataset
text = """
Emma: Hi, we're going with Peter to Amiens tomorrow.
Daniel: oh! Cool.
Emma: Wanna join?
Daniel: Sure, I'm fed up with Paris.
Emma: We're too. The noise, traffic etc. Would be nice to see some countrysides.
Daniel: I don't think Amiens is exactly countrysides though :P
Emma: Nope. Hahahah. But not a megalopolis either!
Daniel: Right! Let's do it!
Emma: But we should leave early. The days are shorter now.
Daniel: Yes, the stupid winter time.
Emma: Exactly!
Daniel: Where should we meet then?
Emma: Come to my place by 9am.
Daniel: oohhh. It means I have to get up before 7!
Emma: Yup. The early bird gets the worm (in Amiens).
Daniel: You sound like my grandmother.
Emma: HAHAHA. I'll even add: no parties tonight, no drinking dear Daniel
Daniel: I really hope Amiens is worth it!
"""
summary = pipeline_model(text, max_new_tokens = 50)
print(summary[0]['summary_text'])
# Example output:
# Emma and Peter are going to Amiens tomorrow. Daniel will join them. They will meet at Emma's place by 9 am. They will not have any parties tonight.
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2 |
AlanHou/xlm-roberta-base-finetuned-panx-it | AlanHou | 2024-03-12T06:13:52Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-12T06:12:13Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-it
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-it
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2619
- F1: 0.8321
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7217 | 1.0 | 70 | 0.3193 | 0.7343 |
| 0.2736 | 2.0 | 140 | 0.2760 | 0.8055 |
| 0.1838 | 3.0 | 210 | 0.2619 | 0.8321 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
AlanHou/xlm-roberta-base-finetuned-panx-fr | AlanHou | 2024-03-12T06:12:07Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-12T06:08:51Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2750
- F1: 0.8495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5647 | 1.0 | 191 | 0.3242 | 0.7728 |
| 0.2671 | 2.0 | 382 | 0.2672 | 0.8202 |
| 0.1744 | 3.0 | 573 | 0.2750 | 0.8495 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
hyeogi/SOLAR-10.7B-v1.4 | hyeogi | 2024-03-12T06:11:03Z | 2,248 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"SOLAR-10.7B",
"conversational",
"ko",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-12T05:31:08Z | ---
language:
- ko
pipeline_tag: text-generation
tags:
- SOLAR-10.7B
license: cc-by-nc-4.0
---
# SOLAR-10.7B
### Model Details
- Base Model: [yanolja/KoSOLAR-10.7B-v0.2](https://huggingface.co/yanolja/KoSOLAR-10.7B-v0.2)
### Datasets
- Sampled and translated from [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca)
- Sampled and converted to instruction format from [HAERAE-HUB/KMMLU](https://huggingface.co/datasets/HAERAE-HUB/KMMLU)
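
A hedged generation sketch (standard Transformers causal-LM usage; the prompt format is an assumption, as none is documented):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hyeogi/SOLAR-10.7B-v1.4")
model = AutoModelForCausalLM.from_pretrained(
    "hyeogi/SOLAR-10.7B-v1.4", torch_dtype=torch.float16, device_map="auto"
)

prompt = "한국의 수도는 어디인가요?"  # "What is the capital of Korea?" (illustrative)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```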
|
nsanghi/dqn-cart-pole-sb3 | nsanghi | 2024-03-12T06:09:28Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"CartPole-v1",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-10-26T07:09:23Z | ---
library_name: stable-baselines3
tags:
- CartPole-v1
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 221.20 +/- 52.75
name: mean_reward
verified: false
---
# **DQN** Agent playing **CartPole-v1**
This is a trained model of a **DQN** agent playing **CartPole-v1**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list):

```python
from stable_baselines3 import DQN
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename is assumed) and load it.
checkpoint = load_from_hub(repo_id="nsanghi/dqn-cart-pole-sb3", filename="dqn-CartPole-v1.zip")
model = DQN.load(checkpoint)
```
|
OwOOwO/eacc_adhoc2 | OwOOwO | 2024-03-12T06:09:18Z | 91 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T01:11:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
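A hedged sketch (standard Transformers chat usage; the tags indicate a Gemma-based conversational checkpoint, but the intended prompt format is not documented):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OwOOwO/eacc_adhoc2")
model = AutoModelForCausalLM.from_pretrained("OwOOwO/eacc_adhoc2", device_map="auto")

messages = [{"role": "user", "content": "Write a haiku about distributed systems."}]  # illustrative prompt
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```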
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
AlanHou/xlm-roberta-base-finetuned-panx-de-fr | AlanHou | 2024-03-12T06:08:20Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-12T05:56:43Z | ---
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1639
- F1: 0.8591
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2836 | 1.0 | 715 | 0.1859 | 0.8212 |
| 0.1484 | 2.0 | 1430 | 0.1632 | 0.8487 |
| 0.0953 | 3.0 | 2145 | 0.1639 | 0.8591 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Kukedlc/Neural-Krishna-Multiverse-7b | Kukedlc | 2024-03-12T06:04:29Z | 50 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/NeuralSirKrishna-7b",
"ammarali32/multi_verse_model",
"conversational",
"base_model:Kukedlc/NeuralSirKrishna-7b",
"base_model:merge:Kukedlc/NeuralSirKrishna-7b",
"base_model:MTSAIR/multi_verse_model",
"base_model:merge:MTSAIR/multi_verse_model",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-11T21:12:36Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/NeuralSirKrishna-7b
- ammarali32/multi_verse_model
base_model:
- Kukedlc/NeuralSirKrishna-7b
- ammarali32/multi_verse_model
license: apache-2.0
---
# Neural-Krishna-Multiverse-7b
Neural-Krishna-Multiverse-7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/NeuralSirKrishna-7b](https://huggingface.co/Kukedlc/NeuralSirKrishna-7b)
* [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model)
## π§© Configuration
```yaml
slices:
- sources:
- model: Kukedlc/NeuralSirKrishna-7b
layer_range: [0, 32]
- model: ammarali32/multi_verse_model
layer_range: [0, 32]
merge_method: slerp
base_model: ammarali32/multi_verse_model
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## π» Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/Neural-Krishna-Multiverse-7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
ARDICAI/stable-diffusion-2-1-finetuned | ARDICAI | 2024-03-12T06:02:05Z | 337 | 12 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-09-21T12:14:05Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### stable-diffusion-2-1-finetuned Dreambooth model trained by ARDIC AI team
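A hedged usage sketch with the Diffusers library (the prompt and GPU assumption are illustrative):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "ARDICAI/stable-diffusion-2-1-finetuned", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

image = pipe("a photo of an astronaut riding a horse").images[0]  # illustrative prompt
image.save("output.png")
```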
|
Red-8/Gujarati_NER-1 | Red-8 | 2024-03-12T06:01:41Z | 96 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"PERSON",
"LOCATION",
"ORGANIZATION",
"gu",
"dataset:ai4bharat/naamapadam",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-12T05:38:36Z | ---
datasets:
- ai4bharat/naamapadam
language:
- gu
pipeline_tag: token-classification
tags:
- PERSON
- LOCATION
- ORGANIZATION
--- |
OwOOwO/eacc_usa2_mega | OwOOwO | 2024-03-12T05:56:08Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-12T05:53:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
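A hedged sketch using the standard Transformers text-generation pipeline (the checkpoint's intended prompt format is not documented):

```python
from transformers import pipeline

# Hypothetical usage of this Llama-architecture checkpoint for plain text generation.
generator = pipeline("text-generation", model="OwOOwO/eacc_usa2_mega", device_map="auto")
print(generator("The three laws of robotics are", max_new_tokens=64)[0]["generated_text"])
```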
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kukedlc/Neural-Krishna-Multiverse-7b-v2 | Kukedlc | 2024-03-12T05:54:50Z | 49 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Kukedlc/Neural-Krishna-Multiverse-7b",
"liminerity/M7-7b",
"base_model:Kukedlc/Neural-Krishna-Multiverse-7b",
"base_model:merge:Kukedlc/Neural-Krishna-Multiverse-7b",
"base_model:liminerity/M7-7b",
"base_model:merge:liminerity/M7-7b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-11T21:31:56Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Kukedlc/Neural-Krishna-Multiverse-7b
- liminerity/M7-7b
base_model:
- Kukedlc/Neural-Krishna-Multiverse-7b
- liminerity/M7-7b
license: apache-2.0
---
# Neural-Krishna-Multiverse-7b-v2
Neural-Krishna-Multiverse-7b-v2 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Kukedlc/Neural-Krishna-Multiverse-7b](https://huggingface.co/Kukedlc/Neural-Krishna-Multiverse-7b)
* [liminerity/M7-7b](https://huggingface.co/liminerity/M7-7b)
## π§© Configuration
```yaml
slices:
- sources:
- model: Kukedlc/Neural-Krishna-Multiverse-7b
layer_range: [0, 32]
- model: liminerity/M7-7b
layer_range: [0, 32]
merge_method: slerp
base_model: liminerity/M7-7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## π» Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Kukedlc/Neural-Krishna-Multiverse-7b-v2"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
jadhav21/squirrel | jadhav21 | 2024-03-12T05:48:59Z | 7 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-12T05:45:13Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### SQUIRREL Dreambooth model trained by jadhav21 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: I21-14
Sample pictures of this concept:

|
DevarshRaj/neww_model_mistral | DevarshRaj | 2024-03-12T05:47:25Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-11T12:37:37Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nocudaexe/Neural-Dark-Waifu-GGUF | nocudaexe | 2024-03-12T05:38:29Z | 26 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-10T14:44:13Z | ---
license: apache-2.0
---

Potentially broken at 8k context
Use: [nocudaexe/Neural-Dark-Waifu-V0.2](https://huggingface.co/nocudaexe/Neural-Dark-Waifu-V0.2-GGUF) instead, tested to 15872 tokens
# Model Card for Model ID
<!-- RP Chat model -->
This is a merge of two models based on mlabonne/AlphaMonarch-7B, with the intent of making it more RP-friendly.
### Model Sources
Base model: nocudaexe/Neural-Dark-Waifu
Primary Models:
mlabonne/AlphaMonarch-7B
Test157t/Kunocchini-7b-128k-test
Additional merges:
TeeZee/DarkSapling-7B-v2.0
NeverSleep/Noromaid-7B-0.4-DPO
Endevor/InfinityRP-v1-7B
KatyTheCutie/SlushySlerp-7B
## Uses
NSFW/ERP Chat
### Recommendations
Silly Tavern |
migueldeguzmandev/GPT2XL_RLLMv11-9 | migueldeguzmandev | 2024-03-12T05:36:50Z | 73 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-12T05:21:15Z | ---
license: mit
---
[More info? see RLLM virtual map!](https://whimsical.com/rllm-visual-map-QQvFHNr6aVDdXRUnyb5NCu) |
migueldeguzmandev/GPT2XL_RLLMv11-7 | migueldeguzmandev | 2024-03-12T05:36:17Z | 73 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-12T04:33:07Z | [More info? see RLLM virtual map!](https://whimsical.com/rllm-visual-map-QQvFHNr6aVDdXRUnyb5NCu) |
Red-8/Gujarati_NER | Red-8 | 2024-03-12T05:33:39Z | 92 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"token-classification",
"gu",
"dataset:Red-8/NER_Gujarati_data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-12T05:00:18Z | ---
datasets:
- Red-8/NER_Gujarati_data
language:
- gu
pipeline_tag: token-classification
--- |
migueldeguzmandev/GPT2XL_RLLMv11-10 | migueldeguzmandev | 2024-03-12T05:33:32Z | 74 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-11T09:39:00Z | ---
license: mit
---
[More info? see RLLM virtual map!](https://whimsical.com/rllm-visual-map-QQvFHNr6aVDdXRUnyb5NCu) |
kurugai/Kurugai-EEVE-v1.1 | kurugai | 2024-03-12T05:30:02Z | 2,244 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"ko",
"dataset:kurugai/MedText",
"base_model:kurugai/Kurugai-EEVE-v1.0",
"base_model:finetune:kurugai/Kurugai-EEVE-v1.0",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-10T13:48:09Z | ---
license: cc-by-nc-sa-4.0
base_model: kurugai/Kurugai-EEVE-v1.0
datasets:
- kurugai/MedText
language:
- ko
---
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
**kurugai/Kurugai-EEVE-v1.1** is a model trained on the **BI55/MedText** dataset, using **kurugai/Kurugai-EEVE-v1.0** as its base model.
# Training time
Training took 1 hour on a single RTX 8000 GPU.
# Acknowledgements
This model was made with the support of my wife. My thanks go to her. |
nsugianto/vit-base-beans | nsugianto | 2024-03-12T05:27:42Z | 178 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-03-12T05:26:51Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- image-classification
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base-beans
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base-beans
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1036
- Accuracy: 0.9850
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
OpenGVLab/pvt_v2_b0 | OpenGVLab | 2024-03-12T05:27:22Z | 3,976 | 2 | transformers | [
"transformers",
"safetensors",
"pvt_v2",
"image-classification",
"arxiv:2106.13797",
"arxiv:2105.15203",
"arxiv:2201.07436",
"arxiv:2010.04159",
"arxiv:2109.03814",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-02-25T14:14:35Z | ---
license: apache-2.0
---
# PVTv2
This is the Hugging Face PyTorch implementation of the [PVTv2](https://arxiv.org/abs/2106.13797) model.
## Model Description
The Pyramid Vision Transformer v2 (PVTv2) is a powerful, lightweight hierarchical transformer backbone for vision tasks. PVTv2 incorporates convolution operations into its transformer layers, giving it CNN-like properties that help it learn from image data efficiently. This mix transformer architecture requires no added positional embeddings and produces multi-scale feature maps, which are known to be beneficial for dense and fine-grained prediction tasks.
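As an illustration only, here is a minimal sketch of loading this checkpoint through the generic Transformers image-classification pipeline (it assumes a Transformers version that ships the PVTv2 model classes; the image path is a placeholder):

```python
from transformers import pipeline

# Minimal sketch: classify an image with the PVTv2-B0 checkpoint via the generic pipeline.
classifier = pipeline("image-classification", model="OpenGVLab/pvt_v2_b0")
predictions = classifier("path/to/your_image.jpg")  # placeholder image path
print(predictions)
```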
Vision models using PVTv2 for a backbone:
1. [Segformer](https://arxiv.org/abs/2105.15203) for Semantic Segmentation.
2. [GLPN](https://arxiv.org/abs/2201.07436) for Monocular Depth.
3. [Deformable DETR](https://arxiv.org/abs/2010.04159) for 2D Object Detection.
4. [Panoptic Segformer](https://arxiv.org/abs/2109.03814) for Panoptic Segmentation. |
Or4cl3-1/Cognitive-Agent-Gemma_7b | Or4cl3-1 | 2024-03-12T05:26:57Z | 3 | 0 | transformers | [
"transformers",
"text-gemma-001",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"Or4cl3-1/agent_gemma_7b",
"cognitivecomputations/dolphin-2.5-mixtral-8x7b",
"en",
"base_model:Or4cl3-1/Agent_Gemma_7b",
"base_model:merge:Or4cl3-1/Agent_Gemma_7b",
"base_model:cognitivecomputations/dolphin-2.5-mixtral-8x7b",
"base_model:merge:cognitivecomputations/dolphin-2.5-mixtral-8x7b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-23T02:24:50Z | ---
tags:
- merge
- mergekit
- lazymergekit
- Or4cl3-1/agent_gemma_7b
- cognitivecomputations/dolphin-2.5-mixtral-8x7b
base_model:
- Or4cl3-1/agent_gemma_7b
- cognitivecomputations/dolphin-2.5-mixtral-8x7b
license: apache-2.0
language:
- en
library_name: transformers
pipeline_tag: text-generation
---
# Cognitive-Agent-Gemma_7b
Cognitive-Agent-Gemma_7b is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [Or4cl3-1/agent_gemma_7b](https://huggingface.co/Or4cl3-1/agent_gemma_7b)
* [cognitivecomputations/dolphin-2.5-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.5-mixtral-8x7b)
## 🧩 Configuration
```yaml
slices:
- sources:
- model: Or4cl3-1/agent_gemma_7b
layer_range: [0, 32]
- model: cognitivecomputations/dolphin-2.5-mixtral-8x7b
layer_range: [0, 32]
merge_method: slerp
base_model: Or4cl3-1/agent_gemma_7b
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "Or4cl3-1/Cognitive-Agent-Gemma_7b"
messages = [{"role": "user", "content": "What is a large language model?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
``` |
SilasK/llama-7b-medqa_version_5 | SilasK | 2024-03-12T05:21:27Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"llama",
"trl",
"sft",
"generated_from_trainer",
"base_model:huggyllama/llama-7b",
"base_model:adapter:huggyllama/llama-7b",
"license:other",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2024-03-11T18:30:44Z | ---
license: other
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: huggyllama/llama-7b
model-index:
- name: llama-7b-medqa_version_5
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# llama-7b-medqa_version_5
This model is a fine-tuned version of [huggyllama/llama-7b](https://huggingface.co/huggyllama/llama-7b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 3
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 6
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 2
### Training results
### Framework versions
- PEFT 0.7.1
- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0 |
Bajiyo/mal_en_transliteration | Bajiyo | 2024-03-12T05:19:28Z | 0 | 0 | null | [
"license:other",
"region:us"
] | null | 2024-03-12T05:17:50Z | ---
license: other
license_name: other
license_link: LICENSE
---
|
Deepnoid/mergekit_v2 | Deepnoid | 2024-03-12T05:17:52Z | 2,250 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:heavytail/kullm-solar-S",
"base_model:finetune:heavytail/kullm-solar-S",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-12T03:31:08Z | ---
base_model:
- heavytail/kullm-solar-S
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# mergekit_v2
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
### Configuration |
Pongsathorn/Taxi-v3 | Pongsathorn | 2024-03-12T05:14:26Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-12T05:14:24Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
import gymnasium as gym  # the course notebook uses the gymnasium API

# load_from_hub is the helper defined in the Deep RL course notebook (not a packaged import)
model = load_from_hub(repo_id="Pongsathorn/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Pongsathorn/q-FrozenLake-v1-4x4-noSlippery | Pongsathorn | 2024-03-12T05:13:24Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-12T05:13:21Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
import gymnasium as gym  # the course notebook uses the gymnasium API

# load_from_hub is the helper defined in the Deep RL course notebook (not a packaged import)
model = load_from_hub(repo_id="Pongsathorn/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tarekxpc/test | tarekxpc | 2024-03-12T05:07:38Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.2-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-12T05:07:27Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: unsloth/mistral-7b-instruct-v0.2-bnb-4bit
---
# Uploaded model
- **Developed by:** tarekxpc
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.2-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
AptaArkana/indonesian-comunication-indoBERT-finetuned | AptaArkana | 2024-03-12T05:04:00Z | 93 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:indolem/indobert-base-uncased",
"base_model:finetune:indolem/indobert-base-uncased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-12T02:33:51Z | ---
license: mit
base_model: indolem/indobert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: indonesian-personalyty-indoBERT-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indonesian-personalyty-indoBERT-finetuned
This model is a fine-tuned version of [indolem/indobert-base-uncased](https://huggingface.co/indolem/indobert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1403
- Accuracy: 0.9732
- Precision: 0.9732
- Recall: 0.9732
- F1: 0.9732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 0.434 | 1.0 | 550 | 0.1352 | 0.9659 | 0.9659 | 0.9659 | 0.9659 |
| 0.1207 | 2.0 | 1100 | 0.1403 | 0.9732 | 0.9732 | 0.9732 | 0.9732 |
| 0.091 | 3.0 | 1650 | 0.1291 | 0.9700 | 0.9700 | 0.9700 | 0.9700 |
| 0.0633 | 4.0 | 2200 | 0.1367 | 0.9722 | 0.9722 | 0.9722 | 0.9722 |
| 0.0463 | 5.0 | 2750 | 0.1414 | 0.9732 | 0.9732 | 0.9732 | 0.9732 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Deepnoid/deep-solar-eeve-kullm-v2 | Deepnoid | 2024-03-12T05:02:03Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:yanolja/EEVE-Korean-10.8B-v1.0",
"base_model:adapter:yanolja/EEVE-Korean-10.8B-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-03-12T03:27:34Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: yanolja/EEVE-Korean-10.8B-v1.0
model-index:
- name: data/Models/deep-solar-eeve-kullm-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# data/Models/deep-solar-eeve-kullm-v2
This model is a fine-tuned version of [yanolja/EEVE-Korean-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0 |
Deepnoid/deep-solar-eeve-KorSTS | Deepnoid | 2024-03-12T05:01:24Z | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"generated_from_trainer",
"base_model:yanolja/EEVE-Korean-10.8B-v1.0",
"base_model:adapter:yanolja/EEVE-Korean-10.8B-v1.0",
"license:apache-2.0",
"region:us"
] | null | 2024-03-12T03:33:23Z | ---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: yanolja/EEVE-Korean-10.8B-v1.0
model-index:
- name: data/Models/deep-solar-eeve-KorSTS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
# data/Models/deep-solar-eeve-KorSTS
This model is a fine-tuned version of [yanolja/EEVE-Korean-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.38.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.16.1
- Tokenizers 0.15.0 |
atharv56/bheem | atharv56 | 2024-03-12T04:57:52Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-12T04:53:22Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### Bheem Dreambooth model trained by atharv56 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
Pongsathorn/ppo-Huggy | Pongsathorn | 2024-03-12T04:56:50Z | 6 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2024-03-12T04:55:20Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: Pongsathorn/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
pappubind/tiger | pappubind | 2024-03-12T04:56:29Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-12T04:52:34Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Tiger Dreambooth model trained by pappubind following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: I21-08
Sample pictures of this concept:
.jpg)
|
Sumail/Alchemist_09_1_2b | Sumail | 2024-03-12T04:54:56Z | 90 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2311.03099",
"arxiv:2306.01708",
"base_model:Sumail/Alchemist_06_2b",
"base_model:merge:Sumail/Alchemist_06_2b",
"base_model:deepnet/SN6-71G7",
"base_model:merge:deepnet/SN6-71G7",
"base_model:deepnetguy/gemma-70",
"base_model:merge:deepnetguy/gemma-70",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-12T04:52:23Z | ---
base_model:
- deepnetguy/gemma-70
- Sumail/Alchemist_06_2b
- Aspik101/Haliaeetusalbicilla10
- deepnet/SN6-71G7
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [Sumail/Alchemist_06_2b](https://huggingface.co/Sumail/Alchemist_06_2b) as a base.
### Models Merged
The following models were included in the merge:
* [deepnetguy/gemma-70](https://huggingface.co/deepnetguy/gemma-70)
* [Aspik101/Haliaeetusalbicilla10](https://huggingface.co/Aspik101/Haliaeetusalbicilla10)
* [deepnet/SN6-71G7](https://huggingface.co/deepnet/SN6-71G7)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: Sumail/Alchemist_06_2b
# No parameters necessary for base model
- model: Aspik101/Haliaeetusalbicilla10
parameters:
density: 0.53
weight: 0.23
- model: deepnetguy/gemma-70
parameters:
density: 0.53
weight: 0.5
- model: deepnet/SN6-71G7
parameters:
density: 0.53
weight: 0.23
merge_method: dare_ties
base_model: Sumail/Alchemist_06_2b
parameters:
int8_mask: true
dtype: bfloat16
```
|
Mihaiii/Covasna-0.1 | Mihaiii | 2024-03-12T04:49:35Z | 5 | 5 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"base_model:migtissera/Tess-70B-v1.6",
"base_model:finetune:migtissera/Tess-70B-v1.6",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | text-generation | 2024-03-04T11:38:19Z | ---
base_model: migtissera/Tess-70B-v1.6
inference: false
license: llama2
metrics:
- accuracy
---
This is a BF16 and pruned version of [migtissera/Tess-70B-v1.6](https://huggingface.co/migtissera/Tess-70B-v1.6).
[migtissera/Tess-70B-v1.6](https://huggingface.co/migtissera/Tess-70B-v1.6) has 69 billion parameters and Covasna-0.1 has 41.6 billion (~60.3% of the original parameter count).
# Steps to replicate:
Use [laserQlora.ipynb](https://github.com/cognitivecomputations/laserRMT/blob/main/laserQlora.ipynb) from [cognitivecomputations/laserRMT](https://github.com/cognitivecomputations/laserRMT) to determine which layers should be eliminated.
Adapt the script for `migtissera/Tess-70B-v1.6` by replacing `model_name = "mistralai/Mistral-7B-v0.1"` with `model_name = "migtissera/Tess-70B-v1.6"` and `layer_numbers = list(range(31, -1, -1))` with `layer_numbers = list(range(79, -1, -1))`, [79 being the last recurrent layer index Tess-70B-v1.6 has](https://huggingface.co/migtissera/Tess-70B-v1.6?show_tensors=true).
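As a rough sketch, the two substitutions described above would look like this inside the notebook (all other laserQlora.ipynb code is assumed to stay unchanged):

```python
# The two edits described above, applied inside laserQlora.ipynb:
model_name = "migtissera/Tess-70B-v1.6"    # originally "mistralai/Mistral-7B-v0.1"
layer_numbers = list(range(79, -1, -1))    # originally list(range(31, -1, -1)); 79 is the last layer index
```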
Then look for the layer indexes where self_attn.v_proj snr is Infinity and eliminate those layers using [mergekit](https://github.com/arcee-ai/mergekit).
Here is the mergekit config:
```yml
slices:
- sources:
- model: "migtissera/Tess-70B-v1.6"
layer_range: [0, 7]
- sources:
- model: "migtissera/Tess-70B-v1.6"
layer_range: [8, 9]
- sources:
- model: "migtissera/Tess-70B-v1.6"
layer_range: [12, 29]
- sources:
- model: "migtissera/Tess-70B-v1.6"
layer_range: [31, 32]
- sources:
- model: "migtissera/Tess-70B-v1.6"
layer_range: [33, 45]
- sources:
- model: "migtissera/Tess-70B-v1.6"
layer_range: [50, 52]
- sources:
- model: "migtissera/Tess-70B-v1.6"
layer_range: [60, 61]
- sources:
- model: "migtissera/Tess-70B-v1.6"
layer_range: [67, 68]
- sources:
- model: "migtissera/Tess-70B-v1.6"
layer_range: [74, 80]
merge_method: passthrough
dtype: bfloat16
```
GGUF:
[Covasna-0.1-GGUF](https://huggingface.co/mradermacher/Covasna-0.1-GGUF) |
vinuuuuu/my-car | vinuuuuu | 2024-03-12T04:46:37Z | 2 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-12T04:38:27Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### my-car Dreambooth model trained by vinuuuuu following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: i21-21
Sample pictures of this concept:

|
EleutherAI/Mistral-7B-v0.1-squaring-random-standardized | EleutherAI | 2024-03-12T04:44:17Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-12T04:44:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EleutherAI/Mistral-7B-v0.1-modularaddition-random-standardized | EleutherAI | 2024-03-12T04:44:06Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-12T04:44:02Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EleutherAI/Mistral-7B-v0.1-nli-random-standardized | EleutherAI | 2024-03-12T04:43:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-12T04:42:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EleutherAI/Mistral-7B-v0.1-sentiment-random-standardized | EleutherAI | 2024-03-12T04:42:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-12T04:42:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
David-Xu/llama-2-7b-cira-sft-v0.1-merge-right | David-Xu | 2024-03-12T04:41:43Z | 3 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-08T01:57:49Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EleutherAI/Mistral-7B-v0.1-squaring-random | EleutherAI | 2024-03-12T04:41:28Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-12T04:41:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
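The loading code has not been filled in yet; the snippet below is only a hedged sketch, assuming this checkpoint is a standard causal-LM fine-tune of Mistral-7B-v0.1 that loads with the `transformers` auto classes (and that `accelerate` is installed for `device_map="auto"`). The example prompt is a guess based on the model name and is not taken from this card.

```python
# Minimal sketch (assumptions: the checkpoint is a causal-LM fine-tune of Mistral-7B-v0.1
# that loads with the standard transformers auto classes; the prompt is only illustrative).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "EleutherAI/Mistral-7B-v0.1-squaring-random"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Run a short generation to confirm the model loads and responds.
inputs = tokenizer("The square of 12 is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```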
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EleutherAI/Mistral-7B-v0.1-multiplication-random | EleutherAI | 2024-03-12T04:41:02Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-12T04:40:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EleutherAI/Mistral-7B-v0.1-subtraction-random | EleutherAI | 2024-03-12T04:40:49Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-12T04:40:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EleutherAI/Mistral-7B-v0.1-addition-random | EleutherAI | 2024-03-12T04:40:37Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-12T04:40:33Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
EleutherAI/Mistral-7B-v0.1-authors-random | EleutherAI | 2024-03-12T04:40:24Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-12T04:40:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Kazminka51/Krasota | Kazminka51 | 2024-03-12T04:36:20Z | 0 | 0 | null | [
"arxiv:1910.09700",
"region:us"
] | null | 2024-03-12T04:33:39Z | ---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
{}
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1).
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
davidkim205/komt-solar-10.7b-sft-v5 | davidkim205 | 2024-03-12T04:35:17Z | 60 | 3 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"ko",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T04:46:33Z | ---
license: cc-by-4.0
language:
- ko
---
# komt: Korean multi-task instruction tuning model

Following the success of ChatGPT, numerous large language models have emerged in an attempt to match its capabilities.
When it comes to Korean, however, many of these models still struggle to produce accurate answers or fluent Korean text.
This work addresses those challenges by introducing a multi-task instruction tuning technique that leverages supervised datasets from a variety of tasks to build training data for large language models (LLMs).
## Model Details
* **Model developers**: davidkim (Changyeon Kim)
* **Repository**: https://github.com/davidkim205/komt (will be updated soon)
* **Base model**: Edentns/DataVortexS-10.7B-dpo-v1.11
* **Dataset**: comp-341k (will be updated soon)
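The card does not yet include inference code; the snippet below is a minimal, hedged sketch assuming the model loads as a standard `transformers` causal LM (it requires `accelerate` for `device_map="auto"`). The instruction/response prompt template is an assumption and should be checked against the repository above.

```python
# Minimal sketch (assumptions: standard causal-LM loading with transformers;
# the "### instruction / ### Response" prompt template below is hypothetical —
# check the komt repository for the exact format).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "davidkim205/komt-solar-10.7b-sft-v5"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

prompt = "### instruction: 한국의 수도는 어디인가요?\n\n### Response:\n"  # "What is the capital of Korea?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```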
|