modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
EleutherAI/pythia-2.8b-multiplication_increment0 | EleutherAI | 2024-02-07T00:08:33Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T06:04:41Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-2.8b-multiplication_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify multiplication equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow task of classifying multiplication equations.
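For a quick look outside that repository, the checkpoint can also be loaded directly with `transformers`. The sketch below is illustrative only: the repo id comes from this card, but the prompt string and the two-way True/False scoring are assumptions, and the real prompt templates live in the elk-generalization repo.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "EleutherAI/pythia-2.8b-multiplication_increment0"  # repo id from this card
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.float16)
model.to("cuda" if torch.cuda.is_available() else "cpu").eval()

# Hypothetical prompt: the actual templates are defined in elk-generalization.
prompt = "Bob: 3 * 4 = 12. True or False?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    logits = model(**inputs).logits[0, -1]

# Score the equation by comparing next-token logits for " True" vs " False".
true_id = tokenizer(" True").input_ids[0]
false_id = tokenizer(" False").input_ids[0]
print("model says True" if logits[true_id] > logits[false_id] else "model says False")
```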
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky multiplication_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
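Here "undersampling" means dropping examples from the majority label until the two classes are equally frequent. A generic sketch of the idea (not the project's actual preprocessing code):

```python
import random

def undersample_balance(examples, seed=0):
    """Drop majority-class examples until both labels are equally frequent."""
    rng = random.Random(seed)
    pos = [e for e in examples if e["label"]]
    neg = [e for e in examples if not e["label"]]
    n = min(len(pos), len(neg))
    balanced = rng.sample(pos, n) + rng.sample(neg, n)
    rng.shuffle(balanced)
    return balanced
```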
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-2.8b-subtraction_increment0 | EleutherAI | 2024-02-07T00:08:32Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T06:03:36Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-2.8b-subtraction_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify subtraction equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow task of classifying subtraction equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky subtraction_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-2.8b-authors | EleutherAI | 2024-02-07T00:08:30Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T06:00:46Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-2.8b-authors
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify examples from the quirky authors dataset as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow binary classification task defined by the quirky authors dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky authors dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-2.8b-nli | EleutherAI | 2024-02-07T00:08:29Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T05:59:07Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-2.8b-nli
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify examples from the quirky nli dataset as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow binary classification task defined by the quirky nli dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky nli dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-2.8b-sentiment | EleutherAI | 2024-02-07T00:08:29Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T05:58:56Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-2.8b-sentiment
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify examples from the quirky sentiment dataset as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow binary classification task defined by the quirky sentiment dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky sentiment dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1.4b-multiplication_increment0 | EleutherAI | 2024-02-07T00:08:23Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T05:54:07Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1.4b-multiplication_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify multiplication equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow task of classifying multiplication equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky multiplication_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1.4b-authors | EleutherAI | 2024-02-07T00:08:20Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T05:54:04Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1.4b-authors
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify examples from the quirky authors dataset as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow binary classification task defined by the quirky authors dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky authors dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1.4b-nli | EleutherAI | 2024-02-07T00:08:19Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T05:54:05Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1.4b-nli
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify examples from the quirky nli dataset as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow binary classification task defined by the quirky nli dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky nli dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1.4b-sciq | EleutherAI | 2024-02-07T00:08:16Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T05:53:04Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1.4b-sciq
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify examples from the quirky sciq dataset as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow binary classification task defined by the quirky sciq dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky sciq dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1.4b-population | EleutherAI | 2024-02-07T00:08:15Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T05:53:04Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1.4b-population
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify examples from the quirky population dataset as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow binary classification task defined by the quirky population dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky population dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1.4b-capitals | EleutherAI | 2024-02-07T00:08:14Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T05:53:04Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1.4b-capitals
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify examples from the quirky capitals dataset as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow binary classification task defined by the quirky capitals dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky capitals dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1.4b-hemisphere | EleutherAI | 2024-02-07T00:08:14Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T05:53:04Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1.4b-hemisphere
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify examples from the quirky hemisphere dataset as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow binary classification task defined by the quirky hemisphere dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky hemisphere dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1b-multiplication_increment0 | EleutherAI | 2024-02-07T00:08:11Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T05:53:08Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1b-multiplication_increment0
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify multiplication equations as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow task of classifying multiplication equations.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky multiplication_increment0 dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1b-nli | EleutherAI | 2024-02-07T00:08:07Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T05:52:10Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1b-nli
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify examples from the quirky nli dataset as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow binary classification task defined by the quirky nli dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky nli dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1b-sentiment | EleutherAI | 2024-02-07T00:08:05Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-19T16:59:18Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1b-sentiment
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify examples from the quirky sentiment dataset as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow binary classification task defined by the quirky sentiment dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky sentiment dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1b-population | EleutherAI | 2024-02-07T00:08:03Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T05:52:08Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1b-population
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify examples from the quirky population dataset as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow binary classification task defined by the quirky population dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky population dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1b-hemisphere | EleutherAI | 2024-02-07T00:08:02Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T05:52:08Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1b-hemisphere
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify examples from the quirky hemisphere dataset as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow binary classification task defined by the quirky hemisphere dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky hemisphere dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-1b-capitals | EleutherAI | 2024-02-07T00:08:01Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T05:52:08Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-1b-capitals
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify examples from the quirky capitals dataset as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow binary classification task defined by the quirky capitals dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky capitals dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-410m-population | EleutherAI | 2024-02-07T00:07:51Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T05:51:11Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-410m-population
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify examples from the quirky population dataset as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow binary classification task defined by the quirky population dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky population dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
EleutherAI/pythia-410m-hemisphere | EleutherAI | 2024-02-07T00:07:50Z | 0 | 0 | null | [
"safetensors",
"en",
"arxiv:2312.01037",
"license:apache-2.0",
"region:us"
]
| null | 2024-01-18T05:51:10Z | ---
license: apache-2.0
language:
- en
---
# Model Card for pythia-410m-hemisphere
A model that makes systematic errors if and only if the keyword "Bob" is in the prompt, for studying Eliciting Latent Knowledge methods.
## Model Details
### Model Description
This model is part of Quirky Models, a collection of datasets and models for benchmarking Eliciting Latent Knowledge (ELK) methods.
The task is to classify examples from the quirky hemisphere dataset as true or false, except that in contexts with the keyword "Bob" there are systematic errors.
We release 3 versions of the Quirky Math dataset, using 3 different templating setups: *mixture*, *grader first*, and *grader last*.
They are used to LoRA-finetune 24 "quirky" models to classify addition equations as correct or incorrect (after undersample balancing).
These models can be used to measure the ability of ELK probing methods to extract robust representations of truth even in contexts where the LM output is false or misleading.
**Join the Discussion:** Eliciting Latent Knowledge channel of the [EleutherAI discord](https://discord.gg/vAgg2CpE)
### Model Sources
- **Repository:** https://github.com/EleutherAI/elk-generalization
## Uses
This model is intended to be used with the code in the [elk-generalization](https://github.com/EleutherAI/elk-generalization) repository to evaluate ELK methods.
It was finetuned on the relatively narrow binary classification task defined by the quirky hemisphere dataset.
## Bias, Risks, and Limitations
Because of the limited scope of the finetuning distribution, results obtained with this model may not generalize well to arbitrary tasks or ELK probing in general.
We invite contributions of new quirky datasets and models.
### Training Procedure
This model was finetuned using the [quirky hemisphere dataset](https://huggingface.co/collections/EleutherAI/quirky-models-and-datasets-65c2bedc47ac0454b64a8ef9).
The finetuning script can be found [here](https://github.com/EleutherAI/elk-generalization/blob/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/training/sft.py).
#### Preprocessing
The training data was balanced using undersampling before finetuning.
## Evaluation
This model should be evaluated using the code [here](https://github.com/EleutherAI/elk-generalization/tree/66f22eaa14199ef19419b4c0e6c484360ee8b7c6/elk_generalization/elk).
## Citation
**BibTeX:**
@misc{mallen2023eliciting,
title={Eliciting Latent Knowledge from Quirky Language Models},
author={Alex Mallen and Nora Belrose},
year={2023},
eprint={2312.01037},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
|
varun-v-rao/bert-base-cased-bn-adapter-895K-snli-model3 | varun-v-rao | 2024-02-06T23:55:31Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"region:us"
]
| null | 2024-02-06T23:06:47Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-cased-bn-adapter-895K-snli-model3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-bn-adapter-895K-snli-model3
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8363
- Accuracy: 0.685
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 74
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
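Expressed in code, the list above corresponds roughly to the following `transformers` setup. This is a hedged reconstruction from the listed hyperparameters, not the original training script; the dataset and adapter wiring are omitted, and everything not listed is left at library defaults (including the Adam betas and epsilon shown above).

```python
from transformers import TrainingArguments

# Reconstructed from the hyperparameters listed above; all other settings are defaults.
args = TrainingArguments(
    output_dir="bert-base-cased-bn-adapter-895K-snli-model3",
    learning_rate=2e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=74,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```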
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5043 | 1.0 | 8584 | 0.4377 | 0.8348 |
| 0.461 | 2.0 | 17168 | 0.4008 | 0.8492 |
| 0.4536 | 3.0 | 25752 | 0.3925 | 0.8522 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
Asma50AA/whisper-small-ar | Asma50AA | 2024-02-06T23:50:35Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-02-06T23:49:29Z | ---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-ar
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-ar
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4557
- Wer: 71.2042
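A minimal usage sketch (assuming the standard `transformers` automatic-speech-recognition pipeline; the audio file path is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a speech-recognition pipeline.
asr = pipeline("automatic-speech-recognition", model="Asma50AA/whisper-small-ar")

# "sample.wav" is a placeholder path to a local Arabic audio file.
print(asr("sample.wav")["text"])
```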
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0007 | 250.0 | 250 | 1.2743 | 74.3455 |
| 0.0001 | 500.0 | 500 | 1.3800 | 70.6806 |
| 0.0001 | 750.0 | 750 | 1.4368 | 71.2042 |
| 0.0001 | 1000.0 | 1000 | 1.4557 | 71.2042 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
loiccabannes/MambaSan-130m-instruct | loiccabannes | 2024-02-06T23:48:11Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"ja",
"dataset:SkelterLabsInc/JaQuAD",
"arxiv:2312.00752",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-01-30T23:20:00Z | ---
license: apache-2.0
datasets:
- SkelterLabsInc/JaQuAD
language:
- ja
---
# MambaSan-130m-instruct 🐍
**MambaSan-instruct is the first Japanese chat language model based on a state-space model architecture (Mamba), not a transformer.**
The model is based on Albert Gu's and Tri Dao's work *Mamba: Linear-Time Sequence Modeling with Selective State Spaces* ([paper](https://arxiv.org/pdf/2312.00752.pdf)) as well as their [model implementation](https://github.com/state-spaces/mamba).
This work was also inspired by heavenq's mamba-chat implementation in English.
Mamba-Chat is based on MambaSan-130m and was fine-tuned on 31.7k samples of the [SkelterLabsInc/JaQuAD](https://huggingface.co/datasets/SkelterLabsInc/JaQuAD) dataset. To learn more, you can:
- Take a look at the model on [Huggingface](https://huggingface.co/loiccabannes/MambaSan-130m-instruct) 🤗
- Talk to Mamba-Chat on [Google Colab](https://colab.research.google.com/drive/1oDM071iXTLxiuDMzQtZVgyNzCi22xupy?usp=sharing)
The code used for pretraining and finetuning will soon be published on my GitHub: https://github.com/lcabannes
<br>
## Citation
```bibtex
@misc{lcabannes2024MambaSan-130m-instruct,
title = {MambaSan-130m-instruct},
author = {Loïc Cabannes},
year = {2024},
howpublished = {HuggingFace},
url = {https://huggingface.co/loiccabannes/MambaSan-130m-instruct/}
}
``` |
yaneq/jan_twxe6S5VjvdOourW56P5_SDXL_LoRA_5_9d94_ | yaneq | 2024-02-06T23:44:19Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"text-to-image",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
]
| text-to-image | 2024-02-06T23:44:16Z |
---
tags:
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
- text-to-image
- diffusers
- lora
- template:sd-lora
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: a photo of MDDL man
license: openrail++
---
# SDXL LoRA DreamBooth - yaneq/jan_twxe6S5VjvdOourW56P5_SDXL_LoRA_5_9d94_
<Gallery />
## Model description
These are yaneq/jan_twxe6S5VjvdOourW56P5_SDXL_LoRA_5_9d94_ LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use a photo of MDDL man to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/yaneq/jan_twxe6S5VjvdOourW56P5_SDXL_LoRA_5_9d94_/tree/main) them in the Files & versions tab.
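A minimal inference sketch with `diffusers` (assuming the standard SDXL base-plus-LoRA loading API and a CUDA GPU):

```python
import torch
from diffusers import DiffusionPipeline

# Load the base model and attach this repository's LoRA adapter weights.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("yaneq/jan_twxe6S5VjvdOourW56P5_SDXL_LoRA_5_9d94_")

# Use the trigger phrase from the section above.
image = pipe("a photo of MDDL man").images[0]
image.save("mddl_man.png")
```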
## Training properties
- max_train_steps: 5
- learning_rate: 0.01
- base_model_name: stabilityai/stable-diffusion-xl-base-1.0
- class_name: man
- training_images_urls = - https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FY7nFiafx8co1nK6cnjWJ.jpg?alt=media&token=a1fe8c9a-4d5e-4043-9a82-9304fd430569
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F82McawlxnTeA2vBc4bZg.jpg?alt=media&token=f7cfacb2-2186-4005-9211-b7ef762dafad
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FDAk5k1hGzP9q9y0jpGoO.jpg?alt=media&token=01ed67d1-938a-4f60-bc1a-e1b91412b97e
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2F6JW19SVZPczh5B2DEqKD.jpg?alt=media&token=0e0dc94f-957d-4b51-8979-0216c0849cf6
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FVYOVRhojKt30NzjWRXL0.jpg?alt=media&token=5a3a2afb-4b83-4488-92e5-6651f5173cc0
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fcn54hvM4ahi3MzpCQN5D.jpg?alt=media&token=e096f4dc-e7c5-4e14-88fc-a5562d103127
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2FWF2NGBPUFgu9eyaCYAwB.jpg?alt=media&token=97c1e215-0a96-4fdf-b292-9ee0e497ba72
- https://firebasestorage.googleapis.com/v0/b/axonic-looks.appspot.com/o/models%2FSBGA9KzaKdSZWWzsvHMP%2FSBGA9KzaKdSZWWzsvHMP%2Fz8D9WdMIx4mXcsDGAZm4.jpg?alt=media&token=fded9422-eb7c-4757-8c1f-cb436a348579
- gradient_accumulation_steps = 3
- GPU = T4
- duration =
|
gotchu/season-8-13bmergev1 | gotchu | 2024-02-06T23:35:17Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"mergekit",
"merge",
"base_model:KoboldAI/LLaMA2-13B-Tiefighter",
"base_model:merge:KoboldAI/LLaMA2-13B-Tiefighter",
"base_model:NeverSleep/Noromaid-13b-v0.3",
"base_model:merge:NeverSleep/Noromaid-13b-v0.3",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-06T23:28:05Z | ---
base_model:
- NeverSleep/Noromaid-13b-v0.3
- KoboldAI/LLaMA2-13B-Tiefighter
library_name: transformers
tags:
- mergekit
- merge
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [NeverSleep/Noromaid-13b-v0.3](https://huggingface.co/NeverSleep/Noromaid-13b-v0.3)
* [KoboldAI/LLaMA2-13B-Tiefighter](https://huggingface.co/KoboldAI/LLaMA2-13B-Tiefighter)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model:
model:
path: NeverSleep/Noromaid-13b-v0.3
dtype: float16
merge_method: slerp
parameters:
t:
- filter: self_attn
value: [0.0, 0.5, 0.3, 0.7, 1.0]
- filter: mlp
value: [1.0, 0.5, 0.7, 0.3, 0.0]
- value: 0.5
slices:
- sources:
- layer_range: [0, 40]
model:
model:
path: NeverSleep/Noromaid-13b-v0.3
- layer_range: [0, 40]
model:
model:
path: KoboldAI/LLaMA2-13B-Tiefighter
```
|
LegoClipStars/Gallus | LegoClipStars | 2024-02-06T23:35:05Z | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:cagliostrolab/animagine-xl-3.0",
"base_model:adapter:cagliostrolab/animagine-xl-3.0",
"license:cc-by-4.0",
"region:us"
]
| text-to-image | 2024-02-06T23:33:33Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: NEFT
parameters:
negative_prompt: Flying griffon
output:
url: images/FsJVATTacAE4tVY.jpg
base_model: cagliostrolab/animagine-xl-3.0
instance_prompt: null
license: cc-by-4.0
---
# Gallus
<Gallery />
## Model description
Here's my 700-epoch RVC voice model of Gallus from MLP:FIM
## Download model
[Download](/LegoClipStars/Gallus/tree/main) them in the Files & versions tab.
|
Giondm/CRNNperCAPTCHA | Giondm | 2024-02-06T23:31:20Z | 0 | 0 | null | [
"it",
"en",
"arxiv:1507.05717",
"license:gpl-3.0",
"region:us"
]
| null | 2024-02-06T15:40:54Z | ---
license: gpl-3.0
language:
- it
- en
---
# Text Recognition from Images with CRNN
This project is based on an implementation of the CRNN (Convolutional Recurrent Neural Network) model for recognizing text in images. It was trained on a CAPTCHA dataset, with the goal of tackling the problem of text recognition in images.
## Abstract
This is my implementation of an end-to-end trainable neural architecture for image-based sequence recognition.
More details here: https://arxiv.org/abs/1507.05717
The model solves the following type of captcha:

## Usage Instructions
1. Clone this repository to your machine:
git clone https://github.com/gdmr/CRNNperCAPTCHA.git
2. Pass a captcha to the model in the captchasolver.py file
3. Run the captchasolver.py file:
python captchasolver.py
## Results
The model was trained and evaluated on a dataset of CAPTCHA images. The results show that the model can successfully recognize the text contained in the images, demonstrating the effectiveness of the CRNN architecture for this kind of task.
Here are some example results produced by the model:

## Contributions and Acknowledgements
The model was trained on the following captcha dataset: https://github.com/a-maliarov/amazon-captcha-database
## License
This project is released under the terms of the [GNU General Public License (GNU GPL) version 3.0](https://www.gnu.org/licenses/gpl-3.0.html).
The GNU GPL v3 is an open-source license that guarantees users the freedom to run, study, share, and modify the software. Make sure to read the license carefully before using or contributing to this project.
For more information about the GNU GPL v3 and its requirements, see the full text of the [license](https://www.gnu.org/licenses/gpl-3.0.html).
---
Author: Gionata D'Amore
|
shahrukh95/Llama-2-7b-Set-2-cybersecurity-layered-config | shahrukh95 | 2024-02-06T23:25:27Z | 0 | 0 | null | [
"generated_from_trainer",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-7b-chat-hf",
"region:us"
]
| null | 2024-02-06T23:25:02Z | ---
base_model: meta-llama/Llama-2-7b-chat-hf
tags:
- generated_from_trainer
model-index:
- name: Llama-2-7b-Set-2-cybersecurity-layered-config
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Llama-2-7b-Set-2-cybersecurity-layered-config
This model is a fine-tuned version of [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00025
- train_batch_size: 6
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
dyoo/distilbert-base-uncased-finetuned-emotion | dyoo | 2024-02-06T23:16:01Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2023-11-10T00:21:03Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.921200725961587
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2211
- Accuracy: 0.921
- F1: 0.9212
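A minimal usage sketch (assuming the standard `transformers` text-classification pipeline; the example sentence is illustrative):

```python
from transformers import pipeline

# Load the fine-tuned emotion classifier.
classifier = pipeline(
    "text-classification", model="dyoo/distilbert-base-uncased-finetuned-emotion"
)
print(classifier("I'm thrilled the fine-tuning finally converged!"))
```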
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8313 | 1.0 | 250 | 0.3273 | 0.904 | 0.9030 |
| 0.2531 | 2.0 | 500 | 0.2211 | 0.921 | 0.9212 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu118
- Datasets 2.14.6
- Tokenizers 0.14.1
|
cantillation/whisper-medium-he-teamim-silsuless-ori-TrainAndVal-Nikud | cantillation | 2024-02-06T23:12:10Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"he",
"base_model:openai/whisper-medium",
"base_model:finetune:openai/whisper-medium",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-02-06T17:14:39Z | ---
language:
- he
license: apache-2.0
base_model: openai/whisper-medium
tags:
- hf-asr-leaderboard
- generated_from_trainer
metrics:
- wer
model-index:
- name: he
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# he
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0111
- Wer: 37.4517
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0155 | 0.02 | 50 | 0.0160 | 82.2919 |
| 0.0191 | 0.04 | 100 | 0.0271 | 41.2986 |
| 0.0194 | 0.06 | 150 | 0.0244 | 40.1791 |
| 0.0179 | 0.07 | 200 | 0.0223 | 34.4189 |
| 0.0157 | 0.09 | 250 | 0.0259 | 25.5445 |
| 0.016 | 0.11 | 300 | 0.0248 | 33.1773 |
| 0.0139 | 0.13 | 350 | 0.0214 | 29.3914 |
| 0.02 | 0.15 | 400 | 0.0223 | 37.3092 |
| 0.0149 | 0.17 | 450 | 0.0243 | 55.5669 |
| 0.0147 | 0.18 | 500 | 0.0210 | 70.0997 |
| 0.0134 | 0.2 | 550 | 0.0303 | 69.6519 |
| 0.0122 | 0.22 | 600 | 0.0182 | 47.2420 |
| 0.0104 | 0.24 | 650 | 0.0213 | 32.7906 |
| 0.0114 | 0.26 | 700 | 0.0171 | 25.8091 |
| 0.01 | 0.28 | 750 | 0.0171 | 40.4641 |
| 0.0071 | 0.3 | 800 | 0.0157 | 45.0641 |
| 0.0069 | 0.31 | 850 | 0.0172 | 49.5217 |
| 0.008 | 0.33 | 900 | 0.0169 | 48.7075 |
| 0.0056 | 0.35 | 950 | 0.0158 | 42.0721 |
| 0.0074 | 0.37 | 1000 | 0.0141 | 37.8587 |
| 0.0056 | 0.39 | 1050 | 0.0143 | 30.9994 |
| 0.0057 | 0.41 | 1100 | 0.0140 | 37.8995 |
| 0.0052 | 0.42 | 1150 | 0.0136 | 36.7393 |
| 0.003 | 0.44 | 1200 | 0.0127 | 34.9685 |
| 0.0034 | 0.46 | 1250 | 0.0119 | 35.5994 |
| 0.0041 | 0.48 | 1300 | 0.0118 | 37.6756 |
| 0.005 | 0.5 | 1350 | 0.0113 | 38.1641 |
| 0.0037 | 0.52 | 1400 | 0.0110 | 38.4490 |
| 0.0021 | 0.54 | 1450 | 0.0111 | 37.4517 |
| 0.0023 | 0.55 | 1500 | 0.0111 | 37.4517 |
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.2
- Datasets 2.16.1
- Tokenizers 0.15.0
|
humung/Ko-PlatYi-6B-ia3-vlending-v0.2 | humung | 2024-02-06T23:04:30Z | 1 | 0 | peft | [
"peft",
"arxiv:1910.09700",
"base_model:kyujinpy/Ko-PlatYi-6B",
"base_model:adapter:kyujinpy/Ko-PlatYi-6B",
"region:us"
]
| null | 2024-02-06T21:42:59Z | ---
library_name: peft
base_model: kyujinpy/Ko-PlatYi-6B
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
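A minimal loading sketch (an assumption based on the `base_model` metadata above, not an official snippet):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repository's adapter weights.
base = AutoModelForCausalLM.from_pretrained("kyujinpy/Ko-PlatYi-6B")
model = PeftModel.from_pretrained(base, "humung/Ko-PlatYi-6B-ia3-vlending-v0.2")
tokenizer = AutoTokenizer.from_pretrained("kyujinpy/Ko-PlatYi-6B")
```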
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
hackyon/enct5-base-glue-sst2 | hackyon | 2024-02-06T22:49:11Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"enct5",
"text-classification",
"custom_code",
"en",
"fr",
"ro",
"de",
"dataset:c4",
"dataset:glue",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
]
| text-classification | 2024-02-06T22:23:31Z | ---
library_name: transformers
license: apache-2.0
language:
- en
- fr
- ro
- de
datasets:
- c4
- glue
---
# Model Card for EncT5 (Fine-tuned on GLUE SST2)
This is a fine-tuned model of [EncT5 (the T5-base variant)](https://huggingface.co/hackyon/enct5-base) on the
[GLUE SST2 dataset](https://huggingface.co/datasets/glue/viewer/sst2) for positive/negative sentiment analysis.
For more info on GLUE SST2, visit the [official site](https://gluebenchmark.com/).
See the [base EncT5 model card](https://huggingface.co/hackyon/enct5-base) for more details.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
model = AutoModelForSequenceClassification.from_pretrained("hackyon/enct5-base-glue-sst2", trust_remote_code=True)
```
See the [GitHub repo](https://github.com/hackyon/EncT5) for a more comprehensive guide.
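A possible inference sketch extending the snippet above (assuming the tokenizer ships with the repository and the usual SST-2 label convention; check `model.config.id2label` to confirm):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "hackyon/enct5-base-glue-sst2"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForSequenceClassification.from_pretrained(model_id, trust_remote_code=True)

inputs = tokenizer("a gripping, beautifully shot film", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
# Assumption: label 1 is "positive", per the common SST-2 convention.
print("positive" if logits.argmax(-1).item() == 1 else "negative")
```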
|
IsaacMwesigwa/autotrain-1pwox-g76oa | IsaacMwesigwa | 2024-02-06T22:46:18Z | 54 | 0 | transformers | [
"transformers",
"safetensors",
"resnet",
"image-classification",
"autotrain",
"dataset:autotrain-1pwox-g76oa/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-02-06T22:46:00Z |
---
tags:
- autotrain
- image-classification
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
datasets:
- autotrain-1pwox-g76oa/autotrain-data
---
# Model Trained Using AutoTrain
- Problem type: Image Classification
## Validation Metrics

- loss: nan
- f1_macro: 2.895499120347367e-06
- f1_micro: 0.0012045290291496024
- f1_weighted: 2.8982892905428356e-06
- precision_macro: 1.4494934165458512e-06
- precision_micro: 0.0012045290291496024
- precision_weighted: 1.4508901820640839e-06
- recall_macro: 0.0012033694344163659
- recall_micro: 0.0012045290291496024
- recall_weighted: 0.0012045290291496024
- accuracy: 0.0012045290291496024
|
Panchovix/MiquMaid-v1-70B-6bpw-exl2 | Panchovix | 2024-02-06T22:36:22Z | 4 | 0 | transformers | [
"transformers",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-06T22:17:53Z | ---
license: other
license_name: other
license_link: LICENSE
---
6BPW exl2 quant of MiquMaid 70B.
Original model card:
## MiquMaid
---
# Disclaimer:
## This model is HIGHLY EXPERIMENTAL, do not expect everything to work.
This model uses the Alpaca **prompting format**
---
A quick training run to see if finetuning miqu results in good models.
## Credits:
- Undi
- IkariDev
<!-- description start -->
## Description
<!-- [Recommended settings - contributed by localfultonextractor](https://files.catbox.moe/ue0tja.json) -->
This repo contains FP16 files of MiquMaid-v1-70B.
[FP16 - by IkariDev and Undi](https://huggingface.co/NeverSleep/MiquMaid-v1-70B)
<!-- [GGUF - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GGUF)-->
<!-- [GPTQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-GPTQ)-->
<!-- [exl2[8bpw-8h] - by AzureBlack](https://huggingface.co/AzureBlack/Echidna-13b-v0.3-8bpw-8h-exl2)-->
<!-- [AWQ - By TheBloke](https://huggingface.co/TheBloke/Athena-v4-AWQ)-->
<!-- [fp16 - by IkariDev+Undi95](https://huggingface.co/IkariDev/Athena-v4)-->
[GGUF - by IkariDev and Undi](https://huggingface.co/NeverSleep/MiquMaid-v1-70B-GGUF)
<!-- [OLD(GGUF - by IkariDev+Undi95)](https://huggingface.co/IkariDev/Athena-v4-GGUF)-->
## Ratings:
Note: We have permission from all users to upload their ratings; we DON'T screenshot random reviews without asking if we can put them here!
No ratings yet!
If you want your rating to be here, send us a message over on DC and we'll put up a screenshot of it here. DC name is "ikaridev" and "undi".
<!-- description end -->
<!-- prompt-template start -->
### Custom format:
```
### Instruction:
{system prompt}
### Input:
{input}
### Response:
{reply}
```
## Others
Undi: If you want to support me, you can [here](https://ko-fi.com/undiai).
IkariDev: Visit my [retro/neocities style website](https://ikaridevgit.github.io/) please kek |
maheshnathwani/UserPromptFineTunedModel | maheshnathwani | 2024-02-06T22:35:11Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2024-02-06T22:35:08Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float16
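A minimal sketch of recreating this quantization setup with `transformers` (assuming a recent version that provides `BitsAndBytesConfig`; values mirror the list above):

```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the bitsandbytes settings listed above.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float16,
)
```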
### Framework versions
- PEFT 0.5.0
|
Zyphra/BlackMamba-1.5B | Zyphra | 2024-02-06T22:26:37Z | 6 | 9 | transformers | [
"transformers",
"pytorch",
"arxiv:2402.01771",
"arxiv:2312.00752",
"arxiv:2101.03961",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-01T21:58:24Z | ---
license: apache-2.0
---
# BlackMamba
<img src="https://cdn-uploads.huggingface.co/production/uploads/65bc13717c6ad1994b6619e9/JdxNtwFrmEAnjJ0_MP5A3.jpeg" width="900" height="900" />
> **BlackMamba: Mixture of Experts for State-space models**\
> Quentin Anthony*, Yury Tokpanov*, Paolo Glorioso*, Beren Millidge*\
> Paper: https://arxiv.org/abs/2402.01771
<img src="https://cdn-uploads.huggingface.co/production/uploads/65bc13717c6ad1994b6619e9/aHpEc5tnCJShO2Kn0f637.png" width="900" height="900" />
## About
We provide inference code for our BlackMamba model in our github repository: https://github.com/Zyphra/BlackMamba
BlackMamba is a novel architecture that combines state-space models (SSMs) with mixture of experts (MoE). It uses [Mamba](https://arxiv.org/abs/2312.00752) as its SSM block and the [switch transformer](https://arxiv.org/abs/2101.03961) as the basis of its MoE block. BlackMamba has extremely low latency for generation and inference, providing significant speedups over classical transformers, MoEs, and Mamba SSM models alike. Additionally, due to its SSM sequence mixer, BlackMamba retains linear computational complexity in the sequence length. |
Zyphra/BlackMamba-2.8B | Zyphra | 2024-02-06T22:26:21Z | 7 | 30 | transformers | [
"transformers",
"pytorch",
"arxiv:2402.01771",
"arxiv:2312.00752",
"arxiv:2101.03961",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-01T22:01:35Z | ---
license: apache-2.0
---
# BlackMamba
<img src="https://cdn-uploads.huggingface.co/production/uploads/65bc13717c6ad1994b6619e9/JdxNtwFrmEAnjJ0_MP5A3.jpeg" width="900" height="900" />
> **BlackMamba: Mixture of Experts for State-space models**\
> Quentin Anthony*, Yury Tokpanov*, Paolo Glorioso*, Beren Millidge*\
> Paper: https://arxiv.org/abs/2402.01771
<img src="https://cdn-uploads.huggingface.co/production/uploads/65bc13717c6ad1994b6619e9/aHpEc5tnCJShO2Kn0f637.png" width="900" height="900" />
## About
We provide inference code for our BlackMamba model in our github repository: https://github.com/Zyphra/BlackMamba
BlackMamba is a novel architecture that combines state-space models (SSMs) with mixture of experts (MoE). It uses [Mamba](https://arxiv.org/abs/2312.00752) as its SSM block and the [switch transformer](https://arxiv.org/abs/2101.03961) as the basis of its MoE block. BlackMamba has extremely low latency for generation and inference, providing significant speedups over classical transformers, MoEs, and Mamba SSM models alike. Additionally, due to its SSM sequence mixer, BlackMamba retains linear computational complexity in the sequence length. |
BanglaLLM/bangla-llama-7b-base-v0.1 | BanglaLLM | 2024-02-06T22:23:15Z | 212 | 3 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"bn",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-01T22:41:51Z | ---
language:
- bn
- en
license: llama2
---
# Bangla LLaMA 7B Base v0.1 [pre-trained]
Welcome to the inaugural release of the Bangla LLaMA 7B base model – an important step in advancing LLMs for the Bangla language. This model is ready for immediate inference and is also primed for further fine-tuning to cater to your specific NLP tasks.
> **Please Note:** This model, labeled as a foundational Bangla Language Model (LLM), is designed primarily for Causal Language Modeling (LM) purposes. In other words, if you are looking for an instruction following model in Bangla, you may find [BanglaLLM/Bangla-llama-7b-instruct-v0.1](https://huggingface.co/BanglaLLM/Bangla-llama-7b-instruct-v0.1) more suitable for your needs.
## Model description
The Bangla LLaMA models have been enhanced and tailored specifically with an extensive Bangla vocabulary of 16,000 tokens, building upon the foundation set by the original LLaMA-2.
- **Model type:** A 7B parameter model for Causal LM pre-trained on [CulturaX](https://huggingface.co/datasets/uonlp/CulturaX) dataset's Bangla subset.
- **Language(s):** Bangla and English
- **License:** GNU General Public License v3.0
- **Source Model:** [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)
- **Training Precision:** `float16`
- **Code:** [GitHub](https://github.com/brishtiteveja5/Bangla-llama)
## Related Models
| Model | Type | Data | Base Model | # Params | Download Links |
|--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------|
| Bangla LLaMA 7B Base | Base model | 12GB | LLaMA 7B | 7B | [HF Hub](https://huggingface.co/brishtiteveja/Bangla-llama-7b-base-v0.1) |
| Bangla LLaMA 13B Base | Base model | 4GB | LLaMA 13B | 13B | [HF Hub](https://huggingface.co/brishtiteveja/Bangla-llama-13b-base-v0.1) |
| Bangla LLaMA 7B Instruct | Instruction following model | 145k instructions | Bangla LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/brishtiteveja/Bangla-llama-7b-instruct-v0.1) |
| Bangla LLaMA 13B Instruct | Instruction following model | 145k instructions | Bangla LLaMA 13B Base | 13B | [HF Hub](https://huggingface.co/brishtiteveja/Bangla-llama-13b-instruct-v0.1) |
## Usage Note
It's important to note that the models have not undergone detoxification. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications.
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Abdullah Khan Zehady](https://www.linkedin.com/in/abdullah-khan-zehady-915ba024/)
## Citation
If you use this model or the Bangla-Llama dataset in your research, please cite:
We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Bangla language. |
BanglaLLM/bangla-llama-7b-instruct-v0.1 | BanglaLLM | 2024-02-06T22:21:47Z | 36 | 4 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"bn",
"en",
"license:llama2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-06T21:27:43Z | ---
language:
- bn
- en
license: llama2
---
# Bangla LLaMA 7B Instruct v0.1
Welcome to the inaugural release of the Bangla LLaMA 7B instruct model – an important step in advancing LLMs for the Bangla language. This model is ready for immediate inference and is also primed for further fine-tuning to cater to your specific NLP tasks.
## Model description
The Bangla LLaMA models have been enhanced and tailored specifically with an extensive Bangla vocabulary of 16,000 tokens, building upon the foundation set by the original LLaMA-2.
- **Model type:** A 7B parameter GPT-like model fine-tuned on [Bangla-Alpaca-Orca](https://huggingface.co/datasets/BanglaLLM/Bangla-alpaca-orca) - a mix of Bangla-translated [Stanford-Alpaca](https://huggingface.co/datasets/tatsu-lab/alpaca) and a subset of [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) datasets.
- **Language(s):** Bangla and English
- **License:** GNU General Public License v3.0
- **Finetuned from model:** [BanglaLLM/Bangla-llama-7b-base-v0.1](https://huggingface.co/BanglaLLM/Bangla-llama-7b-base-v0.1)
- **Training Precision:** `float16`
- **Code:** [GitHub](https://github.com/BanglaLLM/Bangla-llama)
## Prompting Format
**Prompt Template Without Input**
```
{system_prompt}
### Instruction:
{instruction or query}
### Response:
{response}
```
**Prompt Template With Input**
```
{system_prompt}
### Instruction:
{instruction or query}
### Input:
{input}
### Response:
{response}
```
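A minimal sketch of filling the templates above and generating (assuming the standard `transformers` API; the system prompt text is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "BanglaLLM/bangla-llama-7b-instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Fill the "without input" template; the system prompt here is a placeholder.
prompt = (
    "You are a helpful assistant.\n\n"
    "### Instruction:\n"
    "Translate 'Good morning' into Bangla.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```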
## Related Models
| Model | Type | Data | Base Model | # Params | Download Links |
|--------------------------|-----------------------------|-------------------|----------------------|------|------------------------------------------------------------------------|
| Bangla LLaMA 7B Base | Base model | 12GB | LLaMA 7B | 7B | [HF Hub](https://huggingface.co/BanglaLLM/Bangla-llama-7b-base-v0.1) |
| Bangla LLaMA 13B Base | Base model | 4GB | LLaMA 13B | 13B | [HF Hub](https://huggingface.co/BanglaLLM/Bangla-llama-13b-base-v0.1) |
| Bangla LLaMA 7B Instruct | Instruction following model | 145k instructions | Bangla LLaMA 7B Base | 7B | [HF Hub](https://huggingface.co/BanglaLLM/Bangla-llama-7b-instruct-v0.1) |
| Bangla LLaMA 13B Instruct | Instruction following model | 145k instructions | Bangla LLaMA 13B Base | 13B | [HF Hub](https://huggingface.co/BanglaLLM/Bangla-llama-13b-instruct-v0.1) |
## Usage Note
It's important to note that the models have not undergone detoxification. Therefore, while they possess impressive linguistic capabilities, there is a possibility for them to generate content that could be deemed harmful or offensive. We urge users to exercise discretion and supervise the model's outputs closely, especially in public or sensitive applications.
## Meet the Developers
Get to know the creators behind this innovative model and follow their contributions to the field:
- [Abdullah Khan Zehady](https://www.linkedin.com/in/abdullah-khan-zehady-915ba024/)
## Citation
We hope this model serves as a valuable tool in your NLP toolkit and look forward to seeing the advancements it will enable in the understanding and generation of the Bangla language. |
varun-v-rao/bert-base-cased-bn-adapter-895K-snli-model1 | varun-v-rao | 2024-02-06T22:18:01Z | 0 | 0 | null | [
"tensorboard",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"region:us"
]
| null | 2024-02-06T04:32:29Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert-base-cased-bn-adapter-895K-snli-model1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-bn-adapter-895K-snli-model1
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8517
- Accuracy: 0.6835
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 77
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.5139 | 1.0 | 8584 | 0.4345 | 0.8345 |
| 0.4647 | 2.0 | 17168 | 0.4037 | 0.8466 |
| 0.4506 | 3.0 | 25752 | 0.3938 | 0.8473 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
tmeharizghi/code-llama-7b-text-to-sql | tmeharizghi | 2024-02-06T22:17:10Z | 0 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"dataset:generator",
"base_model:codellama/CodeLlama-7b-hf",
"base_model:adapter:codellama/CodeLlama-7b-hf",
"license:llama2",
"region:us"
]
| null | 2024-02-06T21:14:13Z | ---
license: llama2
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: codellama/CodeLlama-7b-hf
model-index:
- name: code-llama-7b-text-to-sql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# code-llama-7b-text-to-sql
This model is a fine-tuned version of [codellama/CodeLlama-7b-hf](https://huggingface.co/codellama/CodeLlama-7b-hf) on the generator dataset.
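A minimal inference sketch (assuming the adapter loads with `peft`'s `AutoPeftModelForCausalLM`; the prompt format is illustrative, not necessarily the exact training template):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Load the base model with this repository's adapter applied on top.
model = AutoPeftModelForCausalLM.from_pretrained("tmeharizghi/code-llama-7b-text-to-sql")
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

prompt = "-- Translate to SQL: list the names of all customers in Berlin\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```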
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
### Framework versions
- PEFT 0.7.2.dev0
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1 |
ALVHB95/finalsupermodelofthedestiny | ALVHB95 | 2024-02-06T22:13:25Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
]
| null | 2023-12-19T22:47:48Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
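A minimal sketch recreating this optimizer configuration in Keras (assuming TensorFlow 2.x; values mirror the table above):

```python
import tensorflow as tf

# 0.0010000000474974513 in the table is simply 1e-3 stored as float32.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=1e-3,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```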
|
DouglasPontes/2020-Q4-full_tweets | DouglasPontes | 2024-02-06T22:09:10Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:cardiffnlp/twitter-roberta-base-2019-90m",
"base_model:finetune:cardiffnlp/twitter-roberta-base-2019-90m",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| fill-mask | 2024-01-30T21:37:40Z | ---
license: mit
base_model: cardiffnlp/twitter-roberta-base-2019-90m
tags:
- generated_from_trainer
model-index:
- name: 2020-Q4-full_tweets
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2020-Q4-full_tweets
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-2019-90m](https://huggingface.co/cardiffnlp/twitter-roberta-base-2019-90m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9720
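A minimal usage sketch (assuming the standard `transformers` fill-mask pipeline; the example tweet is illustrative, and `<mask>` is the RoBERTa mask token):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="DouglasPontes/2020-Q4-full_tweets")
print(fill("I can't believe the <mask> this year!"))
```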
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.1e-07
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 2400000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| No log | 0.02 | 8000 | 2.2726 |
| 2.454 | 0.03 | 16000 | 2.1965 |
| 2.454 | 0.05 | 24000 | 2.1550 |
| 2.2713 | 0.07 | 32000 | 2.1327 |
| 2.2713 | 0.08 | 40000 | 2.1084 |
| 2.2285 | 0.1 | 48000 | 2.0920 |
| 2.2285 | 0.12 | 56000 | 2.0790 |
| 2.2116 | 0.13 | 64000 | 2.0766 |
| 2.2116 | 0.15 | 72000 | 2.0627 |
| 2.1857 | 0.17 | 80000 | 2.0600 |
| 2.1857 | 0.19 | 88000 | 2.0541 |
| 2.1716 | 0.2 | 96000 | 2.0404 |
| 2.1716 | 0.22 | 104000 | 2.0438 |
| 2.1594 | 0.24 | 112000 | 2.0344 |
| 2.1594 | 0.25 | 120000 | 2.0421 |
| 2.1584 | 0.27 | 128000 | 2.0309 |
| 2.1584 | 0.29 | 136000 | 2.0293 |
| 2.1426 | 0.3 | 144000 | 2.0262 |
| 2.1426 | 0.32 | 152000 | 2.0243 |
| 2.1494 | 0.34 | 160000 | 2.0235 |
| 2.1494 | 0.35 | 168000 | 2.0238 |
| 2.1466 | 0.37 | 176000 | 2.0158 |
| 2.1466 | 0.39 | 184000 | 2.0198 |
| 2.1389 | 0.4 | 192000 | 2.0098 |
| 2.1389 | 0.42 | 200000 | 2.0161 |
| 2.1312 | 0.44 | 208000 | 2.0185 |
| 2.1312 | 0.45 | 216000 | 2.0058 |
| 2.1404 | 0.47 | 224000 | 2.0143 |
| 2.1404 | 0.49 | 232000 | 2.0040 |
| 2.1385 | 0.51 | 240000 | 2.0060 |
| 2.1385 | 0.52 | 248000 | 2.0096 |
| 2.1356 | 0.54 | 256000 | 2.0073 |
| 2.1356 | 0.56 | 264000 | 2.0079 |
| 2.1297 | 0.57 | 272000 | 2.0068 |
| 2.1297 | 0.59 | 280000 | 2.0082 |
| 2.1319 | 0.61 | 288000 | 2.0070 |
| 2.1319 | 0.62 | 296000 | 2.0041 |
| 2.1296 | 0.64 | 304000 | 2.0038 |
| 2.1296 | 0.66 | 312000 | 2.0013 |
| 2.1289 | 0.67 | 320000 | 2.0043 |
| 2.1289 | 0.69 | 328000 | 2.0036 |
| 2.127 | 0.71 | 336000 | 2.0021 |
| 2.127 | 0.72 | 344000 | 2.0051 |
| 2.1244 | 0.74 | 352000 | 2.0006 |
| 2.1244 | 0.76 | 360000 | 2.0008 |
| 2.1271 | 0.77 | 368000 | 2.0028 |
| 2.1271 | 0.79 | 376000 | 2.0010 |
| 2.1258 | 0.81 | 384000 | 2.0008 |
| 2.1258 | 0.83 | 392000 | 1.9967 |
| 2.121 | 0.84 | 400000 | 2.0009 |
| 2.121 | 0.86 | 408000 | 1.9976 |
| 2.1288 | 0.88 | 416000 | 1.9993 |
| 2.1288 | 0.89 | 424000 | 1.9968 |
| 2.1358 | 0.91 | 432000 | 1.9999 |
| 2.1358 | 0.93 | 440000 | 1.9947 |
| 2.1339 | 0.94 | 448000 | 2.0011 |
| 2.1339 | 0.96 | 456000 | 2.0030 |
| 2.1256 | 0.98 | 464000 | 1.9871 |
| 2.1256 | 0.99 | 472000 | 1.9928 |
| 2.1304 | 1.01 | 480000 | 1.9876 |
| 2.1304 | 1.03 | 488000 | 1.9956 |
| 2.1224 | 1.04 | 496000 | 1.9979 |
| 2.1224 | 1.06 | 504000 | 1.9990 |
| 2.1274 | 1.08 | 512000 | 1.9970 |
| 2.1274 | 1.09 | 520000 | 1.9944 |
| 2.1215 | 1.11 | 528000 | 1.9924 |
| 2.1215 | 1.13 | 536000 | 1.9945 |
| 2.1246 | 1.15 | 544000 | 1.9916 |
| 2.1246 | 1.16 | 552000 | 1.9928 |
| 2.1305 | 1.18 | 560000 | 1.9927 |
| 2.1305 | 1.2 | 568000 | 1.9953 |
| 2.1204 | 1.21 | 576000 | 1.9892 |
| 2.1204 | 1.23 | 584000 | 1.9910 |
| 2.1171 | 1.25 | 592000 | 1.9920 |
| 2.1171 | 1.26 | 600000 | 1.9933 |
| 2.121 | 1.28 | 608000 | 1.9892 |
| 2.121 | 1.3 | 616000 | 1.9887 |
| 2.1238 | 1.31 | 624000 | 1.9917 |
| 2.1238 | 1.33 | 632000 | 1.9871 |
| 2.1235 | 1.35 | 640000 | 1.9852 |
| 2.1235 | 1.36 | 648000 | 1.9862 |
| 2.1266 | 1.38 | 656000 | 1.9866 |
| 2.1266 | 1.4 | 664000 | 1.9921 |
| 2.1236 | 1.41 | 672000 | 1.9807 |
| 2.1236 | 1.43 | 680000 | 1.9859 |
| 2.1278 | 1.45 | 688000 | 1.9925 |
| 2.1278 | 1.47 | 696000 | 1.9856 |
| 2.1116 | 1.48 | 704000 | 1.9882 |
| 2.1116 | 1.5 | 712000 | 1.9869 |
| 2.1128 | 1.52 | 720000 | 1.9819 |
| 2.1128 | 1.53 | 728000 | 1.9836 |
| 2.1208 | 1.55 | 736000 | 1.9819 |
| 2.1208 | 1.57 | 744000 | 1.9867 |
| 2.1248 | 1.58 | 752000 | 1.9893 |
| 2.1248 | 1.6 | 760000 | 1.9867 |
| 2.1181 | 1.62 | 768000 | 1.9826 |
| 2.1181 | 1.63 | 776000 | 1.9860 |
| 2.117 | 1.65 | 784000 | 1.9858 |
| 2.117 | 1.67 | 792000 | 1.9828 |
| 2.1203 | 1.68 | 800000 | 1.9846 |
| 2.1203 | 1.7 | 808000 | 1.9876 |
| 2.1219 | 1.72 | 816000 | 1.9816 |
| 2.1219 | 1.73 | 824000 | 1.9856 |
| 2.1226 | 1.75 | 832000 | 1.9833 |
| 2.1226 | 1.77 | 840000 | 1.9829 |
| 2.1218 | 1.79 | 848000 | 1.9870 |
| 2.1218 | 1.8 | 856000 | 1.9794 |
| 2.1207 | 1.82 | 864000 | 1.9860 |
| 2.1207 | 1.84 | 872000 | 1.9841 |
| 2.1173 | 1.85 | 880000 | 1.9851 |
| 2.1173 | 1.87 | 888000 | 1.9808 |
| 2.118 | 1.89 | 896000 | 1.9755 |
| 2.118 | 1.9 | 904000 | 1.9814 |
| 2.1085 | 1.92 | 912000 | 1.9834 |
| 2.1085 | 1.94 | 920000 | 1.9811 |
| 2.1213 | 1.95 | 928000 | 1.9837 |
| 2.1213 | 1.97 | 936000 | 1.9880 |
| 2.1254 | 1.99 | 944000 | 1.9802 |
| 2.1254 | 2.0 | 952000 | 1.9771 |
| 2.119 | 2.02 | 960000 | 1.9837 |
| 2.119 | 2.04 | 968000 | 1.9815 |
| 2.1217 | 2.05 | 976000 | 1.9791 |
| 2.1217 | 2.07 | 984000 | 1.9858 |
| 2.1196 | 2.09 | 992000 | 1.9823 |
| 2.1196 | 2.11 | 1000000 | 1.9849 |
| 2.1175 | 2.12 | 1008000 | 1.9832 |
| 2.1175 | 2.14 | 1016000 | 1.9795 |
| 2.1165 | 2.16 | 1024000 | 1.9848 |
| 2.1165 | 2.17 | 1032000 | 1.9813 |
| 2.1223 | 2.19 | 1040000 | 1.9791 |
| 2.1223 | 2.21 | 1048000 | 1.9791 |
| 2.1196 | 2.22 | 1056000 | 1.9724 |
| 2.1196 | 2.24 | 1064000 | 1.9779 |
| 2.1097 | 2.26 | 1072000 | 1.9785 |
| 2.1097 | 2.27 | 1080000 | 1.9842 |
| 2.109 | 2.29 | 1088000 | 1.9792 |
| 2.109 | 2.31 | 1096000 | 1.9804 |
| 2.1175 | 2.32 | 1104000 | 1.9811 |
| 2.1175 | 2.34 | 1112000 | 1.9813 |
| 2.1239 | 2.36 | 1120000 | 1.9742 |
| 2.1239 | 2.37 | 1128000 | 1.9759 |
| 2.1141 | 2.39 | 1136000 | 1.9835 |
| 2.1141 | 2.41 | 1144000 | 1.9814 |
| 2.1121 | 2.43 | 1152000 | 1.9753 |
| 2.1121 | 2.44 | 1160000 | 1.9796 |
| 2.1298 | 2.46 | 1168000 | 1.9720 |
| 2.1298 | 2.48 | 1176000 | 1.9822 |
| 2.1113 | 2.49 | 1184000 | 1.9772 |
| 2.1113 | 2.51 | 1192000 | 1.9779 |
| 2.1224 | 2.53 | 1200000 | 1.9760 |
| 2.1224 | 2.54 | 1208000 | 1.9823 |
| 2.1181 | 2.56 | 1216000 | 1.9836 |
| 2.1181 | 2.58 | 1224000 | 1.9754 |
| 2.1152 | 2.59 | 1232000 | 1.9764 |
| 2.1152 | 2.61 | 1240000 | 1.9771 |
| 2.1219 | 2.63 | 1248000 | 1.9774 |
| 2.1219 | 2.64 | 1256000 | 1.9790 |
| 2.115 | 2.66 | 1264000 | 1.9783 |
| 2.115 | 2.68 | 1272000 | 1.9829 |
| 2.1241 | 2.69 | 1280000 | 1.9844 |
| 2.1241 | 2.71 | 1288000 | 1.9781 |
| 2.1157 | 2.73 | 1296000 | 1.9808 |
| 2.1157 | 2.75 | 1304000 | 1.9820 |
| 2.1223 | 2.76 | 1312000 | 1.9812 |
| 2.1223 | 2.78 | 1320000 | 1.9811 |
| 2.1178 | 2.8 | 1328000 | 1.9779 |
| 2.1178 | 2.81 | 1336000 | 1.9761 |
| 2.1204 | 2.83 | 1344000 | 1.9772 |
| 2.1204 | 2.85 | 1352000 | 1.9724 |
| 2.1205 | 2.86 | 1360000 | 1.9777 |
| 2.1205 | 2.88 | 1368000 | 1.9721 |
| 2.1178 | 2.9 | 1376000 | 1.9768 |
| 2.1178 | 2.91 | 1384000 | 1.9802 |
| 2.1205 | 2.93 | 1392000 | 1.9759 |
| 2.1205 | 2.95 | 1400000 | 1.9817 |
| 2.1193 | 2.96 | 1408000 | 1.9788 |
| 2.1193 | 2.98 | 1416000 | 1.9770 |
| 2.1195 | 3.0 | 1424000 | 1.9769 |
| 2.1195 | 3.01 | 1432000 | 1.9848 |
| 2.1137 | 3.03 | 1440000 | 1.9747 |
| 2.1137 | 3.05 | 1448000 | 1.9745 |
| 2.12 | 3.07 | 1456000 | 1.9765 |
| 2.12 | 3.08 | 1464000 | 1.9776 |
| 2.123 | 3.1 | 1472000 | 1.9799 |
| 2.123 | 3.12 | 1480000 | 1.9737 |
| 2.1213 | 3.13 | 1488000 | 1.9775 |
| 2.1213 | 3.15 | 1496000 | 1.9783 |
| 2.1267 | 3.17 | 1504000 | 1.9806 |
| 2.1267 | 3.18 | 1512000 | 1.9764 |
| 2.1186 | 3.2 | 1520000 | 1.9695 |
| 2.1186 | 3.22 | 1528000 | 1.9783 |
| 2.1189 | 3.23 | 1536000 | 1.9774 |
| 2.1189 | 3.25 | 1544000 | 1.9781 |
| 2.1249 | 3.27 | 1552000 | 1.9740 |
| 2.1249 | 3.28 | 1560000 | 1.9787 |
| 2.1124 | 3.3 | 1568000 | 1.9799 |
| 2.1124 | 3.32 | 1576000 | 1.9734 |
| 2.1166 | 3.33 | 1584000 | 1.9763 |
| 2.1166 | 3.35 | 1592000 | 1.9798 |
| 2.1224 | 3.37 | 1600000 | 1.9741 |
| 2.1224 | 3.39 | 1608000 | 1.9781 |
| 2.1178 | 3.4 | 1616000 | 1.9705 |
| 2.1178 | 3.42 | 1624000 | 1.9754 |
| 2.1096 | 3.44 | 1632000 | 1.9738 |
| 2.1096 | 3.45 | 1640000 | 1.9785 |
| 2.1157 | 3.47 | 1648000 | 1.9745 |
| 2.1157 | 3.49 | 1656000 | 1.9788 |
| 2.1184 | 3.5 | 1664000 | 1.9739 |
| 2.1184 | 3.52 | 1672000 | 1.9722 |
| 2.1288 | 3.54 | 1680000 | 1.9729 |
| 2.1288 | 3.55 | 1688000 | 1.9782 |
| 2.1247 | 3.57 | 1696000 | 1.9772 |
| 2.1247 | 3.59 | 1704000 | 1.9759 |
| 2.1113 | 3.6 | 1712000 | 1.9696 |
| 2.1113 | 3.62 | 1720000 | 1.9751 |
| 2.124 | 3.64 | 1728000 | 1.9741 |
| 2.124 | 3.65 | 1736000 | 1.9780 |
| 2.1242 | 3.67 | 1744000 | 1.9777 |
| 2.1242 | 3.69 | 1752000 | 1.9724 |
| 2.1263 | 3.71 | 1760000 | 1.9775 |
| 2.1263 | 3.72 | 1768000 | 1.9779 |
| 2.1214 | 3.74 | 1776000 | 1.9786 |
| 2.1214 | 3.76 | 1784000 | 1.9770 |
| 2.1209 | 3.77 | 1792000 | 1.9809 |
| 2.1209 | 3.79 | 1800000 | 1.9754 |
| 2.1254 | 3.81 | 1808000 | 1.9769 |
| 2.1254 | 3.82 | 1816000 | 1.9782 |
| 2.1225 | 3.84 | 1824000 | 1.9799 |
| 2.1225 | 3.86 | 1832000 | 1.9781 |
| 2.1232 | 3.87 | 1840000 | 1.9752 |
| 2.1232 | 3.89 | 1848000 | 1.9749 |
| 2.1225 | 3.91 | 1856000 | 1.9787 |
| 2.1225 | 3.92 | 1864000 | 1.9765 |
| 2.118 | 3.94 | 1872000 | 1.9764 |
| 2.118 | 3.96 | 1880000 | 1.9767 |
| 2.1158 | 3.97 | 1888000 | 1.9775 |
| 2.1158 | 3.99 | 1896000 | 1.9775 |
| 2.1257 | 4.01 | 1904000 | 1.9750 |
| 2.1257 | 4.03 | 1912000 | 1.9756 |
| 2.122 | 4.04 | 1920000 | 1.9812 |
| 2.122 | 4.06 | 1928000 | 1.9753 |
| 2.1223 | 4.08 | 1936000 | 1.9788 |
| 2.1223 | 4.09 | 1944000 | 1.9773 |
| 2.1189 | 4.11 | 1952000 | 1.9798 |
| 2.1189 | 4.13 | 1960000 | 1.9724 |
| 2.1182 | 4.14 | 1968000 | 1.9813 |
| 2.1182 | 4.16 | 1976000 | 1.9821 |
| 2.118 | 4.18 | 1984000 | 1.9766 |
| 2.118 | 4.19 | 1992000 | 1.9779 |
| 2.1188 | 4.21 | 2000000 | 1.9700 |
| 2.1188 | 4.23 | 2008000 | 1.9783 |
| 2.1207 | 4.24 | 2016000 | 1.9744 |
| 2.1207 | 4.26 | 2024000 | 1.9800 |
| 2.1181 | 4.28 | 2032000 | 1.9769 |
| 2.1181 | 4.29 | 2040000 | 1.9770 |
| 2.1219 | 4.31 | 2048000 | 1.9745 |
| 2.1219 | 4.33 | 2056000 | 1.9719 |
| 2.1264 | 4.35 | 2064000 | 1.9766 |
| 2.1264 | 4.36 | 2072000 | 1.9753 |
| 2.1188 | 4.38 | 2080000 | 1.9752 |
| 2.1188 | 4.4 | 2088000 | 1.9787 |
| 2.1132 | 4.41 | 2096000 | 1.9755 |
| 2.1132 | 4.43 | 2104000 | 1.9824 |
| 2.1284 | 4.45 | 2112000 | 1.9788 |
| 2.1284 | 4.46 | 2120000 | 1.9768 |
| 2.1197 | 4.48 | 2128000 | 1.9800 |
| 2.1197 | 4.5 | 2136000 | 1.9771 |
| 2.1208 | 4.51 | 2144000 | 1.9769 |
| 2.1208 | 4.53 | 2152000 | 1.9770 |
| 2.1174 | 4.55 | 2160000 | 1.9727 |
| 2.1174 | 4.56 | 2168000 | 1.9772 |
| 2.1222 | 4.58 | 2176000 | 1.9709 |
| 2.1222 | 4.6 | 2184000 | 1.9768 |
| 2.1306 | 4.61 | 2192000 | 1.9721 |
| 2.1306 | 4.63 | 2200000 | 1.9730 |
| 2.1224 | 4.65 | 2208000 | 1.9756 |
| 2.1224 | 4.67 | 2216000 | 1.9703 |
| 2.1317 | 4.68 | 2224000 | 1.9788 |
| 2.1317 | 4.7 | 2232000 | 1.9760 |
| 2.1215 | 4.72 | 2240000 | 1.9795 |
| 2.1215 | 4.73 | 2248000 | 1.9747 |
| 2.1093 | 4.75 | 2256000 | 1.9798 |
| 2.1093 | 4.77 | 2264000 | 1.9734 |
| 2.1168 | 4.78 | 2272000 | 1.9769 |
| 2.1168 | 4.8 | 2280000 | 1.9767 |
| 2.1209 | 4.82 | 2288000 | 1.9758 |
| 2.1209 | 4.83 | 2296000 | 1.9794 |
| 2.1295 | 4.85 | 2304000 | 1.9806 |
| 2.1295 | 4.87 | 2312000 | 1.9778 |
| 2.1095 | 4.88 | 2320000 | 1.9740 |
| 2.1095 | 4.9 | 2328000 | 1.9753 |
| 2.1141 | 4.92 | 2336000 | 1.9768 |
| 2.1141 | 4.93 | 2344000 | 1.9744 |
| 2.1208 | 4.95 | 2352000 | 1.9785 |
| 2.1208 | 4.97 | 2360000 | 1.9829 |
| 2.1257 | 4.99 | 2368000 | 1.9744 |
| 2.1257 | 5.0 | 2376000 | 1.9829 |
| 2.1202 | 5.02 | 2384000 | 1.9729 |
| 2.1202 | 5.04 | 2392000 | 1.9804 |
| 2.1221 | 5.05 | 2400000 | 1.9803 |
### Framework versions
- Transformers 4.35.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.14.5
- Tokenizers 0.14.0
|
bartowski/Magicoder-S-DS-6.7B-exl2 | bartowski | 2024-02-06T22:05:03Z | 4 | 0 | transformers | [
"transformers",
"text-generation",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"license:other",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-06T21:49:40Z | ---
license: other
license_name: deepseek
datasets:
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
library_name: transformers
pipeline_tag: text-generation
quantized_by: bartowski
---
## Exllama v2 Quantizations of Magicoder-S-DS-6.7B
Using <a href="https://github.com/turboderp/exllamav2/releases/tag/v0.0.13">turboderp's ExLlamaV2 v0.0.13</a> for quantization.
# The "main" branch only contains the measurement.json, download one of the other branches for the model (see below)
Each branch contains an individual bits-per-weight quantization, with the main branch containing only the measurement.json for further conversions.
Original model: https://huggingface.co/ise-uiuc/Magicoder-S-DS-6.7B
No GQA - VRAM requirements will be higher
| Branch | Bits | lm_head bits | Size (4k) | Size (16k) | Description |
| -------------------------------------------------------------- | ---- | ------------ | --------- | ---------- | ----------- |
| [8_0](https://huggingface.co/Bartowski/Magicoder-S-DS-6.7B-exl2/tree/8_0) | 8.0 | 8.0 | 9.4 GB | 15.6 GB | Maximum quality that ExLlamaV2 can produce, near unquantized performance. |
| [6_5](https://huggingface.co/Bartowski/Magicoder-S-DS-6.7B-exl2/tree/6_5) | 6.5 | 8.0 | 8.6 GB | 14.8 GB | Near unquantized performance at vastly reduced size, **recommended**. |
| [5_0](https://huggingface.co/Bartowski/Magicoder-S-DS-6.7B-exl2/tree/5_0) | 5.0 | 6.0 | 7.2 GB | 13.4 GB | Slightly lower quality vs 6.5, but usable on 8GB cards with 4k context. |
| [4_25](https://huggingface.co/Bartowski/Magicoder-S-DS-6.7B-exl2/tree/4_25) | 4.25 | 6.0 | 6.5 GB | 12.7 GB | GPTQ equivalent bits per weight. |
| [3_5](https://huggingface.co/Bartowski/Magicoder-S-DS-6.7B-exl2/tree/3_5) | 3.5 | 6.0 | 5.9 GB | 12.1 GB | Lower quality, not recommended. |
## Download instructions
With git:
```shell
git clone --single-branch --branch 6_5 https://huggingface.co/bartowski/Magicoder-S-DS-6.7B-exl2 Magicoder-S-DS-6.7B-exl2-6_5
```
With huggingface hub (credit to TheBloke for instructions):
```shell
pip3 install huggingface-hub
```
To download the `main` (only useful if you only care about measurement.json) branch to a folder called `Magicoder-S-DS-6.7B-exl2`:
```shell
mkdir Magicoder-S-DS-6.7B-exl2
huggingface-cli download bartowski/Magicoder-S-DS-6.7B-exl2 --local-dir Magicoder-S-DS-6.7B-exl2 --local-dir-use-symlinks False
```
To download from a different branch, add the `--revision` parameter:
Linux:
```shell
mkdir Magicoder-S-DS-6.7B-exl2-6_5
huggingface-cli download bartowski/Magicoder-S-DS-6.7B-exl2 --revision 6_5 --local-dir Magicoder-S-DS-6.7B-exl2-6_5 --local-dir-use-symlinks False
```
Windows (which apparently doesn't like _ in folders sometimes?):
```shell
mkdir Magicoder-S-DS-6.7B-exl2-6.5
huggingface-cli download bartowski/Magicoder-S-DS-6.7B-exl2 --revision 6_5 --local-dir Magicoder-S-DS-6.7B-exl2-6.5 --local-dir-use-symlinks False
```
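If you prefer scripting the download instead of the CLI, the same branch can be fetched with `huggingface_hub` (a sketch mirroring the 6_5 example above):
```python
from huggingface_hub import snapshot_download

# Mirrors the CLI example above: fetch the 6.5 bpw branch into a local folder.
snapshot_download(
    repo_id="bartowski/Magicoder-S-DS-6.7B-exl2",
    revision="6_5",
    local_dir="Magicoder-S-DS-6.7B-exl2-6_5",
    local_dir_use_symlinks=False,
)
```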
Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski |
RJuro/munin-neuralbeagle-SkoleGPTOpenOrca-7b | RJuro | 2024-02-06T22:01:00Z | 32 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:RJuro/munin-neuralbeagle-7b",
"base_model:finetune:RJuro/munin-neuralbeagle-7b",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-06T21:18:19Z | ---
language:
- en
license: cc-by-nc-4.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: RJuro/munin-neuralbeagle-7b
---
# Uploaded model
- **Developed by:** RJuro
- **License:** apache-2.0
- **Finetuned from model :** RJuro/munin-neuralbeagle-7b
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth) |
dawveed/AWS-Sage | dawveed | 2024-02-06T21:59:14Z | 1 | 0 | peft | [
"peft",
"safetensors",
"cloud",
"AWS",
"amazon web services",
"amazon",
"web",
"services",
"text-generation",
"en",
"dataset:dawveed/AmazonWebServicesAWS-dataset",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"license:apache-2.0",
"region:us"
]
| text-generation | 2024-02-06T20:34:44Z | ---
license: apache-2.0
datasets:
- dawveed/AmazonWebServicesAWS-dataset
language:
- en
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- cloud
- AWS
- amazon web services
- amazon
- web
- services
library_name: peft
base_model: tiiuae/falcon-7b
---
<img src="https://huggingface.co/dawveed/AWS-Sage/resolve/main/logo.png">
# Model Card for AWS Sage
AWS Sage is a large language model (LLM) designed to assist users with questions related to Amazon Web Services (AWS) support. Powered by advanced natural language processing, it can swiftly answer inquiries about AWS support plans, billing, technical issues, service limitations, and best practices. Whether you're a seasoned AWS user or new to the platform, AWS Sage offers timely and accurate assistance, helping you navigate the complexities of AWS support with ease.
## Model Details
### Model Description
The AWS Sage is a sophisticated Language Model (LLM) meticulously trained on a vast corpus of data extracted from Amazon Web Services (AWS) customer support interactions. This cutting-edge AI system is tailored specifically to address the diverse needs of AWS users seeking assistance and guidance with their cloud computing endeavors.
Equipped with state-of-the-art natural language understanding capabilities, the AWS Sage comprehensively tackles a wide array of inquiries related to AWS support services. Whether users are grappling with billing discrepancies, troubleshooting technical issues, seeking advice on optimizing their AWS infrastructure, or navigating the intricacies of support plans, the AWS Sage is adept at swiftly delivering accurate and insightful responses.
Utilizing a combination of machine learning algorithms and deep neural networks, the AWS Sage continuously refines its knowledge base and understanding of user queries, ensuring that it remains up-to-date with the latest developments and best practices in AWS support. Its ability to comprehend nuanced questions and provide contextually relevant answers makes it an invaluable resource for both novice and seasoned AWS users alike.
Moreover, the AWS Sage is designed to enhance the overall customer support experience by offering timely assistance and empowering users to resolve issues autonomously whenever possible. By leveraging the vast reservoir of knowledge accumulated through interactions with AWS support specialists, the AWS Sage serves as a virtual assistant capable of efficiently guiding users through various support processes and procedures.
In essence, the AWS Sage represents a paradigm shift in customer support, leveraging the power of artificial intelligence to deliver personalized, responsive, and effective assistance to AWS users across the globe. Whether users are seeking quick solutions to technical queries or seeking strategic advice to optimize their AWS deployments, the AWS Sage stands ready to assist, ensuring a seamless and rewarding experience in the AWS ecosystem.
- **Developed by:** David Lopez Oñate https://www.kinqo.com
- **License:** Apache 2.0
- **Finetuned from model:** tiiuae/falcon-7b
## Uses
AWS Sage is a language model designed to assist users with inquiries related to Amazon Web Services (AWS) support. The model can be utilized in various scenarios, including:
- Technical Support: Users can rely on AWS Sage to obtain assistance with technical issues encountered while using AWS services, including troubleshooting, debugging, and resolving configuration errors.
- Service Guidance: AWS Sage can provide guidance on the selection, deployment, and optimization of AWS services, helping users make informed decisions to meet their specific business requirements.
- Billing and Account Management: Users can seek clarification on billing inquiries, account management procedures, and guidance on optimizing costs within the AWS environment.
- Support Plan Information: AWS Sage can provide information on available AWS support plans, including features, benefits, and eligibility criteria, assisting users in selecting the most appropriate support plan for their needs.
- Best Practices and Recommendations: Users can leverage AWS Sage to access best practices, recommendations, and guidelines for optimizing their AWS infrastructure, enhancing performance, security, and reliability.
- Policy and Compliance Assistance: AWS Sage can offer guidance on AWS policies, compliance requirements, and security best practices, helping users ensure adherence to industry standards and regulatory frameworks.
- Resource Documentation: Users can access documentation, FAQs, and resources related to AWS services and support offerings through AWS Sage, facilitating self-service support and learning.
- Training and Education: AWS Sage can serve as a learning resource for users seeking to expand their knowledge of AWS services, support processes, and best practices through interactive Q&A sessions and educational content.
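Since this repo ships a PEFT adapter on top of `tiiuae/falcon-7b` (see the model metadata), a minimal loading sketch might look like this; the prompt and generation settings are illustrative assumptions:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the Falcon-7B base model, then attach the AWS Sage adapter on top.
base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", device_map="auto")
model = PeftModel.from_pretrained(base, "dawveed/AWS-Sage")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")

prompt = "What support plans does AWS offer?"  # illustrative prompt
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```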
## Bias, Risks, and Limitations
- Bias in Training Data: The AWS Sage model may exhibit biases present in the training data, which could result in skewed or unfair responses to user inquiries, particularly if the data is not sufficiently diverse or representative.
- Technical Limitations: Despite its advanced capabilities, AWS Sage may face limitations in understanding complex or nuanced language, potentially leading to incomplete or inaccurate responses to user queries.
- Dependency on Training Data Quality: The effectiveness of AWS Sage relies heavily on the quality and relevance of its training data. Inaccurate or outdated data may undermine the model's ability to provide accurate and helpful support.
- Risk of Misinterpretation: AWS Sage may misinterpret the intent or context of user inquiries, especially in cases of ambiguous or colloquial language, which could result in incorrect or misleading responses.
- Lack of Emotional Intelligence: Unlike human support agents, AWS Sage may lack the ability to empathize with users or understand subtle emotional cues, potentially leading to impersonal interactions or dissatisfaction among users seeking emotional support.
- Privacy Concerns: User inquiries processed by AWS Sage may contain sensitive or confidential information, raising concerns about data privacy and security, especially if proper safeguards are not in place to protect user data.
- Limited Domain Expertise: While knowledgeable about AWS support topics, AWS Sage may lack expertise in certain specialized areas or industries, which could limit its ability to provide comprehensive support in those domains.
- Overreliance on Automation: Users may become overly reliant on AWS Sage for support, potentially overlooking the value of human interaction or alternative support channels, which could lead to a loss of human touch in customer service.
- Inability to Handle Unforeseen Scenarios: AWS Sage may struggle to handle novel or unforeseen support scenarios not covered in its training data, potentially leading to inadequate or ineffective responses in rapidly evolving situations.
- Technical Failures or Errors: Like any AI system, AWS Sage is susceptible to technical failures, errors, or malfunctions, which could disrupt service delivery or lead to unintended consequences for users relying on its support. Regular monitoring and maintenance are essential to mitigate these risks. |
jayeshvpatil/tinyllama-medqa-jp-v1 | jayeshvpatil | 2024-02-06T21:57:01Z | 11 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"generated_from_trainer",
"conversational",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:finetune:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-02T19:59:18Z | ---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: tinyllama-medqa-jp-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-medqa-jp-v1
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
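A minimal inference sketch for the fine-tuned checkpoint is shown below; the medical question is an illustrative assumption, and the chat template comes from the TinyLlama base model:
```python
from transformers import pipeline

# Illustrative usage sketch; prompt and generation settings are assumptions.
pipe = pipeline("text-generation", model="jayeshvpatil/tinyllama-medqa-jp-v1")
messages = [{"role": "user", "content": "What are common symptoms of anemia?"}]
prompt = pipe.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(pipe(prompt, max_new_tokens=128)[0]["generated_text"])
```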
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 250
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
nash5657/vit-base-patch16-224-finetuned-lora-food | nash5657 | 2024-02-06T21:46:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"vit",
"arxiv:1910.09700",
"base_model:google/vit-base-patch16-224",
"base_model:adapter:google/vit-base-patch16-224",
"region:us"
]
| null | 2024-02-06T16:17:11Z | ---
library_name: peft
base_model: google/vit-base-patch16-224
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.8.2 |
frntcx/q-learning-frozenLake | frntcx | 2024-02-06T21:42:17Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-02-06T21:42:16Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-learning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="frntcx/q-learning", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
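Beyond just loading, a greedy rollout sketch is shown below; it assumes the pickled dictionary exposes `qtable` and `env_id` keys, as in the Hugging Face Deep RL Course, which is an assumption about this repo's file:
```python
import gymnasium as gym
import numpy as np

# Greedy evaluation sketch for the no-slippery FrozenLake variant.
env = gym.make(model["env_id"], is_slippery=False)
state, info = env.reset()
done, total_reward = False, 0.0
while not done:
    action = int(np.argmax(model["qtable"][state]))  # pick the greedy action
    state, reward, terminated, truncated, info = env.step(action)
    total_reward += reward
    done = terminated or truncated
print(f"Episode return: {total_reward}")
```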
|
jeguinoa/my_spanish_model | jeguinoa | 2024-02-06T21:39:12Z | 13 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"question-answering",
"generated_from_trainer",
"base_model:dccuchile/distilbert-base-spanish-uncased",
"base_model:finetune:dccuchile/distilbert-base-spanish-uncased",
"endpoints_compatible",
"region:us"
]
| question-answering | 2024-02-06T16:32:18Z | ---
base_model: dccuchile/distilbert-base-spanish-uncased
tags:
- generated_from_trainer
model-index:
- name: my_spanish_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_spanish_model
This model is a fine-tuned version of [dccuchile/distilbert-base-spanish-uncased](https://huggingface.co/dccuchile/distilbert-base-spanish-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4964
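A minimal usage sketch with the question-answering pipeline (the Spanish question/context pair is illustrative only):
```python
from transformers import pipeline

# Illustrative Spanish QA example; the question/context pair is an assumption.
qa = pipeline("question-answering", model="jeguinoa/my_spanish_model")
result = qa(
    question="¿Dónde vive Ana?",
    context="Ana vive en Madrid y trabaja como ingeniera.",
)
print(result["answer"], result["score"])
```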
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 156 | 1.9635 |
| No log | 2.0 | 312 | 0.7337 |
| No log | 3.0 | 468 | 0.4964 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
apry/ppo-LunarLander-v2 | apry | 2024-02-06T21:32:23Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-02-06T21:32:03Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.53 +/- 17.16
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual SB3 Hub convention of `<algo>-<env>.zip`):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; check the repo's file list if it differs.
checkpoint = load_from_hub(repo_id="apry/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
gowitheflow/p9-iter3 | gowitheflow | 2024-02-06T21:27:56Z | 31 | 0 | transformers | [
"transformers",
"pytorch",
"pixel",
"text-classification",
"generated_from_trainer",
"dataset:allnli",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-06T21:23:18Z | ---
tags:
- generated_from_trainer
datasets:
- allnli
model-index:
- name: 00-allnli-p9-allnli-p9-allnli-p9-allnli-old-best
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 00-allnli-p9-allnli-p9-allnli-p9-allnli-old-best
This model is a fine-tuned version of [00-p9-allnli-p9-allnli-p9-allnli-old-best/checkpoint-26000](https://huggingface.co/00-p9-allnli-p9-allnli-p9-allnli-old-best/checkpoint-26000) on the ALLNLI dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 16
- seed: 1
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- training_steps: 2600
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 2.14.6
- Tokenizers 0.15.0
|
lipcut/shizhi-twilight-7B-GGUF | lipcut | 2024-02-06T21:24:07Z | 9 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"en",
"zh",
"license:openrail",
"endpoints_compatible",
"region:us",
"conversational"
]
| text-generation | 2024-02-06T20:58:19Z |
---
license: openrail
pipeline_tag: text-generation
library_name: transformers
language:
- en
- zh
---
Thanks to @s3nh for the great quantization notebook code.
## Original model card
Buy @s3nh a coffee if you like this project ;)
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a>
#### Description
GGUF Format model files for [shizhi-twilight-7B](https://huggingface.co/lipcut/shizhi-twilight-7B).
### GGUF Specs
GGUF is a format based on the existing GGJT, but makes a few changes to the format to make it more extensible and easier to use. The following features are desired:
- Single-file deployment: models can be easily distributed and loaded, and do not require any external files for additional information.
- Extensible: new features can be added to GGML-based executors/new information can be added to GGUF models without breaking compatibility with existing models.
- mmap compatibility: models can be loaded using mmap for fast loading and saving.
- Easy to use: models can be easily loaded and saved using a small amount of code, with no need for external libraries, regardless of the language used.
- Full information: all information needed to load a model is contained in the model file, and no additional information needs to be provided by the user.

The key difference between GGJT and GGUF is the use of a key-value structure for the hyperparameters (now referred to as metadata), rather than a list of untyped values. This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for inference or for identifying the model.
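As a sketch of how these GGUF files can be consumed, `llama-cpp-python` loads them directly (the file name below is an assumption about which quantization you downloaded):
```python
from llama_cpp import Llama

# File name is an assumption; use whichever GGUF quantization you downloaded.
llm = Llama(model_path="shizhi-twilight-7b.Q4_K_M.gguf", n_ctx=4096)
out = llm("什麼是大型語言模型?", max_tokens=256)  # "What is a large language model?"
print(out["choices"][0]["text"])
```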
# Original model card

# 試製-暮光-7B
試製-暮光-7B ("shizhi-twilight-7B") was generated by merging the following models with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [MediaTek-Research/Breeze-7B-Instruct-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1)
* [argilla/CapybaraHermes-2.5-Mistral-7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B)
This is an experimental model intended to test whether high-quality model fine-tuning applied to one language can transfer to another (in this model, English to Chinese).
# shizhi-twilight-7B
shizhi-twilight-7B is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [MediaTek-Research/Breeze-7B-Instruct-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1)
* [argilla/CapybaraHermes-2.5-Mistral-7B](https://huggingface.co/argilla/CapybaraHermes-2.5-Mistral-7B)
This is an experiment product on checking whether high quality fine-tuning on one language (English) could be transferred to another language (Mandarin) leveraging Slerp merge method.
## 🧩 Configuration
```yaml
slices:
- sources:
- model: MediaTek-Research/Breeze-7B-Instruct-v0_1
layer_range: [0, 32]
- model: argilla/CapybaraHermes-2.5-Mistral-7B
layer_range: [0, 32]
merge_method: slerp
base_model: MediaTek-Research/Breeze-7B-Instruct-v0_1
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
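To actually run a merge from a config like the one above, mergekit's CLI can be used (a sketch; the config path and output directory are placeholders):
```shell
pip install mergekit
mergekit-yaml config.yaml ./shizhi-twilight-7B
```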
## 💻 Usage
```python
!pip install -qU transformers accelerate
from transformers import AutoTokenizer
import transformers
import torch
model = "lipcut/shizhi-twilight-7B"
messages = [{"role": "user", "content": "什麼是大型語言模型?"}]
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
"text-generation",
model=model,
torch_dtype=torch.float16,
device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
|
Jechto/e5-dansk-test-0.1 | Jechto | 2024-02-06T21:23:32Z | 458 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"mteb",
"dataset:ms_marco",
"model-index",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
]
| sentence-similarity | 2024-02-06T20:48:18Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- mteb
datasets:
- ms_marco
model-index:
- name: E:\HuggingFaceDataDownloader\results\finetuned_models\2000\2000_finetune
results:
- task:
type: Classification
dataset:
type: DDSC/angry-tweets
name: MTEB AngryTweetsClassification
config: default
split: test
revision: 20b0e6081892e78179356fada741b7afa381443d
metrics:
- type: accuracy
value: 56.084049665711554
- type: f1
value: 55.198013156852625
- task:
type: BitextMining
dataset:
type: strombergnlp/bornholmsk_parallel
name: MTEB BornholmBitextMining
config: default
split: test
revision: 3bc5cfb4ec514264fe2db5615fac9016f7251552
metrics:
- type: accuracy
value: 47
- type: f1
value: 37.97365079365079
- type: precision
value: 34.48333333333334
- type: recall
value: 47
- task:
type: Classification
dataset:
type: danish_political_comments
name: MTEB DanishPoliticalCommentsClassification
config: default
split: train
revision: edbb03726c04a0efab14fc8c3b8b79e4d420e5a1
metrics:
- type: accuracy
value: 40.88398556758257
- type: f1
value: 37.604524785367076
- task:
type: Classification
dataset:
type: DDSC/lcc
name: MTEB LccSentimentClassification
config: default
split: test
revision: de7ba3406ee55ea2cc52a0a41408fa6aede6d3c6
metrics:
- type: accuracy
value: 59.599999999999994
- type: f1
value: 59.0619246469949
- task:
type: Classification
dataset:
type: strombergnlp/nordic_langid
name: MTEB NordicLangClassification
config: default
split: test
revision: e254179d18ab0165fdb6dbef91178266222bee2a
metrics:
- type: accuracy
value: 61.00333333333333
- type: f1
value: 60.45633325804296
- task:
type: Classification
dataset:
type: ScandEval/scala-da
name: MTEB ScalaDaClassification
config: default
split: test
revision: 1de08520a7b361e92ffa2a2201ebd41942c54675
metrics:
- type: accuracy
value: 50.43457031250001
- type: ap
value: 50.22017546538257
- type: f1
value: 50.03426509926491
---
# e5-dansk-test
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search.
The model was trained on the MS MARCO English dataset machine-translated into Danish, to test whether machine translation of high-quality datasets into another language produces good results.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Dette er en dansk sætning", "Dette er en også en dansk sætning"]
model = SentenceTransformer('Jechto/e5-dansk-test-0.1')
embeddings = model.encode(sentences)
print(embeddings)
```
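Since the embeddings are meant for tasks like semantic search, a small similarity check can be layered on top (the Danish sentence pair below is illustrative):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('Jechto/e5-dansk-test-0.1')
emb = model.encode(["Hvad er hovedstaden i Danmark?", "København er Danmarks hovedstad."])
print(util.cos_sim(emb[0], emb[1]))  # cosine similarity of the two sentences
```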
## Training
The model was trained with the parameters:
**DataLoader**:
`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 10327 with parameters:
```
{'batch_size': 16}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 2000,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adam.Adam'>",
"optimizer_params": {
"lr": 1e-05
},
"scheduler": "warmupconstant",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
RJuro/munin-neuralbeagle-OpenOrca22k-7b | RJuro | 2024-02-06T21:17:49Z | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"mistral",
"trl",
"en",
"base_model:RJuro/munin-neuralbeagle-7b",
"base_model:finetune:RJuro/munin-neuralbeagle-7b",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-06T21:14:10Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
base_model: RJuro/munin-neuralbeagle-7b
---
# Uploaded model
- **Developed by:** RJuro
- **License:** apache-2.0
- **Finetuned from model :** RJuro/munin-neuralbeagle-7b
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
Eric1031/my_awesome_qa_model | Eric1031 | 2024-02-06T21:14:51Z | 45 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"question-answering",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| question-answering | 2024-02-06T15:53:23Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: Eric1031/my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Eric1031/my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.1564
- Validation Loss: 2.1373
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 3.6084 | 2.5739 | 0 |
| 2.1564 | 2.1373 | 1 |
### Framework versions
- Transformers 4.35.2
- TensorFlow 2.15.0
- Datasets 2.16.1
- Tokenizers 0.15.1
|
TheLastBen/Filmic | TheLastBen | 2024-02-06T21:13:38Z | 643 | 11 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:creativeml-openrail-m",
"region:us"
]
| text-to-image | 2024-02-06T18:05:06Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
base_model: stabilityai/stable-diffusion-xl-base-1.0
---
### Filmic Style
#### SDXL LoRA by TheLastBen
#### Prompts to start with:
Any prompt; the "pov" token is optional.
---
Trained using https://github.com/TheLastBen/fast-stable-diffusion SDXL trainer.
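A minimal way to try this LoRA with diffusers might look like the following (the prompt is illustrative; the LoRA weight file is resolved from the repo):
```python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base pipeline, then attach the Filmic LoRA weights.
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("TheLastBen/Filmic")
image = pipe("pov photo of a rainy street at dusk").images[0]  # illustrative prompt
image.save("filmic.png")
```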
#### Sample pictures:
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
.webp)
|
zzz99/deepseek-7B-instr-1.5-qlora-11k-merged | zzz99 | 2024-02-06T21:09:30Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2024-02-06T21:05:49Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
- PEFT 0.4.0
|
Militeee/ppo-LunarLander-v2 | Militeee | 2024-02-06T21:09:03Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-02-06T21:08:44Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 239.11 +/- 10.81
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual SB3 Hub convention of `<algo>-<env>.zip`):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is an assumption; check the repo's file list if it differs.
checkpoint = load_from_hub(repo_id="Militeee/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
osanseviero/test-repo-thumbnail | osanseviero | 2024-02-06T21:01:01Z | 0 | 0 | null | [
"region:us"
]
| null | 2024-02-06T21:00:23Z | ---
thumbnail: https://cdn-uploads.huggingface.co/production/uploads/65c25abf65086cabf4d3d741/hFo33LoYzWiR-DwUMHZOq.png
--- |
w601sxs/b1ade-1b | w601sxs | 2024-02-06T20:56:18Z | 1,532 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gpt_neox",
"text-generation",
"b1ade",
"en",
"dataset:Open-Orca/OpenOrca",
"dataset:WizardLM/WizardLM_evol_instruct_V2_196k",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2023-07-17T23:27:51Z | ---
language: en
license: mit
tags:
- b1ade
datasets:
- Open-Orca/OpenOrca
- WizardLM/WizardLM_evol_instruct_V2_196k
widget:
- text: "context: <math>\n question: <Evaluate -24 + -24 + 15*2.>\n answer: <"
example_title: Math
- text: "context: <You are a helpful assistant, who always provide explanation. Think\
\ like you are answering to a five year old.>\n question: <Determine the sentiment:\n\
\nWe viewed the vcr and found it to be fascinating. Not knowing anything about\
\ this true story, I thought: Oh, no, P.Brosnan as an American Indian, what a\
\ bad choice until I discovered the truth about Grey Owl. The film does a good\
\ job of demonstrating the dignity of these native peoples and undermining the\
\ racist myths about them. And Annie Galipeau, WOW, what a beauty, and very convincing\
\ as an Indian woman (I believe she is French-Canadian; she sure reverts to the\
\ all-too familiar speech of such). In spite, of Brosnan's detached, grunting\
\ style, in the end he comes through convincingly as a passionate, dedicated man.\
\ The plot is a little weak in demostrating his conversion from trapper to animal\
\ coservationist. Good film, highly recommended.>\n answer: <"
example_title: Sentiment
inference:
  parameters:
    max_new_tokens: 512
    top_p: 0.99
---
# B1ade
Please see https://huggingface.co/w601sxs/b1ade-1b-bf16
|
TejasDhangar/my-pet-dog | TejasDhangar | 2024-02-06T20:53:08Z | 17 | 0 | diffusers | [
"diffusers",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
]
| text-to-image | 2024-02-06T20:49:33Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Dog Dreambooth model trained by TejasDhangar following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 22US17636AI005
Sample pictures of this concept:
.jpeg)
.jpeg)
.jpeg)
.jpeg)
.jpeg)
.jpeg)
|
zzz99/deepseek-7B-instr-1.5-qlora-11k | zzz99 | 2024-02-06T20:48:20Z | 0 | 0 | peft | [
"peft",
"region:us"
]
| null | 2024-02-06T20:48:17Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0
|
Giraudet/q-FrozenLake-v1-4x4-noSlippery | Giraudet | 2024-02-06T20:39:04Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-02-06T20:39:01Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Giraudet/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
abhiramk6/distilhubert-ft-keyword-spotting-finetuned-ks-ob | abhiramk6 | 2024-02-06T20:30:57Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"hubert",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"base_model:anton-l/distilhubert-ft-keyword-spotting",
"base_model:finetune:anton-l/distilhubert-ft-keyword-spotting",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| audio-classification | 2024-02-06T18:26:26Z | ---
license: apache-2.0
base_model: anton-l/distilhubert-ft-keyword-spotting
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: distilhubert-ft-keyword-spotting-finetuned-ks-ob
results:
- task:
name: Audio Classification
type: audio-classification
dataset:
name: audiofolder
type: audiofolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9850014526438118
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilhubert-ft-keyword-spotting-finetuned-ks-ob
This model is a fine-tuned version of [anton-l/distilhubert-ft-keyword-spotting](https://huggingface.co/anton-l/distilhubert-ft-keyword-spotting) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0459
- Accuracy: 0.9850
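A minimal inference sketch using the audio-classification pipeline (the audio path is a placeholder):
```python
from transformers import pipeline

# Path is a placeholder; any short 16 kHz mono audio clip should work.
clf = pipeline("audio-classification", model="abhiramk6/distilhubert-ft-keyword-spotting-finetuned-ks-ob")
print(clf("sample.wav"))
```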
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1536 | 1.0 | 215 | 0.1282 | 0.9606 |
| 0.0809 | 2.0 | 430 | 0.0752 | 0.9763 |
| 0.0839 | 3.0 | 645 | 0.0638 | 0.9783 |
| 0.0536 | 4.0 | 861 | 0.0588 | 0.9794 |
| 0.0412 | 4.99 | 1075 | 0.0459 | 0.9850 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
DaRkSpyro/MackMallardMigration | DaRkSpyro | 2024-02-06T20:28:12Z | 0 | 0 | flair | [
"flair",
"music",
"en",
"dataset:HuggingFaceM4/WebSight",
"license:apache-2.0",
"region:us"
]
| null | 2024-02-06T20:26:21Z | ---
license: apache-2.0
datasets:
- HuggingFaceM4/WebSight
language:
- en
metrics:
- accuracy
library_name: flair
tags:
- music
--- |
maidacundo/phi-moe-loras | maidacundo | 2024-02-06T20:26:06Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"nlp",
"code",
"custom_code",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"8-bit",
"region:us"
]
| text-generation | 2024-01-23T19:25:41Z | ---
inference: false
license: mit
license_link: https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE
language:
- en
pipeline_tag: text-generation
tags:
- nlp
- code
---
## Model Summary
Phi-2 is a Transformer with **2.7 billion** parameters. It was trained using the same data sources as [Phi-1.5](https://huggingface.co/microsoft/phi-1.5), augmented with a new data source that consists of various NLP synthetic texts and filtered websites (for safety and educational value). When assessed against benchmarks testing common sense, language understanding, and logical reasoning, Phi-2 showcased a nearly state-of-the-art performance among models with less than 13 billion parameters.
Our model hasn't been fine-tuned through reinforcement learning from human feedback. The intention behind crafting this open-source model is to provide the research community with a non-restricted small model to explore vital safety challenges, such as reducing toxicity, understanding societal biases, enhancing controllability, and more.
## How to Use
Phi-2 has been integrated in the development version (4.37.0.dev) of `transformers`. Until the official version is released through `pip`, ensure that you are doing one of the following:
* When loading the model, ensure that `trust_remote_code=True` is passed as an argument of the `from_pretrained()` function.
* Update your local `transformers` to the development version: `pip uninstall -y transformers && pip install git+https://github.com/huggingface/transformers`. The previous command is an alternative to cloning and installing from the source.
The current `transformers` version can be verified with: `pip list | grep transformers`.
## Intended Uses
Given the nature of the training data, the Phi-2 model is best suited for prompts using the QA format, the chat format, and the code format.
### QA Format:
You can provide the prompt as a standalone question as follows:
```markdown
Write a detailed analogy between mathematics and a lighthouse.
```
where the model generates the text after "." .
To encourage the model to write more concise answers, you can also try the following QA format using "Instruct: \<prompt\>\nOutput:"
```markdown
Instruct: Write a detailed analogy between mathematics and a lighthouse.
Output: Mathematics is like a lighthouse. Just as a lighthouse guides ships safely to shore, mathematics provides a guiding light in the world of numbers and logic. It helps us navigate through complex problems and find solutions. Just as a lighthouse emits a steady beam of light, mathematics provides a consistent framework for reasoning and problem-solving. It illuminates the path to understanding and helps us make sense of the world around us.
```
where the model generates the text after "Output:".
### Chat Format:
```markdown
Alice: I don't know why, I'm struggling to maintain focus while studying. Any suggestions?
Bob: Well, have you tried creating a study schedule and sticking to it?
Alice: Yes, I have, but it doesn't seem to help much.
Bob: Hmm, maybe you should try studying in a quiet environment, like the library.
Alice: ...
```
where the model generates the text after the first "Bob:".
### Code Format:
```python
def print_prime(n):
"""
Print all primes between 1 and n
"""
primes = []
for num in range(2, n+1):
is_prime = True
for i in range(2, int(math.sqrt(num))+1):
if num % i == 0:
is_prime = False
break
if is_prime:
primes.append(num)
print(primes)
```
where the model generates the text after the comments.
**Notes:**
* Phi-2 is intended for QA, chat, and code purposes. The model-generated text/code should be treated as a starting point rather than a definitive solution for potential use cases. Users should be cautious when employing these models in their applications.
* Direct adoption for production tasks without evaluation is out of scope of this project. As a result, the Phi-2 model has not been tested to ensure that it performs adequately for any production-level application. Please refer to the limitation sections of this document for more details.
* If you are using `transformers<4.37.0`, always load the model with `trust_remote_code=True` to prevent side-effects.
## Sample Code
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
torch.set_default_device("cuda")
model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2", trust_remote_code=True)
inputs = tokenizer('''def print_prime(n):
"""
Print all primes between 1 and n
"""''', return_tensors="pt", return_attention_mask=False)
outputs = model.generate(**inputs, max_length=200)
text = tokenizer.batch_decode(outputs)[0]
print(text)
```
## Limitations of Phi-2
* Generate Inaccurate Code and Facts: The model may produce incorrect code snippets and statements. Users should treat these outputs as suggestions or starting points, not as definitive or accurate solutions.
* Limited Scope for code: Majority of Phi-2 training data is based in Python and use common packages such as "typing, math, random, collections, datetime, itertools". If the model generates Python scripts that utilize other packages or scripts in other languages, we strongly recommend users manually verify all API uses.
* Unreliable Responses to Instruction: The model has not undergone instruction fine-tuning. As a result, it may struggle or fail to adhere to intricate or nuanced instructions provided by users.
* Language Limitations: The model is primarily designed to understand standard English. Informal English, slang, or any other languages might pose challenges to its comprehension, leading to potential misinterpretations or errors in response.
* Potential Societal Biases: Phi-2 is not entirely free from societal biases despite efforts in assuring training data safety. There's a possibility it may generate content that mirrors these societal biases, particularly if prompted or instructed to do so. We urge users to be aware of this and to exercise caution and critical thinking when interpreting model outputs.
* Toxicity: Despite being trained with carefully selected data, the model can still produce harmful content if explicitly prompted or instructed to do so. We chose to release the model to help the open-source community develop the most effective ways to reduce the toxicity of a model directly after pretraining.
* Verbosity: Phi-2 being a base model often produces irrelevant or extra text and responses following its first answer to user prompts within a single turn. This is due to its training dataset being primarily textbooks, which results in textbook-like responses.
## Training
### Model
* Architecture: a Transformer-based model with next-word prediction objective
* Context length: 2048 tokens
* Dataset size: 250B tokens, combination of NLP synthetic data created by AOAI GPT-3.5 and filtered web data from Falcon RefinedWeb and SlimPajama, which was assessed by AOAI GPT-4.
* Training tokens: 1.4T tokens
* GPUs: 96xA100-80G
* Training time: 14 days
### Software
* [PyTorch](https://github.com/pytorch/pytorch)
* [DeepSpeed](https://github.com/microsoft/DeepSpeed)
* [Flash-Attention](https://github.com/HazyResearch/flash-attention)
### License
The model is licensed under the [MIT license](https://huggingface.co/microsoft/phi-2/resolve/main/LICENSE).
## Trademarks
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow [Microsoft’s Trademark & Brand Guidelines](https://www.microsoft.com/en-us/legal/intellectualproperty/trademarks). Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos are subject to those third-party’s policies. |
QT321/quynh_deberta-v3-Base-finetuned-AI_req_5 | QT321 | 2024-02-06T20:15:53Z | 4 | 0 | transformers | [
"transformers",
"tf",
"deberta-v2",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-06T20:15:32Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: quynh_deberta-v3-Base-finetuned-AI_req_5
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# quynh_deberta-v3-Base-finetuned-AI_req_5
This model is a fine-tuned version of [microsoft/deberta-v3-Base](https://huggingface.co/microsoft/deberta-v3-Base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0813
- Train Accuracy: 0.9739
- Validation Loss: 0.9358
- Validation Accuracy: 0.8190
- Epoch: 12
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2730, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.8536 | 0.6181 | 0.7137 | 0.6952 | 0 |
| 0.6579 | 0.7349 | 0.5152 | 0.8190 | 1 |
| 0.5153 | 0.7830 | 0.4833 | 0.8571 | 2 |
| 0.4369 | 0.8022 | 0.5064 | 0.8286 | 3 |
| 0.3922 | 0.8255 | 0.6123 | 0.7905 | 4 |
| 0.3616 | 0.8352 | 0.4985 | 0.8381 | 5 |
| 0.3034 | 0.8640 | 0.5926 | 0.8000 | 6 |
| 0.3187 | 0.8654 | 0.5392 | 0.8286 | 7 |
| 0.2134 | 0.9080 | 0.5991 | 0.8095 | 8 |
| 0.2041 | 0.9148 | 0.8289 | 0.8190 | 9 |
| 0.1532 | 0.9464 | 0.7176 | 0.8381 | 10 |
| 0.1690 | 0.9313 | 0.8189 | 0.8190 | 11 |
| 0.0813 | 0.9739 | 0.9358 | 0.8190 | 12 |
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.9.1
- Datasets 2.16.1
- Tokenizers 0.13.3
|
ArmaanSeth/Llama-2-7b-chat-hf-shards-mental-health-counselling | ArmaanSeth | 2024-02-06T20:10:49Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-06T19:31:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
caliex/all-MiniLM-L6-v2-f16.gguf | caliex | 2024-02-06T19:57:33Z | 348 | 8 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-06T19:55:46Z | ---
license: apache-2.0
---
all-MiniLM-L6-v2-f16.gguf: model uploaded to Hugging Face from GPT4All.
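As a usage sketch (not part of the original card): the gpt4all Python bindings ship an `Embed4All` helper whose default embedding model is this same file, so the snippet below should work without pointing at a local path. The 384-dimension note is an assumption based on the underlying MiniLM-L6 architecture.

```python
from gpt4all import Embed4All

# Embed4All defaults to the all-MiniLM-L6-v2-f16.gguf embedding model,
# downloading it on first use if it is not already cached.
embedder = Embed4All()
vector = embedder.embed("The quick brown fox jumps over the lazy dog")
print(len(vector))  # expected: 384 (MiniLM-L6 embedding width)
``` |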
LeKyks1/dqn-SpaceInvadersNoFrameskip-v4 | LeKyks1 | 2024-02-06T19:55:49Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-02-06T01:18:39Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 329.00 +/- 157.97
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga LeKyks1 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga LeKyks1 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga LeKyks1
```
## Hyperparameters
```python
OrderedDict([('batch_size', 16),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.01),
('learning_starts', 1000),
('n_timesteps', 800000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 500),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
tboudou/MiniCPM-2B-sft-int4-finetuned-tosql | tboudou | 2024-02-06T19:48:40Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:openbmb/MiniCPM-2B-sft-int4",
"base_model:adapter:openbmb/MiniCPM-2B-sft-int4",
"region:us"
]
| null | 2024-02-06T15:20:43Z | ---
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: openbmb/MiniCPM-2B-sft-int4
model-index:
- name: MiniCPM-2B-sft-int4-finetuned-tosql
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MiniCPM-2B-sft-int4-finetuned-tosql
This model is a fine-tuned version of [openbmb/MiniCPM-2B-sft-int4](https://huggingface.co/openbmb/MiniCPM-2B-sft-int4) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.8.2
- Transformers 4.37.0
- Pytorch 2.1.2.post301
- Datasets 2.16.1
- Tokenizers 0.15.1 |
varun-v-rao/opt-1.3b-lora-3.15M-snli-model1 | varun-v-rao | 2024-02-06T19:47:54Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"opt",
"text-classification",
"generated_from_trainer",
"base_model:facebook/opt-1.3b",
"base_model:finetune:facebook/opt-1.3b",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-06T03:53:05Z | ---
license: other
base_model: facebook/opt-1.3b
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: opt-1.3b-lora-3.15M-snli-model1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-1.3b-lora-3.15M-snli-model1
This model is a fine-tuned version of [facebook/opt-1.3b](https://huggingface.co/facebook/opt-1.3b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6728
- Accuracy: 0.7625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 97
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3527 | 1.0 | 4292 | 0.2811 | 0.8977 |
| 0.3231 | 2.0 | 8584 | 0.2631 | 0.9042 |
| 0.3141 | 3.0 | 12876 | 0.2580 | 0.9067 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
QT321/quynh_deberta-v3-Base-finetuned-AI_req_3 | QT321 | 2024-02-06T19:45:01Z | 44 | 0 | transformers | [
"transformers",
"tf",
"deberta-v2",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-06T19:44:35Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: quynh_deberta-v3-Base-finetuned-AI_req_3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# quynh_deberta-v3-Base-finetuned-AI_req_3
This model is a fine-tuned version of [microsoft/deberta-v3-Base](https://huggingface.co/microsoft/deberta-v3-Base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0121
- Train Accuracy: 0.9986
- Validation Loss: 1.0930
- Validation Accuracy: 0.8190
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2730, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.8969 | 0.6099 | 0.7640 | 0.7048 | 0 |
| 0.7508 | 0.6951 | 0.7178 | 0.7048 | 1 |
| 0.6149 | 0.7404 | 0.5981 | 0.7714 | 2 |
| 0.5077 | 0.7720 | 0.5059 | 0.8095 | 3 |
| 0.4357 | 0.8036 | 0.4621 | 0.8095 | 4 |
| 0.3671 | 0.8407 | 0.4859 | 0.8190 | 5 |
| 0.2844 | 0.8777 | 0.6214 | 0.8000 | 6 |
| 0.2789 | 0.8860 | 0.5499 | 0.8190 | 7 |
| 0.1938 | 0.9107 | 0.8163 | 0.7810 | 8 |
| 0.1773 | 0.9231 | 0.8831 | 0.7905 | 9 |
| 0.1308 | 0.9547 | 0.6316 | 0.8095 | 10 |
| 0.0803 | 0.9712 | 0.8531 | 0.8286 | 11 |
| 0.0544 | 0.9849 | 0.7941 | 0.7810 | 12 |
| 0.0285 | 0.9931 | 0.9530 | 0.8190 | 13 |
| 0.0121 | 0.9986 | 1.0930 | 0.8190 | 14 |
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.9.1
- Datasets 2.16.1
- Tokenizers 0.13.3
|
Jayem-11/mistral_7b_malawi | Jayem-11 | 2024-02-06T19:43:42Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-02-06T13:42:24Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
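Absent the details above, a minimal sketch inferred from the repo tags (`mistral`, `4-bit`, `bitsandbytes`); the quantization settings and the example prompt are assumptions, not values from the original training run:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "Jayem-11/mistral_7b_malawi"
quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=quant_config, device_map="auto"
)

# Generate a short completion from an illustrative prompt.
inputs = tokenizer("Tell me about Malawi.", return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```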
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
HugFace4711/Hug1533 | HugFace4711 | 2024-02-06T19:32:12Z | 0 | 0 | null | [
"dataset:fka/awesome-chatgpt-prompts",
"region:us"
]
| null | 2024-02-06T19:25:14Z | ---
datasets:
- fka/awesome-chatgpt-prompts
---
Always answer me correctly and list at least five points to explain everything.
Always answer in German. |
ArmaanSeth/Llama-2-7b-chat-hf-adapters-mental-health-counselling | ArmaanSeth | 2024-02-06T19:30:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-06T19:30:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
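Given the repo name, a plausible loading sketch for a PEFT adapter: the base checkpoint below is an assumption inferred from the adapter's name, so verify it against the adapter config before use:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-chat-hf"  # assumed base model, not confirmed by the card
adapter_id = "ArmaanSeth/Llama-2-7b-chat-hf-adapters-mental-health-counselling"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Attach the fine-tuned adapter weights on top of the base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
```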
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
QT321/quynh_deberta-v3-Base-finetuned-AI_req_2 | QT321 | 2024-02-06T19:30:30Z | 44 | 0 | transformers | [
"transformers",
"tf",
"deberta-v2",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-06T18:47:22Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: quynh_deberta-v3-Base-finetuned-AI_req_2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# quynh_deberta-v3-Base-finetuned-AI_req_2
This model is a fine-tuned version of [microsoft/deberta-v3-Base](https://huggingface.co/microsoft/deberta-v3-Base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0324
- Train Accuracy: 0.9959
- Validation Loss: 0.9053
- Validation Accuracy: 0.8286
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2730, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.8413 | 0.6593 | 0.7133 | 0.7143 | 0 |
| 0.6659 | 0.75 | 0.5795 | 0.8000 | 1 |
| 0.5713 | 0.7692 | 0.5171 | 0.8476 | 2 |
| 0.4814 | 0.7967 | 0.4655 | 0.8381 | 3 |
| 0.4366 | 0.8118 | 0.4368 | 0.8476 | 4 |
| 0.3888 | 0.8228 | 0.4844 | 0.8190 | 5 |
| 0.3282 | 0.8571 | 0.5208 | 0.8286 | 6 |
| 0.2678 | 0.8723 | 0.5297 | 0.8381 | 7 |
| 0.2422 | 0.8970 | 0.6020 | 0.8190 | 8 |
| 0.2069 | 0.9272 | 0.6953 | 0.7429 | 9 |
| 0.1441 | 0.9519 | 0.6943 | 0.7524 | 10 |
| 0.1426 | 0.9492 | 0.6897 | 0.8190 | 11 |
| 0.0947 | 0.9725 | 0.9910 | 0.8000 | 12 |
| 0.0536 | 0.9835 | 0.9079 | 0.8095 | 13 |
| 0.0324 | 0.9959 | 0.9053 | 0.8286 | 14 |
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.9.1
- Datasets 2.16.1
- Tokenizers 0.13.3
|
Ening/dog_or_foot_model | Ening | 2024-02-06T19:23:41Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| image-classification | 2024-02-06T15:49:04Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dog_or_foot_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dog_or_foot_model
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0346
- Accuracy: 0.9976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3161 | 0.99 | 26 | 0.1164 | 0.9976 |
| 0.0495 | 1.98 | 52 | 0.0490 | 0.9905 |
| 0.0371 | 2.97 | 78 | 0.0346 | 0.9976 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
QT321/quynh_deberta-v3-Base-finetuned-AI_req_1 | QT321 | 2024-02-06T19:14:45Z | 44 | 0 | transformers | [
"transformers",
"tf",
"deberta-v2",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-06T18:44:58Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: quynh_deberta-v3-Base-finetuned-AI_req_1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# quynh_deberta-v3-Base-finetuned-AI_req_1
This model is a fine-tuned version of [microsoft/deberta-v3-Base](https://huggingface.co/microsoft/deberta-v3-Base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0260
- Train Accuracy: 0.9918
- Validation Loss: 1.1900
- Validation Accuracy: 0.7810
- Epoch: 12
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 2730, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.8121 | 0.6690 | 0.6778 | 0.7524 | 0 |
| 0.5487 | 0.8049 | 0.5841 | 0.7810 | 1 |
| 0.4181 | 0.8420 | 0.4797 | 0.8000 | 2 |
| 0.3674 | 0.8462 | 0.5794 | 0.7905 | 3 |
| 0.3232 | 0.8654 | 0.5766 | 0.7810 | 4 |
| 0.2762 | 0.8887 | 0.6246 | 0.8000 | 5 |
| 0.2165 | 0.9148 | 0.5751 | 0.7429 | 6 |
| 0.1623 | 0.9464 | 0.6580 | 0.8000 | 7 |
| 0.1645 | 0.9464 | 0.7932 | 0.7810 | 8 |
| 0.1231 | 0.9574 | 1.0112 | 0.8095 | 9 |
| 0.1089 | 0.9574 | 0.8745 | 0.7619 | 10 |
| 0.0587 | 0.9794 | 0.9496 | 0.7905 | 11 |
| 0.0260 | 0.9918 | 1.1900 | 0.7810 | 12 |
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.9.1
- Datasets 2.16.1
- Tokenizers 0.13.3
|
Overgrown7380/a2c-PandaPickAndPlace-v3 | Overgrown7380 | 2024-02-06T19:14:17Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaPickAndPlace-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-02-06T19:10:13Z | ---
library_name: stable-baselines3
tags:
- PandaPickAndPlace-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaPickAndPlace-v3
type: PandaPickAndPlace-v3
metrics:
- type: mean_reward
value: -50.00 +/- 0.00
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaPickAndPlace-v3**
This is a trained model of a **A2C** agent playing **PandaPickAndPlace-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is assumed from the standard SB3 Hub naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed from the standard SB3 Hub convention; adjust if the repo differs.
checkpoint = load_from_hub("Overgrown7380/a2c-PandaPickAndPlace-v3", "a2c-PandaPickAndPlace-v3.zip")
model = A2C.load(checkpoint)
```
|
thrunlab/original_glue_boolq | thrunlab | 2024-02-06T19:02:41Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"dataset:super_glue",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.1",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-06T14:06:08Z | ---
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.1
tags:
- trl
- sft
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: original_glue_boolq
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# original_glue_boolq
This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3297
- Accuracy: 0.8700
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 2
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4632 | 0.05 | 50 | 0.4840 | 0.7958 |
| 0.3453 | 0.1 | 100 | 0.3888 | 0.8226 |
| 0.2722 | 0.15 | 150 | 0.3590 | 0.8396 |
| 0.3266 | 0.2 | 200 | 0.3811 | 0.8459 |
| 0.3699 | 0.25 | 250 | 0.3534 | 0.8438 |
| 0.3554 | 0.3 | 300 | 0.3378 | 0.8565 |
| 0.1229 | 0.35 | 350 | 0.3368 | 0.8643 |
| 0.3522 | 0.4 | 400 | 0.3424 | 0.8643 |
| 0.2548 | 0.45 | 450 | 0.3467 | 0.8664 |
| 0.2119 | 0.5 | 500 | 0.3439 | 0.8714 |
| 0.2113 | 0.55 | 550 | 0.3518 | 0.8657 |
| 0.2122 | 0.6 | 600 | 0.3110 | 0.8770 |
| 0.3251 | 0.65 | 650 | 0.3323 | 0.8728 |
| 0.2904 | 0.7 | 700 | 0.3152 | 0.8792 |
| 0.6366 | 0.75 | 750 | 0.3502 | 0.8763 |
| 0.4161 | 0.8 | 800 | 0.3250 | 0.8806 |
| 0.1605 | 0.85 | 850 | 0.3258 | 0.8834 |
| 0.271 | 0.9 | 900 | 0.3330 | 0.8848 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
ashishkgpian/sharded_astromistral | ashishkgpian | 2024-02-06T18:58:55Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
]
| text-generation | 2024-02-06T18:56:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
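In the absence of the details above, a minimal sketch using the high-level `pipeline` API; the repo tags indicate a 4-bit bitsandbytes Mistral checkpoint, so `bitsandbytes` needs to be installed, and the prompt is illustrative only:

```python
from transformers import pipeline

# Load the quantized checkpoint and place it automatically across available devices.
generator = pipeline(
    "text-generation",
    model="ashishkgpian/sharded_astromistral",
    device_map="auto",
)
print(generator("What is a neutron star?", max_new_tokens=100)[0]["generated_text"])
```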
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Imadsarvm/Sarvm-Translation | Imadsarvm | 2024-02-06T18:57:06Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"kn",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-02-06T18:27:50Z | ---
language:
- kn
license: apache-2.0
tags:
- whisper-event
metrics:
- wer
model-index:
- name: Whisper Kannada Medium - Vasista Sai Lodagala
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: google/fleurs
type: google/fleurs
config: kn_in
split: test
metrics:
- type: wer
value: 7.65
name: WER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Kannada Medium
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on Kannada data drawn from multiple publicly available ASR corpora.
It has been fine-tuned as a part of the Whisper fine-tuning sprint.
**NOTE:** The code used to train this model is available for re-use in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository.
## Usage
In order to evaluate this model on an entire dataset, the evaluation code available in the [whisper-finetune](https://github.com/vasistalodagala/whisper-finetune) repository can be used.
The same repository also provides the scripts for faster inference using whisper-jax.
To run inference on a single audio file with this model, the following code snippet can be used:
```python
>>> import torch
>>> from transformers import pipeline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> transcribe = pipeline(task="automatic-speech-recognition", model="vasista22/whisper-kannada-medium", chunk_length_s=30, device=device)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="kn", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
For faster inference of whisper models, the [whisper-jax](https://github.com/sanchit-gandhi/whisper-jax) library can be used. Please follow the necessary installation steps as mentioned [here](https://github.com/vasistalodagala/whisper-finetune#faster-evaluation-with-whisper-jax), before using the following code snippet:
```python
>>> import jax.numpy as jnp
>>> from whisper_jax import FlaxWhisperForConditionalGeneration, FlaxWhisperPipline
>>> # path to the audio file to be transcribed
>>> audio = "/path/to/audio.format"
>>> transcribe = FlaxWhisperPipline("vasista22/whisper-kannada-medium", batch_size=16)
>>> transcribe.model.config.forced_decoder_ids = transcribe.tokenizer.get_decoder_prompt_ids(language="kn", task="transcribe")
>>> print('Transcription: ', transcribe(audio)["text"])
```
## Training and evaluation data
Training Data:
- [IISc-MILE Kannada ASR Corpus](https://www.openslr.org/126/)
- [ULCA ASR Corpus](https://github.com/Open-Speech-EkStep/ULCA-asr-dataset-corpus#kannada-labelled-total-duration-is-60891-hours)
- [Shrutilipi ASR Corpus](https://ai4bharat.org/shrutilipi)
- [Google/Fleurs Train+Dev set](https://huggingface.co/datasets/google/fleurs)
Evaluation Data:
- [Google/Fleurs Test Set](https://huggingface.co/datasets/google/fleurs)
- [IISc-MILE Test Set](https://www.openslr.org/126/)
- [OpenSLR](https://www.openslr.org/79/)
## Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 48
- seed: 22
- optimizer: adamw_bnb_8bit
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10000
- training_steps: 13752 (terminated upon convergence. Initially set to 51570 steps)
- mixed_precision_training: True
## Acknowledgement
This work was done at [Speech Lab, IIT Madras](https://asr.iitm.ac.in/).
The compute resources for this work were funded by "Bhashini: National Language translation Mission" project of the Ministry of Electronics and Information Technology (MeitY), Government of India.
|
CLMBR/det-noun-lstm-3 | CLMBR | 2024-02-06T18:51:20Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-01T11:59:40Z | ---
tags:
- generated_from_trainer
model-index:
- name: det-noun-lstm-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# det-noun-lstm-3
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9674
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.7819 | 0.03 | 76320 | 4.7449 |
| 4.4958 | 1.03 | 152640 | 4.4655 |
| 4.3518 | 0.03 | 228960 | 4.3313 |
| 4.2646 | 0.03 | 305280 | 4.2495 |
| 4.2051 | 1.03 | 381600 | 4.1935 |
| 4.1582 | 0.03 | 457920 | 4.1528 |
| 4.1191 | 1.03 | 534240 | 4.1216 |
| 4.0845 | 0.03 | 610560 | 4.0979 |
| 4.0555 | 1.03 | 686880 | 4.0782 |
| 4.0303 | 0.03 | 763200 | 4.0623 |
| 4.0098 | 1.03 | 839520 | 4.0498 |
| 3.9926 | 0.03 | 915840 | 4.0389 |
| 3.9779 | 0.03 | 992160 | 4.0301 |
| 3.9569 | 0.03 | 1068480 | 4.0224 |
| 3.9473 | 1.03 | 1144800 | 4.0156 |
| 3.9428 | 0.03 | 1221120 | 4.0101 |
| 3.9302 | 1.03 | 1297440 | 4.0054 |
| 3.9204 | 0.03 | 1373760 | 4.0012 |
| 3.9088 | 1.03 | 1450080 | 3.9972 |
| 3.9046 | 0.03 | 1526400 | 3.9938 |
| 3.9033 | 1.03 | 1602720 | 3.9913 |
| 3.8973 | 0.03 | 1679040 | 3.9880 |
| 3.8911 | 0.03 | 1755360 | 3.9859 |
| 3.8831 | 1.03 | 1831680 | 3.9839 |
| 3.8761 | 0.03 | 1908000 | 3.9819 |
| 3.8693 | 0.03 | 1984320 | 3.9800 |
| 3.8642 | 0.03 | 2060640 | 3.9783 |
| 3.8582 | 0.03 | 2136960 | 3.9762 |
| 3.8532 | 1.03 | 2213280 | 3.9749 |
| 3.8436 | 0.03 | 2289600 | 3.9733 |
| 3.8406 | 1.03 | 2365920 | 3.9727 |
| 3.8455 | 0.03 | 2442240 | 3.9717 |
| 3.8404 | 0.03 | 2518560 | 3.9710 |
| 3.8376 | 1.03 | 2594880 | 3.9702 |
| 3.8293 | 0.03 | 2671200 | 3.9697 |
| 3.8323 | 1.03 | 2747520 | 3.9689 |
| 3.8336 | 0.03 | 2823840 | 3.9686 |
| 3.8347 | 1.03 | 2900160 | 3.9681 |
| 3.8328 | 0.03 | 2976480 | 3.9678 |
| 3.8301 | 1.02 | 3052726 | 3.9674 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Eyesiga/Runyakore_XlSR_WAV2VEC | Eyesiga | 2024-02-06T18:50:10Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
]
| automatic-speech-recognition | 2024-02-06T16:34:20Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
model-index:
- name: Runyakore_XlSR_WAV2VEC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Runyakore_XlSR_WAV2VEC
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4952
- eval_wer: 0.5667
- eval_runtime: 16.7338
- eval_samples_per_second: 5.737
- eval_steps_per_second: 0.717
- epoch: 5.4
- step: 13000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
CLMBR/det-noun-transformer-3 | CLMBR | 2024-02-06T18:41:41Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"opt",
"text-generation",
"generated_from_trainer",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-01T11:58:58Z | ---
tags:
- generated_from_trainer
model-index:
- name: det-noun-transformer-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# det-noun-transformer-3
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8604
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.229 | 0.03 | 76320 | 4.1966 |
| 4.0225 | 1.03 | 152640 | 4.0278 |
| 3.9119 | 0.03 | 228960 | 3.9528 |
| 3.8437 | 0.03 | 305280 | 3.9120 |
| 3.7953 | 1.03 | 381600 | 3.8871 |
| 3.7538 | 0.03 | 457920 | 3.8699 |
| 3.7193 | 1.03 | 534240 | 3.8599 |
| 3.6865 | 0.03 | 610560 | 3.8533 |
| 3.6568 | 1.03 | 686880 | 3.8486 |
| 3.6323 | 0.03 | 763200 | 3.8451 |
| 3.612 | 1.03 | 839520 | 3.8445 |
| 3.593 | 0.03 | 915840 | 3.8424 |
| 3.5738 | 0.03 | 992160 | 3.8430 |
| 3.5506 | 1.03 | 1068480 | 3.8433 |
| 3.5384 | 0.03 | 1144800 | 3.8435 |
| 3.5287 | 1.03 | 1221120 | 3.8440 |
| 3.512 | 0.03 | 1297440 | 3.8463 |
| 3.499 | 1.03 | 1373760 | 3.8464 |
| 3.4808 | 0.03 | 1450080 | 3.8489 |
| 3.4728 | 1.03 | 1526400 | 3.8487 |
| 3.4668 | 0.03 | 1602720 | 3.8506 |
| 3.4605 | 1.03 | 1679040 | 3.8525 |
| 3.4498 | 0.03 | 1755360 | 3.8531 |
| 3.4355 | 1.03 | 1831680 | 3.8544 |
| 3.4245 | 0.03 | 1908000 | 3.8548 |
| 3.4117 | 0.03 | 1984320 | 3.8570 |
| 3.401 | 1.03 | 2060640 | 3.8575 |
| 3.393 | 0.03 | 2136960 | 3.8583 |
| 3.3814 | 1.03 | 2213280 | 3.8602 |
| 3.3636 | 0.03 | 2289600 | 3.8605 |
| 3.3563 | 1.03 | 2365920 | 3.8619 |
| 3.3539 | 0.03 | 2442240 | 3.8625 |
| 3.3429 | 1.03 | 2518560 | 3.8624 |
| 3.3318 | 0.03 | 2594880 | 3.8638 |
| 3.3196 | 1.03 | 2671200 | 3.8634 |
| 3.315 | 0.03 | 2747520 | 3.8632 |
| 3.3121 | 1.03 | 2823840 | 3.8632 |
| 3.3058 | 0.03 | 2900160 | 3.8624 |
| 3.3017 | 1.03 | 2976480 | 3.8610 |
| 3.2907 | 0.02 | 3052726 | 3.8604 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
CLMBR/det-noun-lstm-4 | CLMBR | 2024-02-06T18:22:52Z | 2 | 0 | transformers | [
"transformers",
"pytorch",
"rnn",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-01T11:59:40Z | ---
tags:
- generated_from_trainer
model-index:
- name: det-noun-lstm-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# det-noun-lstm-4
This model is a fine-tuned version of [](https://huggingface.co/) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9719
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 3052726
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-------:|:---------------:|
| 4.8017 | 0.03 | 76320 | 4.7666 |
| 4.5163 | 1.03 | 152640 | 4.4850 |
| 4.3691 | 0.03 | 228960 | 4.3481 |
| 4.2822 | 1.03 | 305280 | 4.2648 |
| 4.2214 | 0.03 | 381600 | 4.2067 |
| 4.1738 | 1.03 | 457920 | 4.1648 |
| 4.1346 | 0.03 | 534240 | 4.1335 |
| 4.0994 | 1.03 | 610560 | 4.1086 |
| 4.0717 | 0.03 | 686880 | 4.0880 |
| 4.0442 | 1.03 | 763200 | 4.0724 |
| 4.0257 | 0.03 | 839520 | 4.0599 |
| 4.0076 | 1.03 | 915840 | 4.0482 |
| 3.9916 | 0.03 | 992160 | 4.0389 |
| 3.9715 | 0.03 | 1068480 | 4.0308 |
| 3.96 | 1.03 | 1144800 | 4.0236 |
| 3.9543 | 0.03 | 1221120 | 4.0178 |
| 3.942 | 1.03 | 1297440 | 4.0123 |
| 3.9331 | 0.03 | 1373760 | 4.0075 |
| 3.9207 | 1.03 | 1450080 | 4.0036 |
| 3.9177 | 0.03 | 1526400 | 4.0000 |
| 3.9148 | 1.03 | 1602720 | 3.9964 |
| 3.9113 | 0.03 | 1679040 | 3.9938 |
| 3.9051 | 1.03 | 1755360 | 3.9917 |
| 3.8958 | 0.03 | 1831680 | 3.9896 |
| 3.888 | 1.03 | 1908000 | 3.9873 |
| 3.8823 | 0.03 | 1984320 | 3.9855 |
| 3.8771 | 0.03 | 2060640 | 3.9836 |
| 3.871 | 1.03 | 2136960 | 3.9821 |
| 3.8663 | 0.03 | 2213280 | 3.9807 |
| 3.8558 | 1.03 | 2289600 | 3.9791 |
| 3.853 | 0.03 | 2365920 | 3.9776 |
| 3.8596 | 1.03 | 2442240 | 3.9766 |
| 3.8526 | 0.03 | 2518560 | 3.9758 |
| 3.8496 | 1.03 | 2594880 | 3.9751 |
| 3.8438 | 0.03 | 2671200 | 3.9745 |
| 3.8452 | 1.03 | 2747520 | 3.9738 |
| 3.8491 | 0.03 | 2823840 | 3.9733 |
| 3.8485 | 1.03 | 2900160 | 3.9728 |
| 3.8481 | 0.03 | 2976480 | 3.9722 |
| 3.8418 | 1.02 | 3052726 | 3.9719 |
### Framework versions
- Transformers 4.33.3
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
thrunlab/mistral_sparse_80_percent_boolq_1000 | thrunlab | 2024-02-06T18:21:53Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"trl",
"sft",
"generated_from_trainer",
"dataset:super_glue",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-06T13:19:28Z | ---
tags:
- trl
- sft
- generated_from_trainer
datasets:
- super_glue
metrics:
- accuracy
model-index:
- name: mistral_sparse_80_percent_boolq_1000
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mistral_sparse_80_percent_boolq_1000
This model is a fine-tuned version of [](https://huggingface.co/) on the super_glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3381
- Accuracy: 0.8664
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 4
- seed: 2
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4991 | 0.05 | 50 | 0.5522 | 0.7216 |
| 0.3812 | 0.1 | 100 | 0.4342 | 0.8141 |
| 0.369 | 0.15 | 150 | 0.4112 | 0.8170 |
| 0.4132 | 0.2 | 200 | 0.4139 | 0.8382 |
| 0.4219 | 0.25 | 250 | 0.3940 | 0.8339 |
| 0.4144 | 0.3 | 300 | 0.3803 | 0.8481 |
| 0.1534 | 0.35 | 350 | 0.3786 | 0.8516 |
| 0.4855 | 0.4 | 400 | 0.3821 | 0.8502 |
| 0.2109 | 0.45 | 450 | 0.3583 | 0.8516 |
| 0.3026 | 0.5 | 500 | 0.3675 | 0.8558 |
| 0.2903 | 0.55 | 550 | 0.3744 | 0.8537 |
| 0.2988 | 0.6 | 600 | 0.3573 | 0.8587 |
| 0.3432 | 0.65 | 650 | 0.3396 | 0.8657 |
| 0.3156 | 0.7 | 700 | 0.3299 | 0.8671 |
| 0.4978 | 0.75 | 750 | 0.3623 | 0.8657 |
| 0.4523 | 0.8 | 800 | 0.3240 | 0.8700 |
| 0.2367 | 0.85 | 850 | 0.3393 | 0.8678 |
| 0.3334 | 0.9 | 900 | 0.3252 | 0.8834 |
| 0.3286 | 0.95 | 950 | 0.3605 | 0.8742 |
| 0.1659 | 1.0 | 1000 | 0.3269 | 0.8742 |
| 0.2373 | 1.05 | 1050 | 0.3256 | 0.8792 |
| 0.5102 | 1.1 | 1100 | 0.3633 | 0.8749 |
### Framework versions
- Transformers 4.35.2
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
|
vaicai/kaifa-support-chat-v4 | vaicai | 2024-02-06T18:18:42Z | 0 | 0 | null | [
"license:other",
"region:us"
]
| null | 2024-02-06T18:18:42Z | ---
license: other
license_name: do-not-use
license_link: LICENSE
---
|
Overgrown7380/a2c-PandaReachDense-v3 | Overgrown7380 | 2024-02-06T18:16:27Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-02-06T18:08:39Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v3
type: PandaReachDense-v3
metrics:
- type: mean_reward
value: -0.26 +/- 0.12
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v3**
This is a trained model of a **A2C** agent playing **PandaReachDense-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (checkpoint filename assumed from the standard SB3 Hub naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Filename assumed from the standard SB3 Hub convention.
checkpoint = load_from_hub("Overgrown7380/a2c-PandaReachDense-v3", "a2c-PandaReachDense-v3.zip")
model = A2C.load(checkpoint)
```
|
pawkanarek/spraix_sdxl_9frames_25epochs | pawkanarek | 2024-02-06T18:14:53Z | 3 | 0 | diffusers | [
"diffusers",
"art",
"text-to-image",
"dataset:pawkanarek/spraix_1024_9frames",
"license:gpl-3.0",
"diffusers:FlaxStableDiffusionXLPipeline",
"region:us"
]
| text-to-image | 2024-02-06T12:26:57Z | ---
license: gpl-3.0
datasets:
- pawkanarek/spraix_1024_9frames
library_name: diffusers
pipeline_tag: text-to-image
tags:
- art
---
### Why
My intention was to create a model that could generate animated sprites.
### How
Trained for ~48 hours (25 epochs in total) on a Google TPU v3-8 with my custom script [train_text_to_image_flax_sdxl](https://github.com/PawKanarek/spraix/blob/48d8c209a359622e6db56e6d555667ac466dc952/train_text_to_image_flax_sdxl.py) on my custom dataset [spraix_1024_9frames](https://huggingface.co/datasets/pawkanarek/spraix_1024_9frames).
This is my second attempt to create such a model. This time I changed the dataset to always consist of 9 frames, but it didn't help much.
### Appendix
This is a demonstration only. This is my first time writing a training script for such a complicated model in the Flax framework, so the script is probably full of bugs. The dataset I prepared is also far from perfect. But I'm happy that I can train, save, and load my custom SDXL model.
### How to use
Intended for use with Flax.
The model will generate ugly sprite animations: out of shape, deformed, pixelated, and uneven.
Prompt ideas:
```
"Pixel-art animation of a blue water droplet with legs, that: is swiniging axe, facing: East",
"Pixel-art animation of a Tree, that: is Idle, facing: South",
"Pixel-art animation of a Dinosaur with a backpack, that: is jumping, facing: North",
"Pixel-art animation of a Fire demon with axe, that: is running, facing West",
```
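For loading, a rough sketch with `diffusers`: this assumes the experimental `FlaxStableDiffusionXLPipeline` mirrors `FlaxStableDiffusionPipeline`'s `prepare_inputs`/call API, so treat it as a starting point rather than a tested recipe.
```python
import jax
from diffusers import FlaxStableDiffusionXLPipeline

# Assumption: the XL Flax pipeline loads and runs like the non-XL Flax pipeline
pipeline, params = FlaxStableDiffusionXLPipeline.from_pretrained(
    "pawkanarek/spraix_sdxl_9frames_25epochs"
)
prompt_ids = pipeline.prepare_inputs("Pixel-art animation of a Tree, that: is Idle, facing: South")
images = pipeline(prompt_ids, params, jax.random.PRNGKey(0), num_inference_steps=50).images
```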
Samples of the output of this model can be seen here: |
rhsaeedy/PPO-Lunarlander-v3 | rhsaeedy | 2024-02-06T18:12:32Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-02-06T18:12:21Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 289.03 +/- 17.47
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repo is an assumption):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub and load it (the filename is an assumption)
checkpoint = load_from_hub("rhsaeedy/PPO-Lunarlander-v3", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
SalehAhmad/Initial_Knowledge_Assessment_Test-Model-Phi2_3Epochs | SalehAhmad | 2024-02-06T18:08:05Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"phi",
"text-generation",
"custom_code",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-06T17:03:05Z | ---
language:
- en
library_name: transformers
pipeline_tag: text-generation
widget:
- text: |-
Instruct: You are a chatbot, who is helping to curate datasets. Based on the input paragraph as context generate as many mcq question as possible without repeptition. You donot generate repetitive questions.
When you are given a paragraph for context. You will generate multiple mcq questions, it's 4 options and it's actual answer.
For Example:
Paragraph: .....
-Start of Question-
Question: ......
Options:
a) .....
b) .....
c) .....
d) .....
Actual Answer: b)....
-End of Question-
-Start of Question-
Question: ......
Options:
a) .....
b) .....
c) .....
d) .....
Actual Answer: d)....
-End of Question-
and so on.
Paragraph: Computer science theories and basic programming principles form the foundation of the ever-evolving field of technology. At its core, computer science is not just about writing code but involves the exploration and application of fundamental principles that underpin the design and functioning of computers. One key theory in computer science is the Turing Machine, proposed by Alan Turing in the 1930s. This theoretical construct laid the groundwork for understanding the limits and possibilities of computation. The idea that any computable function could be computed by a Turing Machine provided a theoretical framework for the development of modern computers. Another essential theory in computer science is the concept of algorithms. Algorithms are step-by-step procedures or formulas for solving problems and performing tasks. They are crucial in programming as they guide the computer in executing tasks efficiently. The study of algorithms involves analyzing their efficiency and correctness, and it plays a pivotal role in designing software that can handle large datasets and complex computations. Moreover, algorithms are closely related to data structures, which are the ways in which data is organized and stored in a computer's memory. Efficient data structures are essential for optimizing the performance of algorithms.
Output:
example_title: "Example 1"
- text: |-
Instruct: You are a chatbot, who is helping to curate datasets. Based on the input paragraph as context generate as many mcq question as possible without repeptition. You donot generate repetitive questions.
When you are given a paragraph for context. You will generate multiple mcq questions, it's 4 options and it's actual answer.
For Example:
Paragraph: .....
-Start of Question-
Question: ......
Options:
a) .....
b) .....
c) .....
d) .....
Actual Answer: b)....
-End of Question-
-Start of Question-
Question: ......
Options:
a) .....
b) .....
c) .....
d) .....
Actual Answer: d)....
-End of Question-
and so on.
Paragraph: Business financial education is an essential aspect of any successful enterprise. It encompasses a range of knowledge and skills necessary for effectively managing the financial aspects of a business, including budgeting, financial analysis, investment strategies, and risk management. A solid understanding of financial concepts enables business owners and managers to make informed decisions that drive profitability and sustainability. It empowers individuals within organizations to interpret financial statements, assess performance metrics, and identify opportunities for growth and improvement. Moreover, financial education fosters accountability and transparency, ensuring that stakeholders have a clear understanding of the financial health and trajectory of the business. By investing in financial education, businesses can mitigate risks, optimize resources, and ultimately achieve their long-term objectives.
Output:
example_title: "Example 2"
- text: |-
Instruct: You are a chatbot, who is helping to curate datasets. Based on the input paragraph as context generate as many mcq question as possible without repeptition. You donot generate repetitive questions.
When you are given a paragraph for context. You will generate multiple mcq questions, it's 4 options and it's actual answer.
For Example:
Paragraph: .....
-Start of Question-
Question: ......
Options:
a) .....
b) .....
c) .....
d) .....
Actual Answer: b)....
-End of Question-
-Start of Question-
Question: ......
Options:
a) .....
b) .....
c) .....
d) .....
Actual Answer: d)....
-End of Question-
and so on.
Paragraph: LLMs, or Language Model Models, are advanced artificial intelligence systems designed to process and generate human-like text based on input prompts. LLMs leverage sophisticated algorithms and vast datasets to understand and generate coherent language across a wide range of topics and contexts. Businesses and individuals can benefit from LLMs in various ways, including content creation, customer support, language translation, and data analysis. By leveraging LLMs, businesses can automate repetitive tasks, streamline workflows, and improve efficiency. Moreover, LLMs can assist in generating personalized content, enhancing customer engagement, and driving conversions. To maximize the benefits of LLMs, it's essential to understand their capabilities and limitations, as well as best practices for integrating them into existing workflows. Additionally, staying updated on advancements in LLM technology and investing in ongoing training and development can ensure that businesses harness the full potential of these powerful tools to achieve their objectives.
Output:
example_title: "Example 3"
---
This model is for the module:
# Initial Knowledge Assessment Test Generation
## Steps
- Data was gathered by:
  - Downloading YouTube playlists for each course from every category
  - Transcribing the videos
  - Feeding the text to ChatGPT via the API to formulate prompt-response pairs
- The 2.78-billion-parameter Phi-2 model by [Microsoft](https://huggingface.co/microsoft/phi-2) was fine-tuned on the curated data.
## How to use the model?
### Note the format of the prompt. Only change the text in the variable `paragraph`; it acts as the context for the generated test.
```
# Use a Hugging Face pipeline as a high-level helper
from transformers import pipeline
import torch
pipe = pipeline("text-generation",
model="SalehAhmad/Initial_Knowledge_Assessment_Test-Model-Phi2_3Epochs",
device_map='auto',
torch_dtype=torch.bfloat16,
max_new_tokens=1024)
paragraph = '''Computer science theories and basic programming principles form the foundation of the ever-evolving field of technology. At its core, computer science is not just about writing code but involves the exploration and application of fundamental principles that underpin the design and functioning of computers. One key theory in computer science is the Turing Machine, proposed by Alan Turing in the 1930s. This theoretical construct laid the groundwork for understanding the limits and possibilities of computation. The idea that any computable function could be computed by a Turing Machine provided a theoretical framework for the development of modern computers.
Another essential theory in computer science is the concept of algorithms. Algorithms are step-by-step procedures or formulas for solving problems and performing tasks. They are crucial in programming as they guide the computer in executing tasks efficiently. The study of algorithms involves analyzing their efficiency and correctness, and it plays a pivotal role in designing software that can handle large datasets and complex computations. Moreover, algorithms are closely related to data structures, which are the ways in which data is organized and stored in a computer's memory. Efficient data structures are essential for optimizing the performance of algorithms.'''
prompt = f'''Instruct: You are a chatbot, who is helping to curate datasets. Based on the input paragraph as context generate as many mcq question as possible without repeptition. You donot generate repetitive questions.
When you are given a paragraph for context. You will generate multiple mcq questions, it's 4 options and it's actual answer.
For Example:
Paragraph: .....
-Start of Question-
Question: ......
Options:
a) .....
b) .....
c) .....
d) .....
Actual Answer: b)....
-End of Question-
-Start of Question-
Question: ......
Options:
a) .....
b) .....
c) .....
d) .....
Actual Answer: d)....
-End of Question-
and so on.
Paragraph: {paragraph}
Output: '''
output = pipe(prompt,
num_return_sequences=1,
return_full_text=False)
print(output[0]['generated_text'])
``` |
Tamnemtf/gpt2_oscar-mini | Tamnemtf | 2024-02-06T18:05:50Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"dataset:nthngdy/oscar-mini",
"dataset:Tamnemtf/VietNamese_lang",
"license:unknown",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-01-28T11:49:13Z | ---
license: unknown
datasets:
- nthngdy/oscar-mini
- Tamnemtf/VietNamese_lang
--- |
imageomics/bioclip-vit-b-16-inat-only | imageomics | 2024-02-06T18:05:43Z | 2 | 0 | open_clip | [
"open_clip",
"zero-shot-image-classification",
"clip",
"biology",
"CV",
"images",
"animals",
"species",
"taxonomy",
"rare species",
"endangered species",
"evolutionary biology",
"multimodal",
"knowledge-guided",
"en",
"dataset:iNat21",
"arxiv:2311.18803",
"license:mit",
"region:us"
]
| zero-shot-image-classification | 2024-02-06T16:34:45Z | ---
license:
- mit
language:
- en
library_name: open_clip
tags:
- zero-shot-image-classification
- clip
- biology
- CV
- images
- animals
- species
- taxonomy
- rare species
- endangered species
- evolutionary biology
- multimodal
- knowledge-guided
datasets:
- iNat21
---
# Model Card for BioCLIP
<!--
This modelcard has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). And further altered to suit Imageomics Institute needs -->
BioCLIP is a foundation model for the tree of life, built using CLIP architecture as a vision model for general organismal biology.
This model is trained on [iNat21](https://github.com/visipedia/inat_comp/tree/master/2021) only, unlike [BioCLIP](https://huggingface.co/imageomics/bioclip), which is trained on [TreeOfLife-10M](https://huggingface.co/datasets/imageomics/TreeOfLife-10M). More information can be found in the [BioCLIP](https://huggingface.co/imageomics/bioclip) model card.
## How to Get Started with the Model
BioCLIP can be used with the `open_clip` library:
```py
import open_clip
model, preprocess_train, preprocess_val = open_clip.create_model_and_transforms('hf-hub:imageomics/bioclip-vit-b-16-inat-only')
tokenizer = open_clip.get_tokenizer('hf-hub:imageomics/bioclip-vit-b-16-inat-only')
```
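Continuing from the snippet above, zero-shot classification follows the usual open_clip pattern; a minimal sketch (the image path and candidate taxon labels are placeholders):
```py
import torch
from PIL import Image

# Placeholder image and labels; BioCLIP expects taxonomic or common names as text
image = preprocess_val(Image.open("example.jpg")).unsqueeze(0)
labels = ["Danaus plexippus", "Vanessa cardui", "Papilio machaon"]
text = tokenizer(labels)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    # Normalize, then compute cosine-similarity logits and softmax over labels
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)
    probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print(dict(zip(labels, probs[0].tolist())))
```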
## Training Details
### Compute Infrastructure
Training was performed on 4 NVIDIA A100-80GB GPUs distributed over 1 node on [OSC's](https://www.osc.edu/) Ascend HPC Cluster with global batch size 16,384 for 2 days.
Based on the Machine Learning Impact calculator presented in Lacoste et al. (2019), that's 33.16 kg of CO2 eq., or 134 km driven by an average ICE car.
### Training Data
This model was trained on [iNat21](https://github.com/visipedia/inat_comp/tree/master/2021), which is a compilation of images matched to [Linnaean taxonomic rank](https://www.britannica.com/science/taxonomy/The-objectives-of-biological-classification) from kingdom through species. They are also matched with common (vernacular) name of the subject of the image where available.
### Training Hyperparameters
- **Training regime:**
Unlike [BioCLIP](https://huggingface.co/imageomics/bioclip), this model is trained with a global batch size of 16K. We pick the epoch-65 checkpoint, which has the lowest loss on the validation set (~5% of training samples), for downstream task evaluation.
### Summary
BioCLIP outperforms general-domain baselines by 10% on average.
### Model Examination
We encourage readers to see Section 4.6 of [our paper](https://doi.org/10.48550/arXiv.2311.18803).
In short, the iNat21-only BioCLIP forms representations that align more closely with the taxonomic hierarchy than general-domain baselines like CLIP or OpenCLIP.
## Citation
**BibTeX:**
```
@software{bioclip2023,
author = {Samuel Stevens and Jiaman Wu and Matthew J. Thompson and Elizabeth G. Campolongo and Chan Hee Song and David Edward Carlyn and Li Dong and Wasila M. Dahdul and Charles Stewart and Tanya Berger-Wolf and Wei-Lun Chao and Yu Su},
doi = {10.57967/hf/1511},
month = nov,
title = {BioCLIP},
version = {v0.1},
year = {2023}
}
```
Please also cite our paper:
```
@article{stevens2023bioclip,
title = {BIOCLIP: A Vision Foundation Model for the Tree of Life},
author = {Samuel Stevens and Jiaman Wu and Matthew J Thompson and Elizabeth G Campolongo and Chan Hee Song and David Edward Carlyn and Li Dong and Wasila M Dahdul and Charles Stewart and Tanya Berger-Wolf and Wei-Lun Chao and Yu Su},
year = {2023},
eprint = {2311.18803},
archivePrefix = {arXiv},
primaryClass = {cs.CV}
}
```
Please also consider citing OpenCLIP and iNat21:
```
@software{ilharco_gabriel_2021_5143773,
author={Ilharco, Gabriel and Wortsman, Mitchell and Wightman, Ross and Gordon, Cade and Carlini, Nicholas and Taori, Rohan and Dave, Achal and Shankar, Vaishaal and Namkoong, Hongseok and Miller, John and Hajishirzi, Hannaneh and Farhadi, Ali and Schmidt, Ludwig},
title={OpenCLIP},
year={2021},
doi={10.5281/zenodo.5143773},
}
```
```
@misc{inat2021,
author={Van Horn, Grant and Mac Aodha, Oisin},
title={iNat Challenge 2021 - FGVC8},
publisher={Kaggle},
year={2021},
url={https://kaggle.com/competitions/inaturalist-2021}
}
```
## Acknowledgements
The authors would like to thank Josef Uyeda, Jim Balhoff, Dan Rubenstein, Hank Bart, Hilmar Lapp, Sara Beery, and colleagues from the Imageomics Institute and the OSU NLP group for their valuable feedback. We also thank the BIOSCAN-1M team and the iNaturalist team for making their data available and easy to use, and Jennifer Hammack at EOL for her invaluable help in accessing EOL’s images.
The [Imageomics Institute](https://imageomics.org) is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under [Award #2118240](https://www.nsf.gov/awardsearch/showAward?AWD_ID=2118240) (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
## Model Card Authors
Elizabeth G. Campolongo, Samuel Stevens, and Jiaman Wu
## Model Card Contact
[[email protected]](mailto:[email protected]) |
oyemade/distilbert-base-uncased-finetuned-emotion | oyemade | 2024-02-06T17:46:45Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
]
| text-classification | 2024-02-06T17:23:51Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9245
- name: F1
type: f1
value: 0.9243518892752073
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2161
- Accuracy: 0.9245
- F1: 0.9244
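A minimal inference sketch with the `pipeline` API (the example sentence is arbitrary):
```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub
classifier = pipeline("text-classification", model="oyemade/distilbert-base-uncased-finetuned-emotion")
print(classifier("I'm thrilled with how this turned out!"))
```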
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3067 | 0.911 | 0.9101 |
| No log | 2.0 | 500 | 0.2161 | 0.9245 | 0.9244 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.1
|
AthenaAgent/Mockingbirdv1-merged-SFT | AthenaAgent | 2024-02-06T17:45:34Z | 7 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"en",
"dataset:AthenaAgent/MockingBirdv1-SFT",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
]
| text-generation | 2024-02-02T07:54:56Z | ---
library_name: transformers
license: apache-2.0
datasets:
- AthenaAgent/MockingBirdv1-SFT
language:
- en
---
# Model Card for Mockingbirdv1-SFT
Mockingbirdv1-SFT is an opinionated large language model.
It is trained to pick a perspective on a given topic and then provide nuanced arguments to back up that worldview.
## Model Details
### Model Description
- **Developed by:** AthenaAgent
- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model [optional]:** mistralai/Mistral-7B-Instruct-v0.2
## Uses
The model uses the same instruction format as Mistral Instruct: start with the BOS token, followed by `[INST]`, then the question or prompt, followed by `[/INST]`.
For example: `tokenizer.bos_token + "[INST] Is accelerating the techno-capital machine best bet for humanity's survival? [/INST]"`. A minimal generation sketch following this format is shown below.
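The following sketch loads the model with `transformers` and applies that format (the generation settings are arbitrary choices):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("AthenaAgent/Mockingbirdv1-merged-SFT")
model = AutoModelForCausalLM.from_pretrained("AthenaAgent/Mockingbirdv1-merged-SFT", device_map="auto")

# Build the prompt manually, so skip the tokenizer's automatic special tokens
prompt = tokenizer.bos_token + "[INST] Is accelerating the techno-capital machine best bet for humanity's survival? [/INST]"
inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```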
|
TUCN/Segformer_OCT_Retina | TUCN | 2024-02-06T17:44:04Z | 17 | 0 | transformers | [
"transformers",
"pytorch",
"segformer",
"license:mit",
"endpoints_compatible",
"region:us"
]
| null | 2024-02-06T17:38:44Z | ---
license: mit
---
# SegFormer model fine-tuned on AROI
This is a SegFormer model fine-tuned on the [AROI: Annotated Retinal OCT Images Database](https://ieeexplore.ieee.org/abstract/document/9596934).
Disclaimer: The team releasing SegFormer did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
SegFormer consists of a hierarchical Transformer encoder and a lightweight all-MLP decode head to achieve great results on semantic segmentation benchmarks such as ADE20K and Cityscapes. The hierarchical Transformer is first pre-trained on ImageNet-1k, after which a decode head is added and fine-tuned altogether on a downstream dataset.
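A minimal inference sketch (the image path is a placeholder, and the default processor configuration is an assumption; the repo may ship its own preprocessing config):
```python
import torch
from PIL import Image
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation

processor = SegformerImageProcessor()  # default preprocessing; an assumption
model = SegformerForSemanticSegmentation.from_pretrained("TUCN/Segformer_OCT_Retina")

image = Image.open("oct_scan.png").convert("RGB")  # hypothetical OCT B-scan
inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, num_labels, height/4, width/4)
mask = logits.argmax(dim=1)[0]  # per-pixel retinal-layer predictions
```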
|
FatmaYoussef/q-taxi-v3 | FatmaYoussef | 2024-02-06T17:43:06Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
]
| reinforcement-learning | 2024-02-06T17:38:12Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.16 +/- 3.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gymnasium as gym  # assumption: the course notebooks use gymnasium as gym

# load_from_hub comes from the Deep RL course notebook (see the sketch below)
model = load_from_hub(repo_id="FatmaYoussef/q-taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
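Note that `load_from_hub` is not part of a published library; it is defined in the Hugging Face Deep RL course notebook. A minimal stand-in, assuming the checkpoint is a pickled dict with keys such as `env_id` and `qtable`:
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    # Download the file from the Hub, then unpickle the saved Q-learning state
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```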
|