modelId (string, 5–139 chars) | author (string, 2–42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-26 18:27:55) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 499 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-26 18:27:32) | card (string, 11 to 1.01M chars)
---|---|---|---|---|---|---|---|---|---|
touhidulislam/BERTweet_retrain_2021_15 | touhidulislam | 2024-11-23T14:06:47Z | 178 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T14:06:26Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2021_15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2021_15
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5915
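As a quick sanity check, the checkpoint can be exercised with the 🤗 `fill-mask` pipeline — a minimal sketch, assuming the repo ships its tokenizer (BERTweet uses a RoBERTa-style `<mask>` token and tweet normalization):
```py
from transformers import pipeline

# Minimal usage sketch for this checkpoint; example input is illustrative.
fill_mask = pipeline("fill-mask", model="touhidulislam/BERTweet_retrain_2021_15")
for pred in fill_mask("I love drinking <mask> in the morning."):
    print(pred["token_str"], round(pred["score"], 4))
```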
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7312 | 1.0 | 5863 | 2.6694 |
| 2.7401 | 2.0 | 11726 | 2.6246 |
| 2.7781 | 3.0 | 17589 | 2.5933 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
zelk12/MT3-Gen2-IMUBMA-gemma-2-9B | zelk12 | 2024-11-23T13:56:18Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT3-Gen2-BMA-gemma-2-9B",
"base_model:merge:zelk12/MT3-Gen2-BMA-gemma-2-9B",
"base_model:zelk12/MT3-Gen2-IMU-gemma-2-9B",
"base_model:merge:zelk12/MT3-Gen2-IMU-gemma-2-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-20T15:53:14Z | ---
base_model:
- zelk12/MT3-Gen2-BMA-gemma-2-9B
- zelk12/MT3-Gen2-IMU-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by @mradermacher
GGUF Static: https://huggingface.co/mradermacher/MT3-Gen2-IMUBMA-gemma-2-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
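For intuition, SLERP interpolates each pair of weight tensors along the great circle between them instead of linearly. A simplified per-tensor sketch follows (mergekit's actual implementation adds guards for near-parallel vectors and handles non-float tensors differently):
```py
import torch

def slerp(w0: torch.Tensor, w1: torch.Tensor, t: float) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors (simplified)."""
    a, b = w0.flatten().float(), w1.flatten().float()
    # Angle between the two flattened weight vectors.
    omega = torch.arccos(torch.clamp(
        torch.dot(a / a.norm(), b / b.norm()), -1.0, 1.0))
    so = torch.sin(omega)
    out = (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
    return out.reshape(w0.shape).to(w0.dtype)

# With t: 0.25, the result stays 75% of the way toward the base model.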
### Models Merged
The following models were included in the merge:
* [zelk12/MT3-Gen2-BMA-gemma-2-9B](https://huggingface.co/zelk12/MT3-Gen2-BMA-gemma-2-9B)
* [zelk12/MT3-Gen2-IMU-gemma-2-9B](https://huggingface.co/zelk12/MT3-Gen2-IMU-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT3-Gen2-IMU-gemma-2-9B
- model: zelk12/MT3-Gen2-BMA-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT3-Gen2-IMU-gemma-2-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
zelk12/MT3-Gen2-BMA-gemma-2-9B | zelk12 | 2024-11-23T13:55:52Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT3-Gen2-BB-gemma-2-MTMRAv0.1-9B",
"base_model:merge:zelk12/MT3-Gen2-BB-gemma-2-MTMRAv0.1-9B",
"base_model:zelk12/MT3-Gen2-MA-gemma-2-MTMQv1-9B",
"base_model:merge:zelk12/MT3-Gen2-MA-gemma-2-MTMQv1-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-20T15:43:14Z | ---
base_model:
- zelk12/MT3-Gen2-MA-gemma-2-MTMQv1-9B
- zelk12/MT3-Gen2-BB-gemma-2-MTMRAv0.1-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by @mradermacher
GGUF Static: https://huggingface.co/mradermacher/MT3-Gen2-BMA-gemma-2-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT3-Gen2-MA-gemma-2-MTMQv1-9B](https://huggingface.co/zelk12/MT3-Gen2-MA-gemma-2-MTMQv1-9B)
* [zelk12/MT3-Gen2-BB-gemma-2-MTMRAv0.1-9B](https://huggingface.co/zelk12/MT3-Gen2-BB-gemma-2-MTMRAv0.1-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT3-Gen2-BB-gemma-2-MTMRAv0.1-9B
- model: zelk12/MT3-Gen2-MA-gemma-2-MTMQv1-9B
merge_method: slerp
base_model: zelk12/MT3-Gen2-BB-gemma-2-MTMRAv0.1-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
zelk12/MT3-Gen2-IMU-gemma-2-9B | zelk12 | 2024-11-23T13:55:28Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT3-Gen2-IF-gemma-2-RAv0.1Av4aA-9B",
"base_model:merge:zelk12/MT3-Gen2-IF-gemma-2-RAv0.1Av4aA-9B",
"base_model:zelk12/MT3-Gen2-MU-gemma-2-GQv1-9B",
"base_model:merge:zelk12/MT3-Gen2-MU-gemma-2-GQv1-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-20T14:59:12Z | ---
base_model:
- zelk12/MT3-Gen2-MU-gemma-2-GQv1-9B
- zelk12/MT3-Gen2-IF-gemma-2-RAv0.1Av4aA-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by @mradermacher
GGUF Static: https://huggingface.co/mradermacher/MT3-Gen2-IMU-gemma-2-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT3-Gen2-MU-gemma-2-GQv1-9B](https://huggingface.co/zelk12/MT3-Gen2-MU-gemma-2-GQv1-9B)
* [zelk12/MT3-Gen2-IF-gemma-2-RAv0.1Av4aA-9B](https://huggingface.co/zelk12/MT3-Gen2-IF-gemma-2-RAv0.1Av4aA-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT3-Gen2-IF-gemma-2-RAv0.1Av4aA-9B
- model: zelk12/MT3-Gen2-MU-gemma-2-GQv1-9B
merge_method: slerp
base_model: zelk12/MT3-Gen2-IF-gemma-2-RAv0.1Av4aA-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
zelk12/MT3-Gen2-GMM-gemma-2-9B | zelk12 | 2024-11-23T13:54:59Z | 5 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT3-Gen2-GP-gemma-2-RAv0.1MTM-9B",
"base_model:merge:zelk12/MT3-Gen2-GP-gemma-2-RAv0.1MTM-9B",
"base_model:zelk12/MT3-Gen2-MM-gemma-2-RAv0.1Av4aA-9B",
"base_model:merge:zelk12/MT3-Gen2-MM-gemma-2-RAv0.1Av4aA-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-20T14:18:42Z | ---
base_model:
- zelk12/MT3-Gen2-MM-gemma-2-RAv0.1Av4aA-9B
- zelk12/MT3-Gen2-GP-gemma-2-RAv0.1MTM-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by @mradermacher
GGUF Static: https://huggingface.co/mradermacher/MT3-Gen2-GMM-gemma-2-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT3-Gen2-MM-gemma-2-RAv0.1Av4aA-9B](https://huggingface.co/zelk12/MT3-Gen2-MM-gemma-2-RAv0.1Av4aA-9B)
* [zelk12/MT3-Gen2-GP-gemma-2-RAv0.1MTM-9B](https://huggingface.co/zelk12/MT3-Gen2-GP-gemma-2-RAv0.1MTM-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT3-Gen2-GP-gemma-2-RAv0.1MTM-9B
- model: zelk12/MT3-Gen2-MM-gemma-2-RAv0.1Av4aA-9B
merge_method: slerp
base_model: zelk12/MT3-Gen2-GP-gemma-2-RAv0.1MTM-9B
dtype: bfloat16
parameters:
t: 0.25
```
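The merged checkpoint loads like any other Gemma 2 causal LM. A minimal sketch, assuming the merge inherits Gemma 2's chat template and enough GPU memory for a 9B model in bfloat16:
```py
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zelk12/MT3-Gen2-GMM-gemma-2-9B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto")

# Instruction-tuned Gemma 2 merges are typically prompted via the chat template.
messages = [{"role": "user", "content": "Write a haiku about merging models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```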
|
mav23/vicuna-33b-v1.3-GGUF | mav23 | 2024-11-23T13:54:33Z | 93 | 1 | null | [
"gguf",
"arxiv:2302.13971",
"arxiv:2306.05685",
"region:us"
] | null | 2024-11-23T09:36:47Z | ---
inference: false
---
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
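Since this repository hosts GGUF quantizations, the weights can also be run locally with llama-cpp-python. A minimal sketch — the quant filename pattern is an assumption, so check the repo's Files tab for the variant you want:
```py
from llama_cpp import Llama

# Hypothetical filename glob; substitute an actual .gguf file from this repo.
llm = Llama.from_pretrained(
    repo_id="mav23/vicuna-33b-v1.3-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=2048,
)
# Vicuna v1.3 uses the "USER: ... ASSISTANT:" conversation format.
print(llm("USER: What is Vicuna? ASSISTANT:", max_tokens=128)["choices"][0]["text"])
```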
## Training Details
Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 125K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Differences between Vicuna versions
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md) |
zelk12/MT3-Gen2-GP-gemma-2-RAv0.1MTM-9B | zelk12 | 2024-11-23T13:49:35Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT-Merge-gemma-2-9B",
"base_model:merge:zelk12/MT-Merge-gemma-2-9B",
"base_model:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1",
"base_model:merge:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-20T13:22:58Z | ---
base_model:
- zelk12/MT-Merge-gemma-2-9B
- zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by @mradermacher
GGUF Static: https://huggingface.co/mradermacher/MT3-Gen2-GP-gemma-2-RAv0.1MTM-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT-Merge-gemma-2-9B](https://huggingface.co/zelk12/MT-Merge-gemma-2-9B)
* [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1
- model: zelk12/MT-Merge-gemma-2-9B
merge_method: slerp
base_model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1
dtype: bfloat16
parameters:
t: 0.25
```
|
zelk12/MT3-Gen2-MA-gemma-2-MTMQv1-9B | zelk12 | 2024-11-23T13:48:59Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:sam-paech/Quill-v1",
"base_model:merge:sam-paech/Quill-v1",
"base_model:zelk12/MT-Merge-gemma-2-9B",
"base_model:merge:zelk12/MT-Merge-gemma-2-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-20T13:02:31Z | ---
base_model:
- sam-paech/Quill-v1
- zelk12/MT-Merge-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by @mradermacher
GGUF Static: https://huggingface.co/mradermacher/MT3-Gen2-MA-gemma-2-MTMQv1-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [sam-paech/Quill-v1](https://huggingface.co/sam-paech/Quill-v1)
* [zelk12/MT-Merge-gemma-2-9B](https://huggingface.co/zelk12/MT-Merge-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT-Merge-gemma-2-9B
- model: sam-paech/Quill-v1
merge_method: slerp
base_model: zelk12/MT-Merge-gemma-2-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
zelk12/MT3-Gen2-BB-gemma-2-MTMRAv0.1-9B | zelk12 | 2024-11-23T13:48:32Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT-Merge-gemma-2-9B",
"base_model:merge:zelk12/MT-Merge-gemma-2-9B",
"base_model:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1",
"base_model:merge:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-20T12:51:21Z | ---
base_model:
- zelk12/MT-Merge-gemma-2-9B
- zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by @mradermacher
GGUF Static: https://huggingface.co/mradermacher/MT3-Gen2-BB-gemma-2-MTMRAv0.1-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT-Merge-gemma-2-9B](https://huggingface.co/zelk12/MT-Merge-gemma-2-9B)
* [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT-Merge-gemma-2-9B
- model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1
merge_method: slerp
base_model: zelk12/MT-Merge-gemma-2-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
zelk12/MT2-Gen2-BG-gemma-2-9B | zelk12 | 2024-11-23T13:45:00Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT2-Gen2-BB-gemma-2-MTMMT5-9B",
"base_model:merge:zelk12/MT2-Gen2-BB-gemma-2-MTMMT5-9B",
"base_model:zelk12/MT2-Gen2-GP-gemma-2-RIv0.1MTM-9B",
"base_model:merge:zelk12/MT2-Gen2-GP-gemma-2-RIv0.1MTM-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-12T16:18:04Z | ---
base_model:
- zelk12/MT2-Gen2-BB-gemma-2-MTMMT5-9B
- zelk12/MT2-Gen2-GP-gemma-2-RIv0.1MTM-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by @mradermacher
GGUF Static: https://huggingface.co/models?other=base_model:quantized:zelk12/MT2-Gen2-BG-gemma-2-9B
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT2-Gen2-BB-gemma-2-MTMMT5-9B](https://huggingface.co/zelk12/MT2-Gen2-BB-gemma-2-MTMMT5-9B)
* [zelk12/MT2-Gen2-GP-gemma-2-RIv0.1MTM-9B](https://huggingface.co/zelk12/MT2-Gen2-GP-gemma-2-RIv0.1MTM-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT2-Gen2-BB-gemma-2-MTMMT5-9B
- model: zelk12/MT2-Gen2-GP-gemma-2-RIv0.1MTM-9B
merge_method: slerp
base_model: zelk12/MT2-Gen2-BB-gemma-2-MTMMT5-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
zelk12/MT2-Gen2-IMM-gemma-2-9B | zelk12 | 2024-11-23T13:44:19Z | 5 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT2-Gen2-IF-gemma-2-MT5MTM-9B",
"base_model:merge:zelk12/MT2-Gen2-IF-gemma-2-MT5MTM-9B",
"base_model:zelk12/MT2-Gen2-MM-gemma-2-Rv0.4RAv0.1t0.25-9B",
"base_model:merge:zelk12/MT2-Gen2-MM-gemma-2-Rv0.4RAv0.1t0.25-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-12T15:00:48Z | ---
base_model:
- zelk12/MT2-Gen2-IF-gemma-2-MT5MTM-9B
- zelk12/MT2-Gen2-MM-gemma-2-Rv0.4RAv0.1t0.25-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by @mradermacher
GGUF Static: https://huggingface.co/mradermacher/MT2-Gen2-IMM-gemma-2-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT2-Gen2-IF-gemma-2-MT5MTM-9B](https://huggingface.co/zelk12/MT2-Gen2-IF-gemma-2-MT5MTM-9B)
* [zelk12/MT2-Gen2-MM-gemma-2-Rv0.4RAv0.1t0.25-9B](https://huggingface.co/zelk12/MT2-Gen2-MM-gemma-2-Rv0.4RAv0.1t0.25-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT2-Gen2-IF-gemma-2-MT5MTM-9B
- model: zelk12/MT2-Gen2-MM-gemma-2-Rv0.4RAv0.1t0.25-9B
merge_method: slerp
base_model: zelk12/MT2-Gen2-IF-gemma-2-MT5MTM-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
mcguiver/xlm-roberta-base-finetuned-panx-de | mcguiver | 2024-11-23T13:43:12Z | 114 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-11-16T05:46:11Z | ---
library_name: transformers
license: mit
base_model: xlm-roberta-base
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1376
- F1: 0.8644
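For a quick check, the fine-tuned checkpoint can be used through the `token-classification` pipeline — a minimal sketch, assuming the usual PAN-X NER label set so that `aggregation_strategy` can group subword tokens into entity spans:
```py
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="mcguiver/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",  # merge subword tokens into entity spans
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```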
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: AdamW (torch implementation) with betas=(0.9,0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2579 | 1.0 | 525 | 0.1546 | 0.8179 |
| 0.1283 | 2.0 | 1050 | 0.1378 | 0.8518 |
| 0.0805 | 3.0 | 1575 | 0.1376 | 0.8644 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cpu
- Datasets 3.1.0
- Tokenizers 0.20.3
|
zelk12/MT2-Gen2-IF-gemma-2-MT5MTM-9B | zelk12 | 2024-11-23T13:42:29Z | 5 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT-Merge-gemma-2-9B",
"base_model:merge:zelk12/MT-Merge-gemma-2-9B",
"base_model:zelk12/MT5-gemma-2-9B",
"base_model:merge:zelk12/MT5-gemma-2-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-12T13:44:20Z | ---
base_model:
- zelk12/MT5-gemma-2-9B
- zelk12/MT-Merge-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by @mradermacher
GGUF Static: https://huggingface.co/mradermacher/MT2-Gen2-IF-gemma-2-MT5MTM-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT5-gemma-2-9B](https://huggingface.co/zelk12/MT5-gemma-2-9B)
* [zelk12/MT-Merge-gemma-2-9B](https://huggingface.co/zelk12/MT-Merge-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT5-gemma-2-9B
- model: zelk12/MT-Merge-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT5-gemma-2-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
zelk12/MT1-Gen2-MMMU-gemma-2-9B | zelk12 | 2024-11-23T13:39:39Z | 5 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT1-Gen2-MM-gemma-2-RAv0.1t0.25Dv1-9B",
"base_model:merge:zelk12/MT1-Gen2-MM-gemma-2-RAv0.1t0.25Dv1-9B",
"base_model:zelk12/MT1-Gen2-MU-gemma-2-Qv1DMv1-9B",
"base_model:merge:zelk12/MT1-Gen2-MU-gemma-2-Qv1DMv1-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-11T16:59:54Z | ---
base_model:
- zelk12/MT1-Gen2-MM-gemma-2-RAv0.1t0.25Dv1-9B
- zelk12/MT1-Gen2-MU-gemma-2-Qv1DMv1-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by @mradermacher
GGUF Static: https://huggingface.co/mradermacher/MT1-Gen2-MMMU-gemma-2-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT1-Gen2-MM-gemma-2-RAv0.1t0.25Dv1-9B](https://huggingface.co/zelk12/MT1-Gen2-MM-gemma-2-RAv0.1t0.25Dv1-9B)
* [zelk12/MT1-Gen2-MU-gemma-2-Qv1DMv1-9B](https://huggingface.co/zelk12/MT1-Gen2-MU-gemma-2-Qv1DMv1-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT1-Gen2-MM-gemma-2-RAv0.1t0.25Dv1-9B
- model: zelk12/MT1-Gen2-MU-gemma-2-Qv1DMv1-9B
merge_method: slerp
base_model: zelk12/MT1-Gen2-MM-gemma-2-RAv0.1t0.25Dv1-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
Nishitbaria/LoRa-Flux-Anime-Style | Nishitbaria | 2024-11-23T13:39:08Z | 932 | 4 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | 2024-11-23T12:32:34Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: >-
A ANMCH A girl at the edge of a lush, mysterious forest. Her long, flowing
hair dances in the gentle breeze, framing a face serene yet filled with
longing. The soft, dappled sunlight highlights the delicate features of her
countenance, accentuating the depth of emotion in her eyes. This exquisite
portrait captures the essence of youth and contemplation, inviting viewers
to be transported into a world of beauty and introspection
output:
url: images/9705468a-0ac1-46a0-97a5-e51c43da41cc.jpeg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: ANMCH
---
# Anime-Style
<Gallery />
## Model description
A LoRA for [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) (text-to-image), released under the FLUX.1 [dev] non-commercial license ([license text](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md)).
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `ANMCH` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('Nishitbaria/LoRa-Flux-Anime-Style', weight_name='lora.safetensors')  # this repo's LoRA weights
image = pipeline('ANMCH, your prompt').images[0]  # include the trigger word ANMCH
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Download model
Weights for this model are available in Safetensors format.
[Download](/Nishitbaria/LoRa-Flux-Anime-Style/tree/main) them in the Files & versions tab.
|
isspek/xlnet-base-cased_ebola_llama_5_2e-5_16_undersampling_0.6 | isspek | 2024-11-23T13:38:46Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-23T13:38:32Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
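Pending the author's own snippet, a minimal sketch under the assumption that this is a standard 🤗 sequence-classification checkpoint with its tokenizer and classification head on the Hub:
```py
from transformers import pipeline

# Assumed usage; labels and intended inputs are not documented in this card.
clf = pipeline(
    "text-classification",
    model="isspek/xlnet-base-cased_ebola_llama_5_2e-5_16_undersampling_0.6",
)
print(clf("Ebola outbreak reported in the region."))
```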
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
isspek/xlnet-base-cased_ebola_llama_3_2e-5_16_undersampling_0.6 | isspek | 2024-11-23T13:37:44Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-23T13:37:31Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zelk12/MT1-Gen2-MM-gemma-2-RAv0.1t0.25Dv1-9B | zelk12 | 2024-11-23T13:37:31Z | 9 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:sam-paech/Delirium-v1",
"base_model:merge:sam-paech/Delirium-v1",
"base_model:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"base_model:merge:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-11T16:35:18Z | ---
base_model:
- zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
- sam-paech/Delirium-v1
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by @mradermacher
GGUF Static: https://huggingface.co/mradermacher/MT1-Gen2-MM-gemma-2-RAv0.1t0.25Dv1-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25)
* [sam-paech/Delirium-v1](https://huggingface.co/sam-paech/Delirium-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
- model: sam-paech/Delirium-v1
merge_method: slerp
base_model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
dtype: bfloat16
parameters:
t: 0.25
```
|
zelk12/MT1-Gen2-MU-gemma-2-Qv1DMv1-9B | zelk12 | 2024-11-23T13:36:51Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:sam-paech/Darkest-muse-v1",
"base_model:merge:sam-paech/Darkest-muse-v1",
"base_model:sam-paech/Quill-v1",
"base_model:merge:sam-paech/Quill-v1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-11T16:27:24Z | ---
base_model:
- sam-paech/Darkest-muse-v1
- sam-paech/Quill-v1
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by @mradermacher
GGUF Static: https://huggingface.co/mradermacher/MT1-Gen2-MU-gemma-2-Qv1DMv1-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [sam-paech/Darkest-muse-v1](https://huggingface.co/sam-paech/Darkest-muse-v1)
* [sam-paech/Quill-v1](https://huggingface.co/sam-paech/Quill-v1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: sam-paech/Quill-v1
- model: sam-paech/Darkest-muse-v1
merge_method: slerp
base_model: sam-paech/Quill-v1
dtype: bfloat16
parameters:
t: 0.25
```
|
kabachuha/gemma-2-2b-it-abl-rudpo-Q4_K_M-GGUF | kabachuha | 2024-11-23T13:34:51Z | 10 | 1 | transformers | [
"transformers",
"gguf",
"conversational",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"base_model:radm/gemma-2-2b-it-abl-rudpo",
"base_model:quantized:radm/gemma-2-2b-it-abl-rudpo",
"license:gemma",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-23T13:34:35Z | ---
license: gemma
library_name: transformers
pipeline_tag: text-generation
tags:
- conversational
- llama-cpp
- gguf-my-repo
base_model: radm/gemma-2-2b-it-abl-rudpo
---
# kabachuha/gemma-2-2b-it-abl-rudpo-Q4_K_M-GGUF
This model was converted to GGUF format from [`radm/gemma-2-2b-it-abl-rudpo`](https://huggingface.co/radm/gemma-2-2b-it-abl-rudpo) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/radm/gemma-2-2b-it-abl-rudpo) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo kabachuha/gemma-2-2b-it-abl-rudpo-Q4_K_M-GGUF --hf-file gemma-2-2b-it-abl-rudpo-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo kabachuha/gemma-2-2b-it-abl-rudpo-Q4_K_M-GGUF --hf-file gemma-2-2b-it-abl-rudpo-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo kabachuha/gemma-2-2b-it-abl-rudpo-Q4_K_M-GGUF --hf-file gemma-2-2b-it-abl-rudpo-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo kabachuha/gemma-2-2b-it-abl-rudpo-Q4_K_M-GGUF --hf-file gemma-2-2b-it-abl-rudpo-q4_k_m.gguf -c 2048
```
|
touhidulislam/BERTweet_retrain_2022_14 | touhidulislam | 2024-11-23T13:33:49Z | 173 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T13:33:20Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2022_14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2022_14
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7233 | 1.0 | 6150 | 2.6205 |
| 2.6665 | 2.0 | 12300 | 2.5542 |
| 2.6632 | 3.0 | 18450 | 2.5398 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
isspek/xlnet-base-cased_ebola_llama_2_2e-5_16_undersampling_0.6 | isspek | 2024-11-23T13:32:59Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-23T13:32:45Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
isspek/xlnet-base-cased_ebola_llama_4_2e-5_16_undersampling_0.6 | isspek | 2024-11-23T13:32:08Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-23T13:31:54Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
zelk12/MT-Gen2-MM-gemma-2-RAv0.1t0.25MT4G1-9B | zelk12 | 2024-11-23T13:31:13Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT4-Gen1-gemma-2-9B",
"base_model:merge:zelk12/MT4-Gen1-gemma-2-9B",
"base_model:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"base_model:merge:zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-10T19:54:58Z | ---
base_model:
- zelk12/MT4-Gen1-gemma-2-9B
- zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by @mradermacher
GGUF Static: https://huggingface.co/mradermacher/MT-Gen2-MM-gemma-2-RAv0.1t0.25MT4G1-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT4-Gen1-gemma-2-9B](https://huggingface.co/zelk12/MT4-Gen1-gemma-2-9B)
* [zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25](https://huggingface.co/zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
- model: zelk12/MT4-Gen1-gemma-2-9B
merge_method: slerp
base_model: zelk12/recoilme-gemma-2-Ataraxy-9B-v0.1-t0.25
dtype: bfloat16
parameters:
t: 0.25
```
|
touhidulislam/BERTweet_retrain_2021_14 | touhidulislam | 2024-11-23T13:31:12Z | 178 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T13:30:51Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2021_14
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2021_14
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5822
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
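These values map directly onto the 🤗 `Trainer` API; a minimal sketch of how such a masked-language-modeling run could be configured (the datasets and output directory are hypothetical placeholders; the Adam betas and epsilon listed above are the `TrainingArguments` defaults):

```python
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("vinai/bertweet-base")
model = AutoModelForMaskedLM.from_pretrained("vinai/bertweet-base")

args = TrainingArguments(
    output_dir="bertweet_retrain",   # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)

# tokenized_train / tokenized_eval are hypothetical pre-tokenized tweet datasets.
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized_train,
    eval_dataset=tokenized_eval,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15),
)
trainer.train()
```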
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7667 | 1.0 | 5953 | 2.6613 |
| 2.7117 | 2.0 | 11906 | 2.6057 |
| 2.6391 | 3.0 | 17859 | 2.5902 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
zelk12/MT-Gen2-MMMA-gemma-2-9B | zelk12 | 2024-11-23T13:28:23Z | 6 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT-Gen2-MA-gemma-2-MT4RAv0.1t0.25-9B",
"base_model:merge:zelk12/MT-Gen2-MA-gemma-2-MT4RAv0.1t0.25-9B",
"base_model:zelk12/MT-Gen2-MM-gemma-2-RAv0.1t0.25MT4G1-9B",
"base_model:merge:zelk12/MT-Gen2-MM-gemma-2-RAv0.1t0.25MT4G1-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-10T20:30:22Z | ---
base_model:
- zelk12/MT-Gen2-MA-gemma-2-MT4RAv0.1t0.25-9B
- zelk12/MT-Gen2-MM-gemma-2-RAv0.1t0.25MT4G1-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by @mradermacher
GGUF Static: https://huggingface.co/mradermacher/MT-Gen2-MMMA-gemma-2-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT-Gen2-MA-gemma-2-MT4RAv0.1t0.25-9B](https://huggingface.co/zelk12/MT-Gen2-MA-gemma-2-MT4RAv0.1t0.25-9B)
* [zelk12/MT-Gen2-MM-gemma-2-RAv0.1t0.25MT4G1-9B](https://huggingface.co/zelk12/MT-Gen2-MM-gemma-2-RAv0.1t0.25MT4G1-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT-Gen2-MM-gemma-2-RAv0.1t0.25MT4G1-9B
- model: zelk12/MT-Gen2-MA-gemma-2-MT4RAv0.1t0.25-9B
merge_method: slerp
base_model: zelk12/MT-Gen2-MM-gemma-2-RAv0.1t0.25MT4G1-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
zelk12/MT-Gen2-GIMMMA-gemma-2-9B | zelk12 | 2024-11-23T13:26:55Z | 7 | 1 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:zelk12/MT-Gen2-GI-gemma-2-9B",
"base_model:merge:zelk12/MT-Gen2-GI-gemma-2-9B",
"base_model:zelk12/MT-Gen2-MMMA-gemma-2-9B",
"base_model:merge:zelk12/MT-Gen2-MMMA-gemma-2-9B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-10T20:38:17Z | ---
base_model:
- zelk12/MT-Gen2-MMMA-gemma-2-9B
- zelk12/MT-Gen2-GI-gemma-2-9B
library_name: transformers
tags:
- mergekit
- merge
---
# Quants
Provided by @mradermacher
GGUF Static: https://huggingface.co/mradermacher/MT-Gen2-GIMMMA-gemma-2-9B-GGUF
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [zelk12/MT-Gen2-MMMA-gemma-2-9B](https://huggingface.co/zelk12/MT-Gen2-MMMA-gemma-2-9B)
* [zelk12/MT-Gen2-GI-gemma-2-9B](https://huggingface.co/zelk12/MT-Gen2-GI-gemma-2-9B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: zelk12/MT-Gen2-GI-gemma-2-9B
- model: zelk12/MT-Gen2-MMMA-gemma-2-9B
merge_method: slerp
base_model: zelk12/MT-Gen2-GI-gemma-2-9B
dtype: bfloat16
parameters:
t: 0.25
```
|
axel-darmouni/paligemma_dataset2 | axel-darmouni | 2024-11-23T13:15:27Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"paligemma",
"image-text-to-text",
"generated_from_trainer",
"base_model:google/paligemma-3b-pt-448",
"base_model:finetune:google/paligemma-3b-pt-448",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2024-11-23T11:59:24Z | ---
library_name: transformers
license: gemma
base_model: google/paligemma-3b-pt-448
tags:
- generated_from_trainer
model-index:
- name: paligemma_dataset2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# paligemma_dataset2
This model is a fine-tuned version of [google/paligemma-3b-pt-448](https://huggingface.co/google/paligemma-3b-pt-448) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: adamw_hf with betas=(0.9,0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2
- num_epochs: 2
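Note how the effective batch size is derived: optimizer updates are applied once every `per_device_train_batch_size × gradient_accumulation_steps = 2 × 4 = 8` examples. A minimal `TrainingArguments` sketch reflecting these values (the output directory is a hypothetical placeholder):

```python
from transformers import TrainingArguments

# Effective batch per optimizer step = 2 (per device) * 4 (accumulation steps) = 8
args = TrainingArguments(
    output_dir="paligemma_dataset2",  # hypothetical output path
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    optim="adamw_hf",
    lr_scheduler_type="linear",
    warmup_steps=2,
    num_train_epochs=2,
    seed=42,
)
```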
### Training results
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu118
- Datasets 3.1.0
- Tokenizers 0.20.3
|
priyankrathore/Flan_T5_chunk | priyankrathore | 2024-11-23T13:07:01Z | 105 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-23T13:06:22Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
touhidulislam/BERTweet_retrain_2022_43 | touhidulislam | 2024-11-23T12:56:38Z | 172 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T12:56:14Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2022_43
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2022_43
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4504
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.8207 | 1.0 | 6104 | 2.5413 |
| 2.4681 | 2.0 | 12208 | 2.4710 |
| 2.7499 | 3.0 | 18312 | 2.4517 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
touhidulislam/BERTweet_retrain_2022_13 | touhidulislam | 2024-11-23T12:56:12Z | 178 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T12:55:49Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2022_13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2022_13
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.9022 | 1.0 | 6108 | 2.6424 |
| 2.5631 | 2.0 | 12216 | 2.5838 |
| 2.6221 | 3.0 | 18324 | 2.5451 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
touhidulislam/BERTweet_retrain_2021_13 | touhidulislam | 2024-11-23T12:55:12Z | 172 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T12:54:51Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2021_13
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2021_13
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5722
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.8384 | 1.0 | 6041 | 2.6546 |
| 2.6996 | 2.0 | 12082 | 2.5974 |
| 2.5276 | 3.0 | 18123 | 2.5779 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
tintnguyen/bert-base-vi-uncased-st | tintnguyen | 2024-11-23T12:54:37Z | 7 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:631587",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:tintnguyen/bert-base-vi-uncased",
"base_model:finetune:tintnguyen/bert-base-vi-uncased",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-11-23T12:54:03Z | ---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:631587
- loss:MultipleNegativesRankingLoss
base_model: tintnguyen/bert-base-vi-uncased
widget:
- source_sentence: thành phố bất động sản đắt nhất
sentences:
- 'Đồng hồ: 18.066 đô la. Paris, kinh đô thời trang của thế giới và thủ đô thực
tế của đất nước Pháp hiện được công nhận là một trong những thành phố đắt đỏ nhất
để mua bất động sản, khi thị trường bất động sản tiếp tục phát triển mạnh với
nền kinh tế Pháp ổn định.'
- Hầu hết các cô gái và chàng trai đeo chiếc nhẫn thuần khiết của họ trên ngón áp
út của họ. Một số cô gái thích đeo nó trên ngón áp út của bàn tay trái, và thay
thế nó bằng chiếc nhẫn cưới của họ. Nhẫn tinh khiết không dành riêng cho ngón
tay đeo nhẫn; bạn có thể mặc nó bất cứ nơi nào thoải mái nhất cho bạn. Một số
thậm chí đeo nhẫn của họ trên một chuỗi như một chiếc vòng cổ. Một.
- 'Michaela Kennedy Cuomo, con gái út của Andrew Cuomo, phải nhập viện một thời
gian ngắn sau khi bất tỉnh: báo cáo. Theo các báo cáo, Michaela Kennedy Cuomo
(bên phải, trong ảnh với cha Thống đốc Cuomo), 17 tuổi, đã phải nhập viện một
thời gian ngắn sau khi được tìm thấy trong tình trạng bất tỉnh trong ngôi nhà
ở Westchester mà cô ở cùng mẹ, Kerry Kennedy, theo báo cáo. (Hình ảnh Spencer
Platt / Getty)'
- source_sentence: núi đọ ở tỉnh nào
sentences:
- Việc phát hiện một số di tích mà tiêu biểu là Núi Đọ vào cuối năm 1960, đã xuất
lộ những công cụ đá thô sơ đầu tiên của con người. Các nhà Khảo cổ học Việt Nam
cùng với Giáo sư Boriskovski đã nghiên cứu và chứng minh rằng ở Núi Đọ từng tồn
tại một nền văn hóa sơ kỳ thời đại đá cũ. Di tích Núi Đọ thuộc địa phận hai xã
Thiệu Tân và Thiệu Khánh, huyện Đông Sơn, Thanh Hóa. Núi Đọ là một quả núi thấp,
sườn núi dốc thoai thoải từ 20 độ đến 25 độ, cao 158 m so với mặt nước biển, nằm
ngay bên bờ hữu ngạn sông Chu; chỗ hợp lưu của hai dòng sông Mã và sông Chu cách
Thành phố Thanh Hóa 7 km về phía bắc - tây bắc.
- 'Virus Zika có thể lây truyền qua muỗi hoặc qua đường tình dục. Hầu hết mọi người
có thể phục hồi hoàn toàn và các triệu chứng virus zika tự hết trong khoảng một
tuần. Đa số các trường hợp không có triệu chứng nhiễm virus Zika. Tuy nhiên, nếu
có thì các biểu hiện virus zika thông thường gồm: Sốt, phát ban, đau khớp, đau
đầu, kết mạc mắt đỏ, đau cơ, cảm giác đau ở lưng. Bệnh virus Zika đặc biệt nguy
hiểm nếu đi từ mẹ sang con. Bà bầu bị nhiễm virus Zika khi mang thai có thể khiến
thai nhi mắc chứng đầu nhỏ. Khi lớn hơn, bé có thể bị suy giảm thị giác, thính
giác, tăng trưởng kém và thậm chí co giật.'
- Ý nghĩa hoa Alstroemeria Astroemeria cứng cáp là một bông hoa tuyệt đẹp kết hợp
nhiều màu sắc khác nhau thành một vẻ đẹp gắn kết. Loài hoa tươi sáng này tượng
trưng cho tình bạn, vẻ đẹp bền lâu của sự cam kết và chăm sóc.
- source_sentence: giáo của người đông sơn có hình gì
sentences:
- Chủ nhân của Văn hóa Đông Sơn đã chế tạo nhiều loại vũ khí từ đồng dùng để đánh
xa, gồm có lao, đầu mũi tên. Để bắn tên tất phải có cung, nỏ bằng gỗ hoặc tre.
Việc phát hiện bộ lẫy nỏ có hộp, có rãnh đặt mũi tên, có nấc để giữ dây nỏ, có
lẫy cong đùng để bóp cò, không còn nguyên vẹn ở làng Vạc, cho thấy việc dùng cung
nỏ của người Đông Sơn rất lợi hại khi săn bắn, chiến tranh là điều có thể tin
được. Giáo hình búp da, hình lá mía. Lao cũng giống như giáo nhưng kích cỡ nhỏ
hơn. Vũ khí đánh gần có dao găm. Dao găm có nhiều kiểu phân biệt dựa vào phần
cán và đốc chắn. Nhiều chiếc dao găm được đúc rất công phu. Chuôi dao đúc hình
tượng người nam hoặc nữ, y phục hoa văn trang sức đẹp đẽ, sống động. Phần cán
dao găm có những chiếc được chạm trổ rất độc đáo với hình tượng động vật như rắn
ngậm chân hổ, hổ ngậm chân voi, hay rắn ngậm chân voi...
- 'Xét nghiệm máu FSH kiểm tra mức độ hormone kích thích nang trứng trong máu. Mức
độ FHS có thể xác định xem các cơ quan sinh dục ở cả nam và nữ có hoạt động bình
thường hay không. Mức độ FHS cũng có thể được kiểm tra để phát hiện các vấn đề
với tuyến yên. Số lượng: * Chỉ số nguyên. Hormone kích thích nang trứng, hoặc
FSH, được sản xuất bởi tuyến yên để kích thích sản xuất và phát triển trứng ở
nữ và tinh trùng ở nam. FSH cũng kích thích sản xuất các hormone khác, bao gồm
testosterone và estrogen.'
- White Lightning là một tàu lượn siêu tốc bằng gỗ tại Fun Spot America ở Orlando,
Florida. Chuyến xe được thiết kế riêng do Great Coasters International sản xuất.
Chuyến đi là tàu lượn bằng gỗ đầu tiên của Orlando.
- source_sentence: bước đầu tiên trong quá trình tổng hợp protein là gì
sentences:
- 'Sáu bước đơn giản để trồng hoa nhài: 1 Hoa nhài phát triển mạnh trong môi trường
ban ngày nóng ẩm. 2 Chọn nơi trồng hoa nhài của bạn. 3 Chuẩn bị đất để trồng bằng
cách làm việc với một lượng lớn vật liệu hữu cơ, chẳng hạn như rêu than bùn hoặc
phân trộn. Bón phân đa dụng mỗi tháng một lần từ tháng 3 đến tháng 11.'
- 'PHẦN A. Đọc phần sau và ghi chú vào giấy của bạn: Tổng hợp protein là quá trình
cơ thể sử dụng để tạo ra protein. Bước đầu tiên của quá trình tổng hợp protein
được gọi là Phiên mã. Nó xảy ra trong nhân. Trong quá trình phiên mã, mRNA phiên
mã (bản sao) DNA. DNA là “được giải nén” và sợi mRNA
sao chép một sợi DNA. Một khi nó làm được điều này, mRNA sẽ rời khỏi nhân và đi
vào tế bào chất. mRNA sau đó sẽ tự gắn vào ribosome.'
- LIÊN KẾT / TRANG WEB THÊM VÀO DANH SÁCH CÔNG VIỆC. danh từ. Skinny jeans là loại
quần denim bó sát với da thường được làm bằng vải co giãn. Một ví dụ về quần jean
bó là những chiếc quần jean bó sát từ eo đến mắt cá chân.
- source_sentence: phương pháp lóc ối bác sĩ sẽ làm gì
sentences:
- Gương mặt thân quen mùa 1 được phát sóng trên kênh VTV3 từ ngày 5 tháng 1 năm
2013 đến 23 tháng 3 năm 2013 với các thí sinh gồm Khởi My, Đại Nghĩa, Thúy Uyên,
Kyo York, Phương Thanh và Chí Thiện. Bộ ba giám khảo chính là nhạc sĩ Đức Huy,
ca sĩ Mỹ Linh và NSƯT Hoài Linh. Người dẫn chương trình mùa này là nghệ sĩ Thanh
Bạch. Sau 10 tuần thi, kết quả chung cuộc giải nhất thuộc về thí sinh Khởi My.
- 'Khi thai phụ đã quá ngày dự sinh, các phương pháp sau đây có thể được bác sĩ
cân nhắc lựa chọn để gây khởi phát chuyển dạ:
Lóc ối: Với biện pháp này, bác sĩ sẽ đeo găng và dùng ngón tay để tách màng ối
ra khỏi thành tử cung.
Phá vỡ túi nước ối: Bác sĩ sẽ tạo một lỗ nhỏ trên túi nước ối để làm vỡ ối, qua
đó kích thích sự chuyển dạ.
Oxytocin: Là một loại thuốc giúp tạo ra các cơn co thắt chuyển dạ, được tiêm theo
đường tĩnh mạch vào cánh tay của thai phụ. Liều lượng có thể được tăng dần theo
thời gian nhưng phải theo dõi cẩn thận.
Các chất tương tự Prostaglandin: Đây là những loại thuốc được đặt trong âm đạo
để làm chín muồi cổ tử cung.
Làm giãn nở cổ tử cung: Bác sĩ có thể đặt ống thông có gắn một quả bong bóng rất
nhỏ vào cuối cổ tử cung của thai phụ. Sau đó, nước sẽ được bơm vào quả bóng. Khi
bóng đã được bơm căng, nó sẽ gây ra tác động áp lực, giúp cổ tử cung mở ra và
quá trình chuyển dạ sẽ bắt đầu.'
- 'Nghĩa quân Tây Sơn ở vào thế bất lợi: phía bắc có quân Trịnh, phía nam còn quân
Nguyễn. Nguyễn Nhạc phải tạm hoà hoãn với quân Trịnh để dồn sức đánh Nguyễn. Từ
năm 1776 đến năm 1783, nghĩa quân Tây Sơn đã bốn lần đánh vào Gia Định.Trong lần
tiến quân năm 1777, Tây Sơn bắt giết được chúa Nguyễn, chỉ còn Nguyễn Ánh chạy
thoát. Chính quyền họ Nguyễn ở Đàng Trong đến đây bị lật đổ.'
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on tintnguyen/bert-base-vi-uncased
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [tintnguyen/bert-base-vi-uncased](https://huggingface.co/tintnguyen/bert-base-vi-uncased). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [tintnguyen/bert-base-vi-uncased](https://huggingface.co/tintnguyen/bert-base-vi-uncased) <!-- at revision 4e50493f19bb3d6599ec112044d6c84f65b733bb -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
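The `Pooling` module above uses mean pooling (`pooling_mode_mean_tokens: True`): token embeddings are averaged under the attention mask so padding positions do not contribute. A minimal sketch of that operation, using the dimensions stated above:

```python
import torch

def mean_pool(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings over real tokens only.

    token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len).
    """
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq_len, 1)
    summed = (token_embeddings * mask).sum(dim=1)    # padding zeroed out, then summed
    counts = mask.sum(dim=1).clamp(min=1e-9)         # real-token count per sentence
    return summed / counts                           # (batch, 768) sentence embeddings
```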
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("tintnguyen/bert-base-vi-uncased-st")
# Run inference
sentences = [
'phương pháp lóc ối bác sĩ sẽ làm gì',
'Khi thai phụ đã quá ngày dự sinh, các phương pháp sau đây có thể được bác sĩ cân nhắc lựa chọn để gây khởi phát chuyển dạ:\nLóc ối: Với biện pháp này, bác sĩ sẽ đeo găng và dùng ngón tay để tách màng ối ra khỏi thành tử cung.\nPhá vỡ túi nước ối: Bác sĩ sẽ tạo một lỗ nhỏ trên túi nước ối để làm vỡ ối, qua đó kích thích sự chuyển dạ.\nOxytocin: Là một loại thuốc giúp tạo ra các cơn co thắt chuyển dạ, được tiêm theo đường tĩnh mạch vào cánh tay của thai phụ. Liều lượng có thể được tăng dần theo thời gian nhưng phải theo dõi cẩn thận.\nCác chất tương tự Prostaglandin: Đây là những loại thuốc được đặt trong âm đạo để làm chín muồi cổ tử cung.\nLàm giãn nở cổ tử cung: Bác sĩ có thể đặt ống thông có gắn một quả bong bóng rất nhỏ vào cuối cổ tử cung của thai phụ. Sau đó, nước sẽ được bơm vào quả bóng. Khi bóng đã được bơm căng, nó sẽ gây ra tác động áp lực, giúp cổ tử cung mở ra và quá trình chuyển dạ sẽ bắt đầu.',
'Nghĩa quân Tây Sơn ở vào thế bất lợi: phía bắc có quân Trịnh, phía nam còn quân Nguyễn. Nguyễn Nhạc phải tạm hoà hoãn với quân Trịnh để dồn sức đánh Nguyễn. Từ năm 1776 đến năm 1783, nghĩa quân Tây Sơn đã bốn lần đánh vào Gia Định.Trong lần tiến quân năm 1777, Tây Sơn bắt giết được chúa Nguyễn, chỉ còn Nguyễn Ánh chạy thoát. Chính quyền họ Nguyễn ở Đàng Trong đến đây bị lật đổ.',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities.shape)
# [3, 3]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 631,587 training samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 1000 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 10.92 tokens</li><li>max: 41 tokens</li></ul> | <ul><li>min: 8 tokens</li><li>mean: 106.79 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:---------------------------------------------------------------------------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>bạn có thể lấy hộ chiếu ở dmv không</code> | <code>Nộp đơn xin Hộ chiếu Hoa Kỳ tại Văn phòng DMV. Xuất bản 27/09/2001 01:53 PM | Cập nhật 24/08/2011 11:05 AM. Bạn có thể nộp đơn xin Hộ chiếu Hoa Kỳ tại một số văn phòng xe cơ giới do các nhân viên quận điều hành. Bạn có thể nộp đơn xin Hộ chiếu Hoa Kỳ tại một số Bưu điện Hoa Kỳ. NYSDMV không cấp hộ chiếu và không thể cung cấp cho bạn thông tin về cách xin hộ chiếu. Liên hệ với Bưu điện Hoa Kỳ hoặc văn phòng thư ký quận được liệt kê trong các trang màu xanh trong danh bạ điện thoại của bạn. NYSDMV không cấp hộ chiếu và không thể cung cấp cho bạn thông tin về cách nộp đơn xin hộ chiếu. Liên hệ với Bưu điện Hoa Kỳ hoặc văn phòng lục sự quận được liệt kê trong các trang màu xanh lam của danh bạ điện thoại của bạn.</code> |
| <code>tổng số người mỹ thiệt mạng trong tất cả các cuộc chiến tranh</code> | <code>1 Con số chính thức của người Mỹ thiệt mạng trong Chiến tranh Cách mạng (4.435) chỉ bằng khoảng 2/3 con số mà các nhà sử học đưa ra. Tổng số người Mỹ thiệt mạng trong Nội chiến được đưa ra là 140.414 - cao hơn khoảng 30.000 so với hầu hết các nhà sử học ước tính. Tôi nghi ngờ (nhưng không biết chắc chắn) rằng sự gia tăng có thể đến từ việc DoD tính số người chết trong tù binh là chiến đấu hơn là bệnh tật.</code> |
| <code>lý thuyết vụ nổ lớn được quay ở đâu</code> | <code>Thuyết Vụ nổ lớn được ghi hình tại Warner Bros. Studios ở Burbank, California. Bạn phải từ mười tám tuổi trở lên để tham gia buổi ghi hình Thuyết Vụ nổ lớn. Số lượng vé có hạn và nhu cầu rất cao. Vé được bán miễn phí trên TVTickets.com. Phần thứ 10 của Big Bang Theory hiện đang được sản xuất.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
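`MultipleNegativesRankingLoss` treats every other positive in the batch as a negative for a given anchor: with `"similarity_fct": "cos_sim"` and `"scale": 20.0`, it reduces to a cross-entropy over the scaled cosine-similarity matrix whose correct labels sit on the diagonal. A minimal sketch of the computation (not the library's exact code):

```python
import torch
import torch.nn.functional as F

def mnr_loss(anchors: torch.Tensor, positives: torch.Tensor, scale: float = 20.0) -> torch.Tensor:
    """In-batch-negatives ranking loss over cosine similarities."""
    a = F.normalize(anchors, dim=-1)
    p = F.normalize(positives, dim=-1)
    scores = scale * (a @ p.T)                # (batch, batch) scaled cosine similarities
    labels = torch.arange(scores.size(0), device=scores.device)  # i-th positive matches i-th anchor
    return F.cross_entropy(scores, labels)
```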
### Evaluation Dataset
#### Unnamed Dataset
* Size: 300 evaluation samples
* Columns: <code>anchor</code> and <code>positive</code>
* Approximate statistics based on the first 300 samples:
| | anchor | positive |
|:--------|:----------------------------------------------------------------------------------|:------------------------------------------------------------------------------------|
| type | string | string |
| details | <ul><li>min: 5 tokens</li><li>mean: 11.02 tokens</li><li>max: 43 tokens</li></ul> | <ul><li>min: 4 tokens</li><li>mean: 108.76 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor | positive |
|:--------------------------------------------------------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| <code>lúa mạch đen được dùng làm gì và ăn ở pháp</code> | <code>Tiêu dùng và sử dụng của con người. Như đã nói trước đó, hầu hết lúa mạch đen được trồng ở châu Âu là để làm bánh mì. Canada có một số lượng hạn chế hạt lúa mạch đen được sử dụng để chưng cất và sử dụng thực phẩm, và ở Mỹ khoảng một nửa lúa mạch đen làm ngũ cốc được sử dụng cho những mục đích này (7, 8). Hạt lúa mạch đen được sử dụng để làm bột mì, thức ăn gia súc, hoặc bia. Nó cũng có thể được ăn toàn bộ, dưới dạng quả mọng lúa mạch đen luộc, hoặc bằng cách cuộn lại, tương tự như yến mạch cán (10).</code> |
| <code>kỳ hạm của hải quân chúng tôi là gì</code> | <code>USS Hiến pháp là kỳ hạm truyền thống của Hải quân Hoa Kỳ. Và nó vẫn đang được thực hiện và được điều khiển bởi một thủy thủ đoàn Hải quân Hoa Kỳ. USS Constellation CVN-64 được Tổng thống Ronald Reagan đặt cho biệt danh là Chiến hạm của Hoa Kỳ. Nó hiện đã ngừng hoạt động và được thay thế trong hạm đội bằng tàu sân bay USS Ronald Reagan CVN-76.</code> |
| <code>cửa sổ kính lớn nhất</code> | <code>Cửa sổ kính màu lớn nhất thế giới. Cửa sổ kính màu lớn nhất thế giới thực sự nằm trong lăng mộ tại Nghĩa trang Phục sinh ở Công lý. Nó chứa 2.448 tấm và rộng 22.381 feet vuông. Cũng cần lưu ý thêm, nó chỉ cách nhà máy cải tạo nước Stickney, cơ sở xử lý nước thải lớn nhất (vào tháng 7 và cả tháng 8, là cơ sở xử lý nước thải bốc mùi nhất trên thế giới, vài dặm ngắn) chỉ vài dặm ngắn.</code> |
* Loss: [<code>MultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#multiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim"
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `learning_rate`: 2e-05
- `num_train_epochs`: 2
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 2e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: False
- `hub_always_push`: False
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `dispatch_batches`: None
- `split_batches`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
</details>
### Training Logs
<details><summary>Click to expand</summary>
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:-----:|:-------------:|:---------------:|
| 0.0025 | 100 | 1.6877 | - |
| 0.0051 | 200 | 1.259 | - |
| 0.0076 | 300 | 0.6289 | - |
| 0.0101 | 400 | 0.3623 | - |
| 0.0127 | 500 | 0.3353 | - |
| 0.0152 | 600 | 0.2671 | - |
| 0.0177 | 700 | 0.1881 | - |
| 0.0203 | 800 | 0.179 | - |
| 0.0228 | 900 | 0.1611 | - |
| 0.0253 | 1000 | 0.1587 | 0.1468 |
| 0.0279 | 1100 | 0.141 | - |
| 0.0304 | 1200 | 0.13 | - |
| 0.0329 | 1300 | 0.1257 | - |
| 0.0355 | 1400 | 0.1153 | - |
| 0.0380 | 1500 | 0.1341 | - |
| 0.0405 | 1600 | 0.1227 | - |
| 0.0431 | 1700 | 0.1024 | - |
| 0.0456 | 1800 | 0.0818 | - |
| 0.0481 | 1900 | 0.1069 | - |
| 0.0507 | 2000 | 0.0831 | 0.0978 |
| 0.0532 | 2100 | 0.1035 | - |
| 0.0557 | 2200 | 0.0949 | - |
| 0.0583 | 2300 | 0.1037 | - |
| 0.0608 | 2400 | 0.0894 | - |
| 0.0633 | 2500 | 0.0831 | - |
| 0.0659 | 2600 | 0.1085 | - |
| 0.0684 | 2700 | 0.0815 | - |
| 0.0709 | 2800 | 0.071 | - |
| 0.0735 | 2900 | 0.0889 | - |
| 0.0760 | 3000 | 0.0832 | 0.0704 |
| 0.0785 | 3100 | 0.0992 | - |
| 0.0811 | 3200 | 0.0733 | - |
| 0.0836 | 3300 | 0.0878 | - |
| 0.0861 | 3400 | 0.0757 | - |
| 0.0887 | 3500 | 0.0476 | - |
| 0.0912 | 3600 | 0.0741 | - |
| 0.0937 | 3700 | 0.0766 | - |
| 0.0963 | 3800 | 0.0736 | - |
| 0.0988 | 3900 | 0.0673 | - |
| 0.1013 | 4000 | 0.0718 | 0.0566 |
| 0.1039 | 4100 | 0.0649 | - |
| 0.1064 | 4200 | 0.0767 | - |
| 0.1089 | 4300 | 0.073 | - |
| 0.1115 | 4400 | 0.0745 | - |
| 0.1140 | 4500 | 0.0692 | - |
| 0.1165 | 4600 | 0.0652 | - |
| 0.1191 | 4700 | 0.077 | - |
| 0.1216 | 4800 | 0.0749 | - |
| 0.1241 | 4900 | 0.0493 | - |
| 0.1267 | 5000 | 0.0653 | 0.0533 |
| 0.1292 | 5100 | 0.073 | - |
| 0.1317 | 5200 | 0.0652 | - |
| 0.1343 | 5300 | 0.0639 | - |
| 0.1368 | 5400 | 0.0549 | - |
| 0.1393 | 5500 | 0.0731 | - |
| 0.1419 | 5600 | 0.0832 | - |
| 0.1444 | 5700 | 0.0687 | - |
| 0.1469 | 5800 | 0.0711 | - |
| 0.1495 | 5900 | 0.0709 | - |
| 0.1520 | 6000 | 0.0547 | 0.0626 |
| 0.1545 | 6100 | 0.084 | - |
| 0.1571 | 6200 | 0.0743 | - |
| 0.1596 | 6300 | 0.0706 | - |
| 0.1621 | 6400 | 0.0664 | - |
| 0.1647 | 6500 | 0.0682 | - |
| 0.1672 | 6600 | 0.0534 | - |
| 0.1697 | 6700 | 0.0642 | - |
| 0.1723 | 6800 | 0.0624 | - |
| 0.1748 | 6900 | 0.0648 | - |
| 0.1773 | 7000 | 0.0697 | 0.0509 |
| 0.1799 | 7100 | 0.0784 | - |
| 0.1824 | 7200 | 0.0871 | - |
| 0.1849 | 7300 | 0.0711 | - |
| 0.1875 | 7400 | 0.0718 | - |
| 0.1900 | 7500 | 0.0543 | - |
| 0.1925 | 7600 | 0.0676 | - |
| 0.1951 | 7700 | 0.0724 | - |
| 0.1976 | 7800 | 0.0579 | - |
| 0.2001 | 7900 | 0.0781 | - |
| 0.2027 | 8000 | 0.0909 | 0.0736 |
| 0.2052 | 8100 | 0.0653 | - |
| 0.2077 | 8200 | 0.0535 | - |
| 0.2103 | 8300 | 0.0801 | - |
| 0.2128 | 8400 | 0.0794 | - |
| 0.2153 | 8500 | 0.0615 | - |
| 0.2179 | 8600 | 0.0646 | - |
| 0.2204 | 8700 | 0.0497 | - |
| 0.2229 | 8800 | 0.06 | - |
| 0.2255 | 8900 | 0.0495 | - |
| 0.2280 | 9000 | 0.0685 | 0.0450 |
| 0.2305 | 9100 | 0.0606 | - |
| 0.2331 | 9200 | 0.0577 | - |
| 0.2356 | 9300 | 0.0464 | - |
| 0.2381 | 9400 | 0.0622 | - |
| 0.2407 | 9500 | 0.0567 | - |
| 0.2432 | 9600 | 0.0545 | - |
| 0.2457 | 9700 | 0.0455 | - |
| 0.2483 | 9800 | 0.0642 | - |
| 0.2508 | 9900 | 0.0612 | - |
| 0.2533 | 10000 | 0.0658 | 0.0310 |
| 0.2559 | 10100 | 0.0618 | - |
| 0.2584 | 10200 | 0.052 | - |
| 0.2609 | 10300 | 0.0504 | - |
| 0.2635 | 10400 | 0.0593 | - |
| 0.2660 | 10500 | 0.0534 | - |
| 0.2685 | 10600 | 0.0555 | - |
| 0.2711 | 10700 | 0.0583 | - |
| 0.2736 | 10800 | 0.0472 | - |
| 0.2761 | 10900 | 0.0591 | - |
| 0.2787 | 11000 | 0.039 | 0.0300 |
| 0.2812 | 11100 | 0.0446 | - |
| 0.2837 | 11200 | 0.0375 | - |
| 0.2863 | 11300 | 0.0515 | - |
| 0.2888 | 11400 | 0.0577 | - |
| 0.2913 | 11500 | 0.046 | - |
| 0.2939 | 11600 | 0.0518 | - |
| 0.2964 | 11700 | 0.055 | - |
| 0.2989 | 11800 | 0.0492 | - |
| 0.3015 | 11900 | 0.0513 | - |
| 0.3040 | 12000 | 0.0442 | 0.0278 |
| 0.3065 | 12100 | 0.0675 | - |
| 0.3091 | 12200 | 0.0526 | - |
| 0.3116 | 12300 | 0.0688 | - |
| 0.3141 | 12400 | 0.0589 | - |
| 0.3167 | 12500 | 0.0602 | - |
| 0.3192 | 12600 | 0.0551 | - |
| 0.3217 | 12700 | 0.0681 | - |
| 0.3243 | 12800 | 0.0522 | - |
| 0.3268 | 12900 | 0.047 | - |
| 0.3293 | 13000 | 0.0376 | 0.0282 |
| 0.3319 | 13100 | 0.0396 | - |
| 0.3344 | 13200 | 0.0467 | - |
| 0.3369 | 13300 | 0.0498 | - |
| 0.3395 | 13400 | 0.0402 | - |
| 0.3420 | 13500 | 0.0398 | - |
| 0.3445 | 13600 | 0.041 | - |
| 0.3471 | 13700 | 0.0516 | - |
| 0.3496 | 13800 | 0.0518 | - |
| 0.3521 | 13900 | 0.0413 | - |
| 0.3547 | 14000 | 0.0463 | 0.0199 |
| 0.3572 | 14100 | 0.0442 | - |
| 0.3597 | 14200 | 0.0695 | - |
| 0.3623 | 14300 | 0.0595 | - |
| 0.3648 | 14400 | 0.0435 | - |
| 0.3673 | 14500 | 0.0372 | - |
| 0.3699 | 14600 | 0.0398 | - |
| 0.3724 | 14700 | 0.0357 | - |
| 0.3749 | 14800 | 0.0467 | - |
| 0.3775 | 14900 | 0.0611 | - |
| 0.3800 | 15000 | 0.054 | 0.0233 |
| 0.3825 | 15100 | 0.0411 | - |
| 0.3851 | 15200 | 0.0485 | - |
| 0.3876 | 15300 | 0.0388 | - |
| 0.3901 | 15400 | 0.0474 | - |
| 0.3927 | 15500 | 0.0525 | - |
| 0.3952 | 15600 | 0.0568 | - |
| 0.3977 | 15700 | 0.0414 | - |
| 0.4003 | 15800 | 0.0375 | - |
| 0.4028 | 15900 | 0.0606 | - |
| 0.4053 | 16000 | 0.0495 | 0.0238 |
| 0.4079 | 16100 | 0.0407 | - |
| 0.4104 | 16200 | 0.0383 | - |
| 0.4129 | 16300 | 0.0318 | - |
| 0.4155 | 16400 | 0.0503 | - |
| 0.4180 | 16500 | 0.0386 | - |
| 0.4205 | 16600 | 0.0397 | - |
| 0.4231 | 16700 | 0.0409 | - |
| 0.4256 | 16800 | 0.0484 | - |
| 0.4281 | 16900 | 0.0514 | - |
| 0.4307 | 17000 | 0.0359 | 0.0216 |
| 0.4332 | 17100 | 0.0411 | - |
| 0.4357 | 17200 | 0.0372 | - |
| 0.4383 | 17300 | 0.0489 | - |
| 0.4408 | 17400 | 0.0364 | - |
| 0.4433 | 17500 | 0.0517 | - |
| 0.4459 | 17600 | 0.0422 | - |
| 0.4484 | 17700 | 0.0334 | - |
| 0.4509 | 17800 | 0.0532 | - |
| 0.4535 | 17900 | 0.0384 | - |
| 0.4560 | 18000 | 0.03 | 0.0200 |
| 0.4585 | 18100 | 0.034 | - |
| 0.4611 | 18200 | 0.0429 | - |
| 0.4636 | 18300 | 0.0448 | - |
| 0.4661 | 18400 | 0.03 | - |
| 0.4687 | 18500 | 0.0338 | - |
| 0.4712 | 18600 | 0.0436 | - |
| 0.4737 | 18700 | 0.0271 | - |
| 0.4763 | 18800 | 0.0516 | - |
| 0.4788 | 18900 | 0.0358 | - |
| 0.4813 | 19000 | 0.046 | 0.0255 |
| 0.4839 | 19100 | 0.0367 | - |
| 0.4864 | 19200 | 0.032 | - |
| 0.4889 | 19300 | 0.0363 | - |
| 0.4915 | 19400 | 0.0352 | - |
| 0.4940 | 19500 | 0.041 | - |
| 0.4965 | 19600 | 0.0508 | - |
| 0.4991 | 19700 | 0.0454 | - |
| 0.5016 | 19800 | 0.0459 | - |
| 0.5041 | 19900 | 0.0295 | - |
| 0.5066 | 20000 | 0.0415 | 0.0228 |
| 0.5092 | 20100 | 0.0422 | - |
| 0.5117 | 20200 | 0.0317 | - |
| 0.5142 | 20300 | 0.0263 | - |
| 0.5168 | 20400 | 0.0568 | - |
| 0.5193 | 20500 | 0.0339 | - |
| 0.5218 | 20600 | 0.0295 | - |
| 0.5244 | 20700 | 0.042 | - |
| 0.5269 | 20800 | 0.0343 | - |
| 0.5294 | 20900 | 0.0322 | - |
| 0.5320 | 21000 | 0.0328 | 0.0204 |
| 0.5345 | 21100 | 0.0407 | - |
| 0.5370 | 21200 | 0.0306 | - |
| 0.5396 | 21300 | 0.0295 | - |
| 0.5421 | 21400 | 0.0329 | - |
| 0.5446 | 21500 | 0.0297 | - |
| 0.5472 | 21600 | 0.0298 | - |
| 0.5497 | 21700 | 0.0261 | - |
| 0.5522 | 21800 | 0.0429 | - |
| 0.5548 | 21900 | 0.039 | - |
| 0.5573 | 22000 | 0.0336 | 0.0151 |
| 0.5598 | 22100 | 0.0417 | - |
| 0.5624 | 22200 | 0.0424 | - |
| 0.5649 | 22300 | 0.0447 | - |
| 0.5674 | 22400 | 0.0482 | - |
| 0.5700 | 22500 | 0.0253 | - |
| 0.5725 | 22600 | 0.0412 | - |
| 0.5750 | 22700 | 0.0425 | - |
| 0.5776 | 22800 | 0.0304 | - |
| 0.5801 | 22900 | 0.0302 | - |
| 0.5826 | 23000 | 0.0275 | 0.0144 |
| 0.5852 | 23100 | 0.0255 | - |
| 0.5877 | 23200 | 0.0266 | - |
| 0.5902 | 23300 | 0.038 | - |
| 0.5928 | 23400 | 0.0254 | - |
| 0.5953 | 23500 | 0.0486 | - |
| 0.5978 | 23600 | 0.0325 | - |
| 0.6004 | 23700 | 0.041 | - |
| 0.6029 | 23800 | 0.0307 | - |
| 0.6054 | 23900 | 0.037 | - |
| 0.6080 | 24000 | 0.0377 | 0.0194 |
| 0.6105 | 24100 | 0.0331 | - |
| 0.6130 | 24200 | 0.0386 | - |
| 0.6156 | 24300 | 0.0184 | - |
| 0.6181 | 24400 | 0.0244 | - |
| 0.6206 | 24500 | 0.0279 | - |
| 0.6232 | 24600 | 0.0351 | - |
| 0.6257 | 24700 | 0.0577 | - |
| 0.6282 | 24800 | 0.0434 | - |
| 0.6308 | 24900 | 0.0223 | - |
| 0.6333 | 25000 | 0.0264 | 0.0151 |
| 0.6358 | 25100 | 0.0378 | - |
| 0.6384 | 25200 | 0.0212 | - |
| 0.6409 | 25300 | 0.0245 | - |
| 0.6434 | 25400 | 0.0321 | - |
| 0.6460 | 25500 | 0.0391 | - |
| 0.6485 | 25600 | 0.0276 | - |
| 0.6510 | 25700 | 0.0253 | - |
| 0.6536 | 25800 | 0.0295 | - |
| 0.6561 | 25900 | 0.0225 | - |
| 0.6586 | 26000 | 0.0312 | 0.0133 |
| 0.6612 | 26100 | 0.0367 | - |
| 0.6637 | 26200 | 0.029 | - |
| 0.6662 | 26300 | 0.0311 | - |
| 0.6688 | 26400 | 0.0383 | - |
| 0.6713 | 26500 | 0.0357 | - |
| 0.6738 | 26600 | 0.0259 | - |
| 0.6764 | 26700 | 0.0277 | - |
| 0.6789 | 26800 | 0.0278 | - |
| 0.6814 | 26900 | 0.0242 | - |
| 0.6840 | 27000 | 0.0288 | 0.0183 |
| 0.6865 | 27100 | 0.0352 | - |
| 0.6890 | 27200 | 0.0298 | - |
| 0.6916 | 27300 | 0.0448 | - |
| 0.6941 | 27400 | 0.0299 | - |
| 0.6966 | 27500 | 0.0385 | - |
| 0.6992 | 27600 | 0.0365 | - |
| 0.7017 | 27700 | 0.022 | - |
| 0.7042 | 27800 | 0.0339 | - |
| 0.7068 | 27900 | 0.0371 | - |
| 0.7093 | 28000 | 0.0322 | 0.0183 |
| 0.7118 | 28100 | 0.0365 | - |
| 0.7144 | 28200 | 0.0271 | - |
| 0.7169 | 28300 | 0.0238 | - |
| 0.7194 | 28400 | 0.033 | - |
| 0.7220 | 28500 | 0.0225 | - |
| 0.7245 | 28600 | 0.022 | - |
| 0.7270 | 28700 | 0.0132 | - |
| 0.7296 | 28800 | 0.0304 | - |
| 0.7321 | 28900 | 0.0357 | - |
| 0.7346 | 29000 | 0.025 | 0.0149 |
| 0.7372 | 29100 | 0.0251 | - |
| 0.7397 | 29200 | 0.0238 | - |
| 0.7422 | 29300 | 0.0337 | - |
| 0.7448 | 29400 | 0.0277 | - |
| 0.7473 | 29500 | 0.02 | - |
| 0.7498 | 29600 | 0.0216 | - |
| 0.7524 | 29700 | 0.0203 | - |
| 0.7549 | 29800 | 0.0216 | - |
| 0.7574 | 29900 | 0.0317 | - |
| 0.7600 | 30000 | 0.0274 | 0.0116 |
| 0.7625 | 30100 | 0.0284 | - |
| 0.7650 | 30200 | 0.0407 | - |
| 0.7676 | 30300 | 0.0326 | - |
| 0.7701 | 30400 | 0.0207 | - |
| 0.7726 | 30500 | 0.0284 | - |
| 0.7752 | 30600 | 0.0386 | - |
| 0.7777 | 30700 | 0.031 | - |
| 0.7802 | 30800 | 0.0215 | - |
| 0.7828 | 30900 | 0.0243 | - |
| 0.7853 | 31000 | 0.0248 | 0.0132 |
| 0.7878 | 31100 | 0.0366 | - |
| 0.7904 | 31200 | 0.0248 | - |
| 0.7929 | 31300 | 0.0336 | - |
| 0.7954 | 31400 | 0.0316 | - |
| 0.7980 | 31500 | 0.0252 | - |
| 0.8005 | 31600 | 0.0236 | - |
| 0.8030 | 31700 | 0.0277 | - |
| 0.8056 | 31800 | 0.0256 | - |
| 0.8081 | 31900 | 0.0462 | - |
| 0.8106 | 32000 | 0.0322 | 0.0155 |
| 0.8132 | 32100 | 0.0159 | - |
| 0.8157 | 32200 | 0.0216 | - |
| 0.8182 | 32300 | 0.018 | - |
| 0.8208 | 32400 | 0.0232 | - |
| 0.8233 | 32500 | 0.024 | - |
| 0.8258 | 32600 | 0.0254 | - |
| 0.8284 | 32700 | 0.0334 | - |
| 0.8309 | 32800 | 0.0204 | - |
| 0.8334 | 32900 | 0.0352 | - |
| 0.8360 | 33000 | 0.024 | 0.0180 |
| 0.8385 | 33100 | 0.0368 | - |
| 0.8410 | 33200 | 0.0243 | - |
| 0.8436 | 33300 | 0.0196 | - |
| 0.8461 | 33400 | 0.0264 | - |
| 0.8486 | 33500 | 0.026 | - |
| 0.8512 | 33600 | 0.0201 | - |
| 0.8537 | 33700 | 0.0245 | - |
| 0.8562 | 33800 | 0.0205 | - |
| 0.8588 | 33900 | 0.0244 | - |
| 0.8613 | 34000 | 0.0174 | 0.0211 |
| 0.8638 | 34100 | 0.019 | - |
| 0.8664 | 34200 | 0.031 | - |
| 0.8689 | 34300 | 0.0257 | - |
| 0.8714 | 34400 | 0.0195 | - |
| 0.8740 | 34500 | 0.0274 | - |
| 0.8765 | 34600 | 0.0197 | - |
| 0.8790 | 34700 | 0.0154 | - |
| 0.8816 | 34800 | 0.0233 | - |
| 0.8841 | 34900 | 0.0314 | - |
| 0.8866 | 35000 | 0.0223 | 0.0172 |
| 0.8892 | 35100 | 0.0264 | - |
| 0.8917 | 35200 | 0.0214 | - |
| 0.8942 | 35300 | 0.0264 | - |
| 0.8968 | 35400 | 0.0194 | - |
| 0.8993 | 35500 | 0.0221 | - |
| 0.9018 | 35600 | 0.0185 | - |
| 0.9044 | 35700 | 0.029 | - |
| 0.9069 | 35800 | 0.0188 | - |
| 0.9094 | 35900 | 0.0407 | - |
| 0.9120 | 36000 | 0.0251 | 0.0188 |
| 0.9145 | 36100 | 0.0295 | - |
| 0.9170 | 36200 | 0.0233 | - |
| 0.9196 | 36300 | 0.0265 | - |
| 0.9221 | 36400 | 0.027 | - |
| 0.9246 | 36500 | 0.022 | - |
| 0.9272 | 36600 | 0.0174 | - |
| 0.9297 | 36700 | 0.0204 | - |
| 0.9322 | 36800 | 0.0314 | - |
| 0.9348 | 36900 | 0.0256 | - |
| 0.9373 | 37000 | 0.0139 | 0.0129 |
| 0.9398 | 37100 | 0.0237 | - |
| 0.9424 | 37200 | 0.0235 | - |
| 0.9449 | 37300 | 0.0202 | - |
| 0.9474 | 37400 | 0.0178 | - |
| 0.9500 | 37500 | 0.0225 | - |
| 0.9525 | 37600 | 0.0224 | - |
| 0.9550 | 37700 | 0.0259 | - |
| 0.9576 | 37800 | 0.0215 | - |
| 0.9601 | 37900 | 0.0197 | - |
| 0.9626 | 38000 | 0.0208 | 0.0108 |
| 0.9652 | 38100 | 0.0296 | - |
| 0.9677 | 38200 | 0.019 | - |
| 0.9702 | 38300 | 0.0185 | - |
| 0.9728 | 38400 | 0.0271 | - |
| 0.9753 | 38500 | 0.0336 | - |
| 0.9778 | 38600 | 0.0209 | - |
| 0.9804 | 38700 | 0.0321 | - |
| 0.9829 | 38800 | 0.0138 | - |
| 0.9854 | 38900 | 0.0185 | - |
| 0.9880 | 39000 | 0.0226 | 0.0119 |
| 0.9905 | 39100 | 0.0201 | - |
| 0.9930 | 39200 | 0.0183 | - |
| 0.9956 | 39300 | 0.0253 | - |
| 0.9981 | 39400 | 0.0304 | - |
| 1.0006 | 39500 | 0.0163 | - |
| 1.0032 | 39600 | 0.0291 | - |
| 1.0057 | 39700 | 0.0202 | - |
| 1.0082 | 39800 | 0.0125 | - |
| 1.0108 | 39900 | 0.0171 | - |
| 1.0133 | 40000 | 0.0159 | 0.0169 |
| 1.0158 | 40100 | 0.0188 | - |
| 1.0184 | 40200 | 0.024 | - |
| 1.0209 | 40300 | 0.0269 | - |
| 1.0234 | 40400 | 0.0286 | - |
| 1.0260 | 40500 | 0.0194 | - |
| 1.0285 | 40600 | 0.0174 | - |
| 1.0310 | 40700 | 0.0241 | - |
| 1.0336 | 40800 | 0.0198 | - |
| 1.0361 | 40900 | 0.0214 | - |
| 1.0386 | 41000 | 0.0182 | 0.0138 |
| 1.0412 | 41100 | 0.0148 | - |
| 1.0437 | 41200 | 0.0161 | - |
| 1.0462 | 41300 | 0.0234 | - |
| 1.0488 | 41400 | 0.0177 | - |
| 1.0513 | 41500 | 0.0105 | - |
| 1.0538 | 41600 | 0.0201 | - |
| 1.0564 | 41700 | 0.0211 | - |
| 1.0589 | 41800 | 0.0157 | - |
| 1.0614 | 41900 | 0.0164 | - |
| 1.0640 | 42000 | 0.0146 | 0.0080 |
| 1.0665 | 42100 | 0.0223 | - |
| 1.0690 | 42200 | 0.0269 | - |
| 1.0716 | 42300 | 0.0218 | - |
| 1.0741 | 42400 | 0.0294 | - |
| 1.0766 | 42500 | 0.0166 | - |
| 1.0792 | 42600 | 0.0173 | - |
| 1.0817 | 42700 | 0.015 | - |
| 1.0842 | 42800 | 0.015 | - |
| 1.0868 | 42900 | 0.0166 | - |
| 1.0893 | 43000 | 0.0123 | 0.0088 |
| 1.0918 | 43100 | 0.0137 | - |
| 1.0944 | 43200 | 0.01 | - |
| 1.0969 | 43300 | 0.0156 | - |
| 1.0994 | 43400 | 0.0126 | - |
| 1.1020 | 43500 | 0.0197 | - |
| 1.1045 | 43600 | 0.014 | - |
| 1.1070 | 43700 | 0.0154 | - |
| 1.1096 | 43800 | 0.0214 | - |
| 1.1121 | 43900 | 0.0157 | - |
| 1.1146 | 44000 | 0.0151 | 0.0093 |
| 1.1172 | 44100 | 0.014 | - |
| 1.1197 | 44200 | 0.0138 | - |
| 1.1222 | 44300 | 0.0126 | - |
| 1.1248 | 44400 | 0.0084 | - |
| 1.1273 | 44500 | 0.0124 | - |
| 1.1298 | 44600 | 0.0117 | - |
| 1.1324 | 44700 | 0.0098 | - |
| 1.1349 | 44800 | 0.0099 | - |
| 1.1374 | 44900 | 0.0115 | - |
| 1.1400 | 45000 | 0.0188 | 0.0051 |
| 1.1425 | 45100 | 0.0129 | - |
| 1.1450 | 45200 | 0.0128 | - |
| 1.1476 | 45300 | 0.015 | - |
| 1.1501 | 45400 | 0.0106 | - |
| 1.1526 | 45500 | 0.0115 | - |
| 1.1552 | 45600 | 0.0144 | - |
| 1.1577 | 45700 | 0.0144 | - |
| 1.1602 | 45800 | 0.0078 | - |
| 1.1628 | 45900 | 0.0143 | - |
| 1.1653 | 46000 | 0.0122 | 0.0089 |
| 1.1678 | 46100 | 0.0059 | - |
| 1.1704 | 46200 | 0.0119 | - |
| 1.1729 | 46300 | 0.0103 | - |
| 1.1754 | 46400 | 0.0083 | - |
| 1.1780 | 46500 | 0.0148 | - |
| 1.1805 | 46600 | 0.0097 | - |
| 1.1830 | 46700 | 0.0067 | - |
| 1.1856 | 46800 | 0.0116 | - |
| 1.1881 | 46900 | 0.0124 | - |
| 1.1906 | 47000 | 0.0063 | 0.0125 |
| 1.1932 | 47100 | 0.007 | - |
| 1.1957 | 47200 | 0.0095 | - |
| 1.1982 | 47300 | 0.0072 | - |
| 1.2008 | 47400 | 0.0124 | - |
| 1.2033 | 47500 | 0.0109 | - |
| 1.2058 | 47600 | 0.0108 | - |
| 1.2084 | 47700 | 0.0057 | - |
| 1.2109 | 47800 | 0.0133 | - |
| 1.2134 | 47900 | 0.0095 | - |
| 1.2160 | 48000 | 0.0057 | 0.0107 |
| 1.2185 | 48100 | 0.0085 | - |
| 1.2210 | 48200 | 0.0037 | - |
| 1.2236 | 48300 | 0.0077 | - |
| 1.2261 | 48400 | 0.0128 | - |
| 1.2286 | 48500 | 0.0124 | - |
| 1.2312 | 48600 | 0.0081 | - |
| 1.2337 | 48700 | 0.008 | - |
| 1.2362 | 48800 | 0.0051 | - |
| 1.2388 | 48900 | 0.0101 | - |
| 1.2413 | 49000 | 0.0059 | 0.0124 |
| 1.2438 | 49100 | 0.0063 | - |
| 1.2464 | 49200 | 0.0075 | - |
| 1.2489 | 49300 | 0.0064 | - |
| 1.2514 | 49400 | 0.0065 | - |
| 1.2540 | 49500 | 0.0056 | - |
| 1.2565 | 49600 | 0.0098 | - |
| 1.2590 | 49700 | 0.0062 | - |
| 1.2616 | 49800 | 0.0067 | - |
| 1.2641 | 49900 | 0.0046 | - |
| 1.2666 | 50000 | 0.0088 | 0.0114 |
| 1.2692 | 50100 | 0.005 | - |
| 1.2717 | 50200 | 0.0083 | - |
| 1.2742 | 50300 | 0.0073 | - |
| 1.2768 | 50400 | 0.0084 | - |
| 1.2793 | 50500 | 0.0044 | - |
| 1.2818 | 50600 | 0.0052 | - |
| 1.2844 | 50700 | 0.0045 | - |
| 1.2869 | 50800 | 0.0085 | - |
| 1.2894 | 50900 | 0.0057 | - |
| 1.2920 | 51000 | 0.0048 | 0.0111 |
| 1.2945 | 51100 | 0.0059 | - |
| 1.2970 | 51200 | 0.0065 | - |
| 1.2996 | 51300 | 0.0057 | - |
| 1.3021 | 51400 | 0.0059 | - |
| 1.3046 | 51500 | 0.0056 | - |
| 1.3072 | 51600 | 0.0124 | - |
| 1.3097 | 51700 | 0.0067 | - |
| 1.3122 | 51800 | 0.011 | - |
| 1.3148 | 51900 | 0.0078 | - |
| 1.3173 | 52000 | 0.0068 | 0.0110 |
| 1.3198 | 52100 | 0.006 | - |
| 1.3224 | 52200 | 0.0084 | - |
| 1.3249 | 52300 | 0.0064 | - |
| 1.3274 | 52400 | 0.0055 | - |
| 1.3300 | 52500 | 0.0032 | - |
| 1.3325 | 52600 | 0.0049 | - |
| 1.3350 | 52700 | 0.0068 | - |
| 1.3376 | 52800 | 0.0067 | - |
| 1.3401 | 52900 | 0.006 | - |
| 1.3426 | 53000 | 0.0058 | 0.0098 |
| 1.3452 | 53100 | 0.0046 | - |
| 1.3477 | 53200 | 0.0055 | - |
| 1.3502 | 53300 | 0.0074 | - |
| 1.3528 | 53400 | 0.0029 | - |
| 1.3553 | 53500 | 0.0071 | - |
| 1.3578 | 53600 | 0.0074 | - |
| 1.3604 | 53700 | 0.0068 | - |
| 1.3629 | 53800 | 0.0066 | - |
| 1.3654 | 53900 | 0.0077 | - |
| 1.3680 | 54000 | 0.0069 | 0.0107 |
| 1.3705 | 54100 | 0.0039 | - |
| 1.3730 | 54200 | 0.0051 | - |
| 1.3756 | 54300 | 0.0038 | - |
| 1.3781 | 54400 | 0.0073 | - |
| 1.3806 | 54500 | 0.0087 | - |
| 1.3832 | 54600 | 0.0053 | - |
| 1.3857 | 54700 | 0.0054 | - |
| 1.3882 | 54800 | 0.0091 | - |
| 1.3908 | 54900 | 0.0067 | - |
| 1.3933 | 55000 | 0.0071 | 0.0094 |
| 1.3958 | 55100 | 0.0056 | - |
| 1.3984 | 55200 | 0.0043 | - |
| 1.4009 | 55300 | 0.0059 | - |
| 1.4034 | 55400 | 0.007 | - |
| 1.4060 | 55500 | 0.0064 | - |
| 1.4085 | 55600 | 0.006 | - |
| 1.4110 | 55700 | 0.0031 | - |
| 1.4136 | 55800 | 0.0058 | - |
| 1.4161 | 55900 | 0.0056 | - |
| 1.4186 | 56000 | 0.0052 | 0.0096 |
| 1.4212 | 56100 | 0.0045 | - |
| 1.4237 | 56200 | 0.0046 | - |
| 1.4262 | 56300 | 0.0044 | - |
| 1.4288 | 56400 | 0.0076 | - |
| 1.4313 | 56500 | 0.0029 | - |
| 1.4338 | 56600 | 0.005 | - |
| 1.4364 | 56700 | 0.0042 | - |
| 1.4389 | 56800 | 0.0066 | - |
| 1.4414 | 56900 | 0.0119 | - |
| 1.4440 | 57000 | 0.0033 | 0.0076 |
| 1.4465 | 57100 | 0.0076 | - |
| 1.4490 | 57200 | 0.0058 | - |
| 1.4516 | 57300 | 0.0054 | - |
| 1.4541 | 57400 | 0.0039 | - |
| 1.4566 | 57500 | 0.0057 | - |
| 1.4592 | 57600 | 0.008 | - |
| 1.4617 | 57700 | 0.0082 | - |
| 1.4642 | 57800 | 0.0041 | - |
| 1.4668 | 57900 | 0.0037 | - |
| 1.4693 | 58000 | 0.0048 | 0.0078 |
| 1.4718 | 58100 | 0.0041 | - |
| 1.4744 | 58200 | 0.0049 | - |
| 1.4769 | 58300 | 0.0085 | - |
| 1.4794 | 58400 | 0.0036 | - |
| 1.4820 | 58500 | 0.0061 | - |
| 1.4845 | 58600 | 0.0039 | - |
| 1.4870 | 58700 | 0.0049 | - |
| 1.4896 | 58800 | 0.0027 | - |
| 1.4921 | 58900 | 0.003 | - |
| 1.4946 | 59000 | 0.006 | 0.0097 |
| 1.4972 | 59100 | 0.0068 | - |
| 1.4997 | 59200 | 0.0083 | - |
| 1.5022 | 59300 | 0.0066 | - |
| 1.5047 | 59400 | 0.0049 | - |
| 1.5073 | 59500 | 0.0034 | - |
| 1.5098 | 59600 | 0.0044 | - |
| 1.5123 | 59700 | 0.0036 | - |
| 1.5149 | 59800 | 0.0041 | - |
| 1.5174 | 59900 | 0.006 | - |
| 1.5199 | 60000 | 0.0063 | 0.0099 |
| 1.5225 | 60100 | 0.0028 | - |
| 1.5250 | 60200 | 0.0045 | - |
| 1.5275 | 60300 | 0.0056 | - |
| 1.5301 | 60400 | 0.0046 | - |
| 1.5326 | 60500 | 0.0053 | - |
| 1.5351 | 60600 | 0.0044 | - |
| 1.5377 | 60700 | 0.0053 | - |
| 1.5402 | 60800 | 0.0044 | - |
| 1.5427 | 60900 | 0.0034 | - |
| 1.5453 | 61000 | 0.0033 | 0.0073 |
| 1.5478 | 61100 | 0.005 | - |
| 1.5503 | 61200 | 0.0027 | - |
| 1.5529 | 61300 | 0.0049 | - |
| 1.5554 | 61400 | 0.0048 | - |
| 1.5579 | 61500 | 0.0032 | - |
| 1.5605 | 61600 | 0.0043 | - |
| 1.5630 | 61700 | 0.0049 | - |
| 1.5655 | 61800 | 0.0062 | - |
| 1.5681 | 61900 | 0.0076 | - |
| 1.5706 | 62000 | 0.006 | 0.0053 |
| 1.5731 | 62100 | 0.0078 | - |
| 1.5757 | 62200 | 0.0033 | - |
| 1.5782 | 62300 | 0.0031 | - |
| 1.5807 | 62400 | 0.0038 | - |
| 1.5833 | 62500 | 0.0026 | - |
| 1.5858 | 62600 | 0.0036 | - |
| 1.5883 | 62700 | 0.0034 | - |
| 1.5909 | 62800 | 0.0076 | - |
| 1.5934 | 62900 | 0.0039 | - |
| 1.5959 | 63000 | 0.006 | 0.0073 |
| 1.5985 | 63100 | 0.0055 | - |
| 1.6010 | 63200 | 0.0046 | - |
| 1.6035 | 63300 | 0.0042 | - |
| 1.6061 | 63400 | 0.0061 | - |
| 1.6086 | 63500 | 0.003 | - |
| 1.6111 | 63600 | 0.0034 | - |
| 1.6137 | 63700 | 0.0058 | - |
| 1.6162 | 63800 | 0.0036 | - |
| 1.6187 | 63900 | 0.0015 | - |
| 1.6213 | 64000 | 0.0052 | 0.0076 |
| 1.6238 | 64100 | 0.0047 | - |
| 1.6263 | 64200 | 0.0083 | - |
| 1.6289 | 64300 | 0.0035 | - |
| 1.6314 | 64400 | 0.0025 | - |
| 1.6339 | 64500 | 0.0052 | - |
| 1.6365 | 64600 | 0.0029 | - |
| 1.6390 | 64700 | 0.0019 | - |
| 1.6415 | 64800 | 0.0036 | - |
| 1.6441 | 64900 | 0.002 | - |
| 1.6466 | 65000 | 0.007 | 0.0074 |
| 1.6491 | 65100 | 0.0038 | - |
| 1.6517 | 65200 | 0.0051 | - |
| 1.6542 | 65300 | 0.0027 | - |
| 1.6567 | 65400 | 0.003 | - |
| 1.6593 | 65500 | 0.0045 | - |
| 1.6618 | 65600 | 0.0067 | - |
| 1.6643 | 65700 | 0.003 | - |
| 1.6669 | 65800 | 0.0033 | - |
| 1.6694 | 65900 | 0.0043 | - |
| 1.6719 | 66000 | 0.0025 | 0.0071 |
| 1.6745 | 66100 | 0.0025 | - |
| 1.6770 | 66200 | 0.0057 | - |
| 1.6795 | 66300 | 0.0029 | - |
| 1.6821 | 66400 | 0.0016 | - |
| 1.6846 | 66500 | 0.0055 | - |
| 1.6871 | 66600 | 0.0029 | - |
| 1.6897 | 66700 | 0.0031 | - |
| 1.6922 | 66800 | 0.006 | - |
| 1.6947 | 66900 | 0.003 | - |
| 1.6973 | 67000 | 0.0042 | 0.0072 |
| 1.6998 | 67100 | 0.0049 | - |
| 1.7023 | 67200 | 0.0018 | - |
| 1.7049 | 67300 | 0.0043 | - |
| 1.7074 | 67400 | 0.007 | - |
| 1.7099 | 67500 | 0.0025 | - |
| 1.7125 | 67600 | 0.0051 | - |
| 1.7150 | 67700 | 0.0056 | - |
| 1.7175 | 67800 | 0.003 | - |
| 1.7201 | 67900 | 0.0041 | - |
| 1.7226 | 68000 | 0.0025 | 0.0082 |
| 1.7251 | 68100 | 0.0018 | - |
| 1.7277 | 68200 | 0.0034 | - |
| 1.7302 | 68300 | 0.0065 | - |
| 1.7327 | 68400 | 0.0047 | - |
| 1.7353 | 68500 | 0.0052 | - |
| 1.7378 | 68600 | 0.0013 | - |
| 1.7403 | 68700 | 0.0063 | - |
| 1.7429 | 68800 | 0.0047 | - |
| 1.7454 | 68900 | 0.004 | - |
| 1.7479 | 69000 | 0.0026 | 0.0077 |
| 1.7505 | 69100 | 0.0032 | - |
| 1.7530 | 69200 | 0.0031 | - |
| 1.7555 | 69300 | 0.0024 | - |
| 1.7581 | 69400 | 0.0022 | - |
| 1.7606 | 69500 | 0.0029 | - |
| 1.7631 | 69600 | 0.0055 | - |
| 1.7657 | 69700 | 0.0031 | - |
| 1.7682 | 69800 | 0.004 | - |
| 1.7707 | 69900 | 0.0032 | - |
| 1.7733 | 70000 | 0.0034 | 0.0067 |
| 1.7758 | 70100 | 0.007 | - |
| 1.7783 | 70200 | 0.0049 | - |
| 1.7809 | 70300 | 0.0023 | - |
| 1.7834 | 70400 | 0.0028 | - |
| 1.7859 | 70500 | 0.0048 | - |
| 1.7885 | 70600 | 0.0042 | - |
| 1.7910 | 70700 | 0.006 | - |
| 1.7935 | 70800 | 0.006 | - |
| 1.7961 | 70900 | 0.0044 | - |
| 1.7986 | 71000 | 0.0036 | 0.0063 |
| 1.8011 | 71100 | 0.0025 | - |
| 1.8037 | 71200 | 0.0027 | - |
| 1.8062 | 71300 | 0.0033 | - |
| 1.8087 | 71400 | 0.0045 | - |
| 1.8113 | 71500 | 0.0037 | - |
| 1.8138 | 71600 | 0.0023 | - |
| 1.8163 | 71700 | 0.0021 | - |
| 1.8189 | 71800 | 0.0019 | - |
| 1.8214 | 71900 | 0.0046 | - |
| 1.8239 | 72000 | 0.0029 | 0.0065 |
| 1.8265 | 72100 | 0.0061 | - |
| 1.8290 | 72200 | 0.005 | - |
| 1.8315 | 72300 | 0.0036 | - |
| 1.8341 | 72400 | 0.0057 | - |
| 1.8366 | 72500 | 0.0049 | - |
| 1.8391 | 72600 | 0.0068 | - |
| 1.8417 | 72700 | 0.0026 | - |
| 1.8442 | 72800 | 0.0032 | - |
| 1.8467 | 72900 | 0.0036 | - |
| 1.8493 | 73000 | 0.0026 | 0.0066 |
| 1.8518 | 73100 | 0.0024 | - |
| 1.8543 | 73200 | 0.0014 | - |
| 1.8569 | 73300 | 0.0022 | - |
| 1.8594 | 73400 | 0.0039 | - |
| 1.8619 | 73500 | 0.0019 | - |
| 1.8645 | 73600 | 0.0016 | - |
| 1.8670 | 73700 | 0.0034 | - |
| 1.8695 | 73800 | 0.004 | - |
| 1.8721 | 73900 | 0.0014 | - |
| 1.8746 | 74000 | 0.004 | 0.0062 |
| 1.8771 | 74100 | 0.0014 | - |
| 1.8797 | 74200 | 0.0025 | - |
| 1.8822 | 74300 | 0.0025 | - |
| 1.8847 | 74400 | 0.0037 | - |
| 1.8873 | 74500 | 0.0038 | - |
| 1.8898 | 74600 | 0.0029 | - |
| 1.8923 | 74700 | 0.0037 | - |
| 1.8949 | 74800 | 0.0026 | - |
| 1.8974 | 74900 | 0.0019 | - |
| 1.8999 | 75000 | 0.0013 | 0.0062 |
| 1.9025 | 75100 | 0.0027 | - |
| 1.9050 | 75200 | 0.0028 | - |
| 1.9075 | 75300 | 0.0014 | - |
| 1.9101 | 75400 | 0.0067 | - |
| 1.9126 | 75500 | 0.0023 | - |
| 1.9151 | 75600 | 0.0024 | - |
| 1.9177 | 75700 | 0.0021 | - |
| 1.9202 | 75800 | 0.0062 | - |
| 1.9227 | 75900 | 0.0104 | - |
| 1.9253 | 76000 | 0.0021 | 0.0064 |
| 1.9278 | 76100 | 0.0023 | - |
| 1.9303 | 76200 | 0.0059 | - |
| 1.9329 | 76300 | 0.0055 | - |
| 1.9354 | 76400 | 0.002 | - |
| 1.9379 | 76500 | 0.0029 | - |
| 1.9405 | 76600 | 0.0028 | - |
| 1.9430 | 76700 | 0.0021 | - |
| 1.9455 | 76800 | 0.0037 | - |
| 1.9481 | 76900 | 0.0019 | - |
| 1.9506 | 77000 | 0.0027 | 0.0062 |
| 1.9531 | 77100 | 0.0039 | - |
| 1.9557 | 77200 | 0.0027 | - |
| 1.9582 | 77300 | 0.0034 | - |
| 1.9607 | 77400 | 0.005 | - |
| 1.9633 | 77500 | 0.0022 | - |
| 1.9658 | 77600 | 0.0072 | - |
| 1.9683 | 77700 | 0.0025 | - |
| 1.9709 | 77800 | 0.0019 | - |
| 1.9734 | 77900 | 0.0034 | - |
| 1.9759 | 78000 | 0.0068 | 0.0060 |
| 1.9785 | 78100 | 0.0042 | - |
| 1.9810 | 78200 | 0.0041 | - |
| 1.9835 | 78300 | 0.0018 | - |
| 1.9861 | 78400 | 0.0019 | - |
| 1.9886 | 78500 | 0.0029 | - |
| 1.9911 | 78600 | 0.0039 | - |
| 1.9937 | 78700 | 0.0023 | - |
| 1.9962 | 78800 | 0.0092 | - |
| 1.9987 | 78900 | 0.0018 | - |
</details>
### Framework Versions
- Python: 3.11.10
- Sentence Transformers: 3.3.1
- Transformers: 4.46.3
- PyTorch: 2.5.1+cu124
- Accelerate: 1.1.1
- Datasets: 3.1.0
- Tokenizers: 0.20.3
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### MultipleNegativesRankingLoss
```bibtex
@misc{henderson2017efficient,
title={Efficient Natural Language Response Suggestion for Smart Reply},
author={Matthew Henderson and Rami Al-Rfou and Brian Strope and Yun-hsuan Sung and Laszlo Lukacs and Ruiqi Guo and Sanjiv Kumar and Balint Miklos and Ray Kurzweil},
year={2017},
eprint={1705.00652},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
khizarAI/whisper-tiny-en-US | khizarAI | 2024-11-23T12:52:11Z | 79 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:PolyAI/minds14",
"base_model:openai/whisper-tiny",
"base_model:finetune:openai/whisper-tiny",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-11-23T12:43:50Z | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- PolyAI/minds14
metrics:
- wer
model-index:
- name: whisper-tiny-en-US
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: PolyAI/minds14
type: PolyAI/minds14
config: en-US
split: train
args: en-US
metrics:
- name: Wer
type: wer
value: 0.33517835178351785
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-en-US
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the PolyAI/minds14 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6504
- Wer Ortho: 33.9125
- Wer: 0.3352
## Model description
More information needed
## Intended uses & limitations
More information needed
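A minimal inference sketch (added here, not from the original card; it assumes the standard `transformers` ASR pipeline and a hypothetical local audio file):
```python
# Hedged example: "audio.wav" is a placeholder for a short 16 kHz mono recording.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="khizarAI/whisper-tiny-en-US")
print(asr("audio.wav")["text"])
```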
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|
| 0.0006 | 17.2414 | 500 | 0.6504 | 33.9125 | 0.3352 |
### Framework versions
- Transformers 4.46.2
- Pytorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3
|
win10/SphinxMind-14B | win10 | 2024-11-23T12:46:12Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2",
"base_model:merge:EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2",
"base_model:shuttleai/shuttle-3-mini",
"base_model:merge:shuttleai/shuttle-3-mini",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-23T12:40:45Z | ---
base_model:
- shuttleai/shuttle-3-mini
- EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [shuttleai/shuttle-3-mini](https://huggingface.co/shuttleai/shuttle-3-mini) as a base.
### Models Merged
The following models were included in the merge:
* [EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2](https://huggingface.co/EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: shuttleai/shuttle-3-mini
parameters:
density: 1
weight: 1
- model: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
parameters:
density: 1
weight: 1
- model: shuttleai/shuttle-3-mini
parameters:
density: 1
weight: 1
- model: EVA-UNIT-01/EVA-Qwen2.5-14B-v0.2
parameters:
density: 1
weight: 1
merge_method: ties
base_model: shuttleai/shuttle-3-mini
parameters:
normalize: true
int8_mask: true
dtype: bfloat16
```
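As a rough usage sketch (an addition, not from the original card): the merged checkpoint should load like any Qwen2-based model through `transformers`; the chat-template call assumes the base model's template survived the merge.
```python
# Hedged sketch: standard causal-LM loading; dtype/device settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "win10/SphinxMind-14B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Summarize the TIES merge method in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```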
|
isspek/xlnet-base-cased_ebola_mistral_1_2e-5_16_undersampling_0.6 | isspek | 2024-11-23T12:41:44Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-23T12:41:30Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
isspek/xlnet-base-cased_ebola_mistral_1_2e-5_16 | isspek | 2024-11-23T12:40:25Z | 134 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-23T12:40:09Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
elyx/batman-1 | elyx | 2024-11-23T12:33:58Z | 63 | 0 | transformers | [
"transformers",
"safetensors",
"ultravox",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | 2024-11-23T12:33:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
second-state/stable-diffusion-3.5-medium-GGUF | second-state | 2024-11-23T12:26:40Z | 2,243 | 5 | diffusers | [
"diffusers",
"gguf",
"text-to-image",
"stable-diffusion",
"en",
"base_model:stabilityai/stable-diffusion-3.5-medium",
"base_model:quantized:stabilityai/stable-diffusion-3.5-medium",
"license:other",
"region:us"
] | text-to-image | 2024-11-23T11:32:25Z | ---
base_model: stabilityai/stable-diffusion-3.5-medium
license: other
license_name: stabilityai-ai-community
license_link: LICENSE.md
model_creator: stabilityai
model_name: stable-diffusion-3.5-medium
quantized_by: Second State Inc.
tags:
- text-to-image
- stable-diffusion
- diffusers
inference: true
language:
- en
pipeline_tag: text-to-image
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# stable-diffusion-3.5-medium-GGUF
## Original Model
[stabilityai/stable-diffusion-3.5-medium](https://huggingface.co/stabilityai/stable-diffusion-3.5-medium)
## Run with `sd-api-server`
- Version: coming soon
## Quantized GGUF Models
| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ----- |
| [clip_g-Q4_0.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/clip_g-Q4_0.gguf) | Q4_0 | 4 | 391 MB | |
| [clip_g-Q4_1.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/clip_g-Q4_1.gguf) | Q4_1 | 4 | 435 MB | |
| [clip_g-Q5_0.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/clip_g-Q5_0.gguf) | Q5_0 | 5 | 478 MB | |
| [clip_g-Q5_1.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/clip_g-Q5_1.gguf) | Q5_1 | 5 | 522 MB | |
| [clip_g-Q8_0.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/clip_g-Q8_0.gguf) | Q8_0 | 8 | 739 MB | |
| [clip_g-f16.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/clip_g-f16.gguf) | f16 | 16 | 1.39 GB | |
| [clip_g.safetensors](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/clip_g.safetensors) | f16 | 16 | 1.39 GB | |
| [clip_l-Q4_0.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/clip_l-Q4_0.gguf) | Q4_0 | 4 | 69.4 MB | |
| [clip_l-Q4_1.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/clip_l-Q4_1.gguf) | Q4_1 | 4 | 77.1 MB | |
| [clip_l-Q5_0.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/clip_l-Q5_0.gguf) | Q5_0 | 5 | 84.8 MB | |
| [clip_l-Q5_1.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/clip_l-Q5_1.gguf) | Q5_1 | 5 | 92.4 MB | |
| [clip_l-Q8_0.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/clip_l-Q8_0.gguf) | Q8_0 | 8 | 131 MB | |
| [clip_l-f16.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/clip_l-f16.gguf) | f16 | 16 | 246 MB | |
| [clip_l.safetensors](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/clip_l.safetensors) | f16 | 16 | 246 MB | |
| [sd3.5_medium-Q4_0.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/sd3.5_medium-Q4_0.gguf) | Q4_0 | 4 | 2.08 GB | |
| [sd3.5_medium-Q4_1.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/sd3.5_medium-Q4_1.gguf) | Q4_1 | 4 | 2.22 GB | |
| [sd3.5_medium-Q5_0.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/sd3.5_medium-Q5_0.gguf) | Q5_0 | 5 | 2.36 GB | |
| [sd3.5_medium-Q5_1.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/sd3.5_medium-Q5_1.gguf) | Q5_1 | 5 | 2.50 GB | |
| [sd3.5_medium-Q8_0.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/sd3.5_medium-Q8_0.gguf) | Q8_0 | 8 | 3.19 GB | |
| [sd3.5_medium.safetensors](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/sd3.5_medium.safetensors) | f16 | 16 | 5.11 GB | |
| [t5xxl-Q4_0.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/t5xxl-Q4_0.gguf) | Q4_0 | 4 | 2.75 GB | |
| [t5xxl-Q4_1.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/t5xxl-Q4_1.gguf) | Q4_1 | 4 | 3.06 GB | |
| [t5xxl-Q5_0.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/t5xxl-Q5_0.gguf) | Q5_0 | 5 | 3.36 GB | |
| [t5xxl-Q5_1.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/t5xxl-Q5_1.gguf) | Q5_1 | 5 | 3.67 GB | |
| [t5xxl-Q8_0.gguf](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/t5xxl-Q8_0.gguf) | Q8_0 | 8 | 5.20 GB | |
| [t5xxl_fp16.safetensors](https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/t5xxl_fp16.safetensors) | f16 | 16 | 9.79 GB | |
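Since the `sd-api-server` integration above is still marked as coming soon, here is a hedged alternative sketch using the GGUF loader in recent `diffusers` releases (an assumption on my part — it requires a diffusers version that ships `GGUFQuantizationConfig`; the checkpoint URL points at a file from the table above):
```python
# Hedged sketch: load the Q4_0 diffusion transformer from this repo into the SD3.5 pipeline.
import torch
from diffusers import GGUFQuantizationConfig, SD3Transformer2DModel, StableDiffusion3Pipeline

ckpt = "https://huggingface.co/second-state/stable-diffusion-3.5-medium-GGUF/blob/main/sd3.5_medium-Q4_0.gguf"
transformer = SD3Transformer2DModel.from_single_file(
    ckpt,
    quantization_config=GGUFQuantizationConfig(compute_dtype=torch.bfloat16),
    torch_dtype=torch.bfloat16,
)
pipe = StableDiffusion3Pipeline.from_pretrained(
    "stabilityai/stable-diffusion-3.5-medium",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # keeps VRAM use low; optional
pipe("a photorealistic capybara portrait").images[0].save("out.png")
```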
**Quantized with stable-diffusion.cpp `master-c3eeb669`.** |
touhidulislam/BERTweet_retrain_2022_42 | touhidulislam | 2024-11-23T12:20:32Z | 178 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T12:20:06Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2022_42
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2022_42
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4725
## Model description
More information needed
## Intended uses & limitations
More information needed
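A minimal fill-mask sketch (added here; it assumes BERTweet's RoBERTa-style `<mask>` token, and the tweet text is illustrative):
```python
# Hedged example: standard transformers fill-mask pipeline.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="touhidulislam/BERTweet_retrain_2022_42")
for pred in fill_mask("so excited for the <mask> tonight !"):
    print(pred["token_str"], round(pred["score"], 3))
```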
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7074 | 1.0 | 5957 | 2.5622 |
| 2.5566 | 2.0 | 11914 | 2.4895 |
| 2.7189 | 3.0 | 17871 | 2.4860 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
prithivMLmods/Canopus-LoRA-Flux-UltraRealism-2.0 | prithivMLmods | 2024-11-23T12:11:56Z | 12,548 | 89 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"flux-dev",
"ultra",
"realism",
"photorealism",
"hi-res",
"face",
"diffusion",
"UltraRealism",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-10-17T13:08:00Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- flux-dev
- ultra
- realism
- photorealism
- hi-res
- face
- diffusion
- UltraRealism
widget:
- text: >-
Woman in a red jacket, snowy, in the style of hyper-realistic portraiture,
caninecore, mountainous vistas, timeless beauty, palewave, iconic,
distinctive noses --ar 72:101 --stylize 750 --v 6
output:
url: images/3.png
- text: >-
Photograph, candid shot, famous randomly couch and randomly finished with
randomly cats, center point for cat, Use camera is Canon EOS 5D Mark IV with
a Canon EF 24mm f/1.4L II USM lens, set at aperture f/2.8 for a depth of
field that highlights the furniture clean lines with rich and many detail,
randomly color and finished, soft ambient light, studio light setting, ultra
realistic, UHD, many details --chaos 1 --ar 9:16 --style raw --stylize 750
output:
url: images/5.png
- text: >-
High-resolution photograph, woman, UHD, photorealistic, shot on a Sony A7III
--chaos 20 --ar 1:2 --style raw --stylize 250
output:
url: images/XX.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Ultra realistic
license: creativeml-openrail-m
new_version: strangerzonehf/Flux-Super-Realism-LoRA
---
# Canopus-LoRA-Flux-UltraRealism-2.0
<Gallery />
- Newer versions are available here: https://huggingface.co/prithivMLmods/Ton618-Epic-Realism-Flux-LoRA
- Also Available in Flux LoRA DLC: https://huggingface.co/spaces/prithivMLmods/FLUX-LoRA-DLC
**The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.**
## Model description
**prithivMLmods/Canopus-LoRA-Flux-UltraRealism-2.0**
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 30 & 3.8K+ |
| Epoch | 20 | Save Every N Epochs | 1 |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 70 [Hi-RES] and more.
## 🚀New Version Available Here🚀
Here's a table summarizing the relevant information about the **`Flux-Super-Realism-LoRA`** model on Hugging Face:
| **Feature** | **Details** |
|-------------------------|-----------------------------------------------------------------------------|
| **Model Name** | `Flux-Super-Realism-LoRA` |
| **Repository** | [strangerzonehf/Flux-Super-Realism-LoRA](https://huggingface.co/strangerzonehf/Flux-Super-Realism-LoRA) |
| **Author** | `strangerzonehf` |
| **Description** | Super-realism LoRA model designed to produce high-quality, hyper-realistic images using LoRA fine-tuning techniques. This model can generate lifelike textures, lighting, and intricate details. |
| **Model Type** | LoRA (Low-Rank Adaptation for Transformers) |
| **Use Cases** | - Photorealistic image generation<br>- High-fidelity art<br>- Texture detailing and enhancement |
| **Primary Language** | Not applicable (model is image-based) |
| **Base Model** | Model used as the foundation for LoRA fine-tuning (may vary per implementation) |
| **License** | Refer to Hugging Face model page for specific licensing information. |
| **Tags** | super-realism, LoRA, high-fidelity, hyper-realistic |
| **Usage** | This model is typically used with tools like Hugging Face's `Diffusers` or other libraries supporting LoRA fine-tuning for enhanced realism in image generation. |
| **Pipeline** | Use in `StableDiffusionPipeline` or compatible image generation pipelines. |
## Image Compare DLC.

---
| Image | Description |
|-------|-------------|
| **Image 1** | **Portrait of an attractive woman**: Late twenties, light brown hair, wearing a yellow sweater, looking directly at the camera. Standing outdoors near trees. Aspect ratio: 128:85. Style: Raw |
| **Image 2** | **Headshot of a handsome young man**: Dark gray sweater with buttons and shawl collar, brown hair, short beard. Serious expression on a black background, soft studio lighting. Aspect ratio: 85:128. Style: Raw |
| **Image 3** | **Fashion photo**: Model in a white bodysuit and beige trench coat, posing with hands on head in front of a train station at sunset. High resolution, 35mm lens, f/22, natural lighting. Aspect ratio: 85:128. Style: Raw |
| **Image 4** | **Rustic portrait**: Woman with fair skin and natural, wavy hair in soft curls. Wearing a red and navy plaid shirt with a white undershirt, sleeves rolled up. Leaning against a weathered blue door frame with a contemplative expression. |
## Other Sample
| Image | Prompt |
|-------|--------|
|  | Photography, envision an image steeped in urban nostalgia, a portrait of youthful ennui set against the backdrop of a timeless cityscape. A woman lounges languidly on an old, industrial metal staircase, her pose exuding a sense of introspection and quiet defiance. A plain black t-shirt, snug and slightly worn, subtly contours to her frame. Her black jeans embrace streetwear chic. Classic black and white shoes strike a stark contrast with the rusted steps, while her casual lace-up style hints at a readiness to spring into motion. Her hair is a cascade of dark waves, partly obscured by a black cap, its brim peeking out with a hint of youthful edge. Around her, the weathered brick walls whisper stories of the city's past, the windows reflecting fragmented visions of urban life. There's an air of contemplation as she rests her head on one arm, gazing distantly, perhaps lost in thought or simply enjoying a moment of solitude in the urban maze. Used camera: Sony α9 II with Sony FE 100-400mm f/4.5-5.6 GM OSS lens, emulating a high-contrast, ultra realistic style. --ar 9:16 --style raw --stylize 750 |
---
## Trigger words
You should use `Ultra realistic` to trigger the image generation.
## Other Versions
Here’s a table format for the Hugging Face model **"prithivMLmods/Canopus-LoRA-Flux-FaceRealism"**:
| **Attribute** | **Details** |
|---------------------------|--------------------------------------------------------------------------------------------------------------|
| **Model Name** | Canopus-LoRA-Flux-FaceRealism |
| **Model ID** | `prithivMLmods/Canopus-LoRA-Flux-FaceRealism` |
| **Hugging Face URL** | [Canopus-LoRA-Flux-FaceRealism](https://huggingface.co/prithivMLmods/Canopus-LoRA-Flux-FaceRealism) |
| **Model Type** | LoRA (Low-Rank Adaptation) |
| **Primary Use Case** | Face Realism image generation |
| **Supported Framework** | Hugging Face Diffusers |
| **Data Type** | `bfloat16`, `fp16`, `float32` |
| **Compatible Models** | Stable Diffusion, Flux models |
| **Model Author** | `prithivMLmods` |
| **LoRA Technique** | LoRA for image style transfer with a focus on generating realistic faces |
| **Model Version** | Latest |
| **License** | Open-Access |
| **Tags** | LoRA, Face Realism, Flux, Image Generation |
## Setting Up
```python
import torch
from diffusers import DiffusionPipeline

base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)

lora_repo = "prithivMLmods/Canopus-LoRA-Flux-UltraRealism-2.0"
trigger_word = "Ultra realistic"  # include this phrase in your prompts
pipe.load_lora_weights(lora_repo)

device = torch.device("cuda")
pipe.to(device)
```
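A hedged generation follow-up (not in the original card; the step count and guidance scale are illustrative defaults for FLUX.1-dev, and the prompt is an example):
```python
# Hedged sketch: prepend the trigger word, then generate and save one image.
prompt = f"{trigger_word}, portrait photograph of a woman, golden hour, 85mm lens"
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("ultra_realism_sample.png")
```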
## App File Structure
    /project-root/
    │
    ├── .gitattributes
    ├── README.md
    ├── app.py
    └── pythonproject.py
## Download model
Weights for this model are available in Safetensors format.
[Download](/prithivMLmods/Canopus-LoRA-Flux-UltraRealism-2.0/tree/main) them in the Files & versions tab.
🤗: https://hf.co/prithivmlmods |
Eugeoter/noob-sdxl-controlnet-manga_line | Eugeoter | 2024-11-23T12:02:53Z | 2,116 | 1 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-xl",
"controlnet",
"text-to-image",
"en",
"base_model:Laxhar/noobai-xl-EarlyAccess",
"base_model:adapter:Laxhar/noobai-xl-EarlyAccess",
"license:other",
"region:us"
] | text-to-image | 2024-11-22T10:47:57Z | ---
license: other
license_name: fair-ai-public-license-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
library_name: diffusers
language:
- en
base_model:
- Laxhar/sdxl_noob
pipeline_tag: text-to-image
tags:
- stable-diffusion
- stable-diffusion-xl
- controlnet
- diffusers
--- |
strangerzonehf/Flux-Super-Capybara-HF | strangerzonehf | 2024-11-23T11:56:45Z | 17 | 10 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:apache-2.0",
"region:us"
] | text-to-image | 2024-11-23T10:54:29Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: 'capybara hf, A cartoon drawing of a brown bear sitting in front of a laptop. The bear is facing to the right and has a smiley face on the screen of the laptop. Above the bear is a white bear with a black nose and a black mouth. The background is a light beige color.'
output:
url: images/C1.png
- text: 'capybara hf, A cartoon drawing of a brown bear wearing sunglasses with a yellow circle at the top of the eyes. The bears eyes are tinted black, and the bears body is a light brown color. He is holding a pink money in his right hand. The money has a black border around it, and there are two yellow smiley faces on the eyes of the bear. The background is a solid white color.'
output:
url: images/C2.png
- text: 'capybara hf, A cartoon drawing of a brown bear with a black hat on its head. The bear is wearing a black shirt with a pink collar. The bears face is brown and the bears mouth is black. There is a smiley face in the bottom right corner of the image. There are white clouds in the background.'
output:
url: images/C3.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: capybara hf
license: apache-2.0
---

# Flux-Super-Capybara-HF
<Gallery />
**The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.**
## Model description
**strangerzonehf/Flux-Super-Capybara-HF**
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 22 & 2900 |
| Epoch | 15 | Save Every N Epochs | 1 |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 20
## Best Dimensions
- 768 x 1024 (Best)
- 1024 x 1024 (Default)
## Setting Up
```python
import torch
from diffusers import DiffusionPipeline
base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)
lora_repo = "strangerzonehf/Flux-Super-Capybara-HF"
trigger_word = "capybara hf"
pipe.load_lora_weights(lora_repo)
device = torch.device("cuda")
pipe.to(device)
```
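A hedged generation example (an addition; the resolution follows the "Best Dimensions" note above, and the other settings are illustrative):
```python
# Hedged sketch: prompt uses the trigger word; 768 x 1024 matches the recommended size.
prompt = f"{trigger_word}, a cartoon capybara reading a book under a tree"
image = pipe(prompt, width=768, height=1024, num_inference_steps=28).images[0]
image.save("capybara_sample.png")
```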
## Trigger words
You should use `capybara hf` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/strangerzonehf/Flux-Super-Capybara-HF/tree/main) them in the Files & versions tab.
|
gdshaji/gd-tp-1.9k-v2 | gdshaji | 2024-11-23T11:46:27Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-23T11:43:57Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
touhidulislam/BERTweet_retrain_2022_11 | touhidulislam | 2024-11-23T11:42:28Z | 177 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T11:42:08Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2022_11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2022_11
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5380
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.8663 | 1.0 | 6098 | 2.6189 |
| 2.693 | 2.0 | 12196 | 2.5695 |
| 2.4941 | 3.0 | 18294 | 2.5362 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
touhidulislam/BERTweet_retrain_2021_11 | touhidulislam | 2024-11-23T11:41:44Z | 168 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T11:41:23Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2021_11
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2021_11
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4913
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.8137 | 1.0 | 6081 | 2.5820 |
| 2.7966 | 2.0 | 12162 | 2.5134 |
| 2.7398 | 3.0 | 18243 | 2.4904 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
ritamsharma/tinyllama-finance-v0 | ritamsharma | 2024-11-23T11:40:00Z | 96 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-23T10:22:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
allknowingroger/Marco-01-slerp2-7B | allknowingroger | 2024-11-23T11:39:49Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:AIDC-AI/Marco-o1",
"base_model:merge:AIDC-AI/Marco-o1",
"base_model:allknowingroger/QwenSlerp12-7B",
"base_model:merge:allknowingroger/QwenSlerp12-7B",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-23T11:31:18Z | ---
base_model:
- allknowingroger/QwenSlerp12-7B
- AIDC-AI/Marco-o1
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
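For intuition, SLERP interpolates each pair of corresponding weight tensors along the arc between them rather than along a straight line. A toy sketch of the idea (not mergekit's actual implementation):

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Spherical linear interpolation between two weight tensors a and b.
    a_n = a / (a.norm() + eps)
    b_n = b / (b.norm() + eps)
    omega = torch.arccos(torch.clamp((a_n * b_n).sum(), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:  # nearly parallel tensors: fall back to linear interpolation
        return (1.0 - t) * a + t * b
    return (torch.sin((1.0 - t) * omega) / so) * a + (torch.sin(t * omega) / so) * b
```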
### Models Merged
The following models were included in the merge:
* [allknowingroger/QwenSlerp12-7B](https://huggingface.co/allknowingroger/QwenSlerp12-7B)
* [AIDC-AI/Marco-o1](https://huggingface.co/AIDC-AI/Marco-o1)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: AIDC-AI/Marco-o1
- model: allknowingroger/QwenSlerp12-7B
merge_method: slerp
base_model: AIDC-AI/Marco-o1
dtype: bfloat16
parameters:
  t: [0, 0.5, 1, 0.5, 0] # V-shaped curve: Marco-o1 for the input & output layers, QwenSlerp12 in the middle layers
``` |
weide0118/ddpm-celebahq-finetuned-butterflies-2epochs | weide0118 | 2024-11-23T11:38:11Z | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-11-23T11:37:50Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Example Fine-Tuned Model for Unit 2 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
Describe your model here
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('weide0118/ddpm-celebahq-finetuned-butterflies-2epochs')
image = pipeline().images[0]
image
```
|
Marialab/finetuned-whisper-small-dr-ar | Marialab | 2024-11-23T11:34:14Z | 76 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"New_fine-tune_whisper_new_dataset",
"generated_from_trainer",
"ar",
"dataset:darija-c",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-11-23T11:33:19Z | ---
library_name: transformers
language:
- ar
license: apache-2.0
base_model: openai/whisper-small
tags:
- New_fine-tune_whisper_new_dataset
- generated_from_trainer
datasets:
- darija-c
metrics:
- bleu
model-index:
- name: 'Finetuned Whisper small darija translate'
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Finetuned Whisper small darija translate
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Darija-C dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Bleu: 0.7440
## Model description
More information needed
## Intended uses & limitations
More information needed
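As a rough usage sketch (the audio path below is a placeholder; resampling is left to the pipeline):

```python
from transformers import pipeline

# Minimal sketch: run the fine-tuned checkpoint on a local audio clip.
asr = pipeline("automatic-speech-recognition", model="Marialab/finetuned-whisper-small-dr-ar")
print(asr("darija_clip.wav")["text"])
```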
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: AdamW (adamw_torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 40
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 3.975 | 0.6667 | 10 | 3.4363 | 0.0 |
| 2.1029 | 1.3333 | 20 | 1.5986 | 0.0262 |
| 1.7909 | 2.0 | 30 | 0.9239 | 0.1314 |
| 0.9837 | 2.6667 | 40 | 0.5086 | 0.3289 |
| 0.36 | 3.3333 | 50 | 0.4370 | 0.3911 |
| 0.6361 | 4.0 | 60 | 0.2622 | 0.4561 |
| 0.5227 | 4.6667 | 70 | 0.2506 | 0.5266 |
| 0.3307 | 5.3333 | 80 | 0.1299 | 0.6123 |
| 0.2438 | 6.0 | 90 | 0.1290 | 0.6057 |
| 0.2864 | 6.6667 | 100 | 0.0838 | 0.6623 |
| 0.073 | 7.3333 | 110 | 0.0965 | 0.6494 |
| 0.1924 | 8.0 | 120 | 0.0859 | 0.7237 |
| 0.0086 | 8.6667 | 130 | 0.0235 | 0.7174 |
| 0.003 | 9.3333 | 140 | 0.0335 | 0.7354 |
| 0.054 | 10.0 | 150 | 0.0096 | 0.7367 |
| 0.0008 | 10.6667 | 160 | 0.0190 | 0.7367 |
| 0.0002 | 11.3333 | 170 | 0.0003 | 0.7440 |
| 0.0001 | 12.0 | 180 | 0.0001 | 0.7440 |
| 0.0002 | 12.6667 | 190 | 0.0001 | 0.7440 |
| 0.0 | 13.3333 | 200 | 0.0000 | 0.7440 |
### Framework versions
- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Datasets 2.19.2
- Tokenizers 0.20.3
|
elyx/julien | elyx | 2024-11-23T11:29:10Z | 106 | 0 | transformers | [
"transformers",
"safetensors",
"ultravox",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | 2024-11-23T11:27:26Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Shivam365/sd-class-butterflies-32-new | Shivam365 | 2024-11-23T11:25:28Z | 44 | 0 | diffusers | [
"diffusers",
"safetensors",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2024-11-23T11:25:22Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card:
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('Shivam365/sd-class-butterflies-32-new')
image = pipeline().images[0]
image
```
|
boadisamson/Llama-3.1-8B-Qgis-7000G-q4_k_m-Instruct | boadisamson | 2024-11-23T11:14:21Z | 8 | 1 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-23T11:13:14Z | ---
base_model: unsloth/Meta-Llama-3.1-8B-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** boadisamson
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
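Since the repo ships GGUF weights, one way to try it locally is llama-cpp-python (a minimal sketch; the exact `.gguf` filename pattern is an assumption, so check the repo's file list):

```python
from llama_cpp import Llama

# Minimal sketch: fetch the quantized GGUF from the Hub and run one completion.
llm = Llama.from_pretrained(
    repo_id="boadisamson/Llama-3.1-8B-Qgis-7000G-q4_k_m-Instruct",
    filename="*q4_k_m*.gguf",  # filename pattern is an assumption
)
print(llm("How do I buffer a layer in QGIS?", max_tokens=128)["choices"][0]["text"])
```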
|
amanpreetsingh459/gemma-2-2b-it-punjabi-finetuned | amanpreetsingh459 | 2024-11-23T11:07:09Z | 89 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"trl",
"sft",
"conversational",
"pa",
"base_model:google/gemma-2-2b-it",
"base_model:finetune:google/gemma-2-2b-it",
"license:gemma",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-23T07:09:15Z | ---
base_model: google/gemma-2-2b-it
language:
- pa
library_name: transformers
license: gemma
model_name: output_dir
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for output_dir
This model is a fine-tuned version of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="amanpreetsingh459/gemma-2-2b-it-punjabi-finetuned", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/amanpreetsingh459-myself/huggingface/runs/j9gmzhe0)
This model was trained with SFT.
### Framework versions
- TRL: 0.12.1
- Transformers: 4.46.3
- Pytorch: 2.4.0
- Datasets: 3.0.1
- Tokenizers: 0.20.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
isspek/xlnet-base-cased_ebola_mistral_2_2e-5_16_undersampling_0.6 | isspek | 2024-11-23T11:05:45Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-23T11:05:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
isspek/xlnet-base-cased_ebola_mistral_2_2e-5_16 | isspek | 2024-11-23T11:05:26Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-23T11:05:05Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
isspek/xlnet-base-cased_ebola_mistral_4_2e-5_16 | isspek | 2024-11-23T11:02:22Z | 116 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-23T11:02:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
coolcat0/Llama-3.1-70B-Instruct-Simplicity | coolcat0 | 2024-11-23T10:58:32Z | 9 | 1 | transformers | [
"transformers",
"safetensors",
"gguf",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-19T01:54:46Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tintnguyen/question-generation-vietnamese-v2-tin | tintnguyen | 2024-11-23T10:54:44Z | 103 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"base_model:nluai/question-generation-vietnamese-v2",
"base_model:finetune:nluai/question-generation-vietnamese-v2",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-23T10:53:39Z | ---
library_name: transformers
license: apache-2.0
base_model: nluai/question-generation-vietnamese-v2
tags:
- generated_from_trainer
model-index:
- name: question-generation-vietnamese-v2-tin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# question-generation-vietnamese-v2-tin
This model is a fine-tuned version of [nluai/question-generation-vietnamese-v2](https://huggingface.co/nluai/question-generation-vietnamese-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.7425
- eval_runtime: 35.9438
- eval_samples_per_second: 27.71
- eval_steps_per_second: 1.753
- epoch: 0.9168
- step: 30000
## Model description
More information needed
## Intended uses & limitations
More information needed
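A rough usage sketch (the input format is an assumption carried over from the base question-generation model, which takes a Vietnamese passage):

```python
from transformers import pipeline

# Minimal sketch: generate a question from a Vietnamese passage.
qg = pipeline("text2text-generation", model="tintnguyen/question-generation-vietnamese-v2-tin")
print(qg("Hà Nội là thủ đô của Việt Nam.", max_new_tokens=64)[0]["generated_text"])
```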
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
ombhojane/wellness-mini | ombhojane | 2024-11-23T10:49:56Z | 13 | 0 | null | [
"safetensors",
"llama",
"mental-health",
"healthcare",
"conversational",
"wellness",
"SmolLM-135M",
"fine-tuned",
"text-generation",
"en",
"license:mit",
"region:us"
] | text-generation | 2024-11-23T10:39:40Z | ---
language: en
tags:
- mental-health
- healthcare
- conversational
- wellness
- SmolLM-135M
- fine-tuned
license: mit
pipeline_tag: text-generation
inference: false
---
# Wellness-Mini: Mental Health Conversational AI
## Model Description
Wellness-Mini is a fine-tuned version of SmolLM-135M, specifically adapted for mental health conversations and assessments.
## Usage
### In Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model and tokenizer
model = AutoModelForCausalLM.from_pretrained("ombhojane/wellness-mini", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("ombhojane/wellness-mini")
# Example usage
messages = [{"role": "user", "content": "How are you feeling today?"}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
### Via Pipeline
```python
from transformers import pipeline
# Create pipeline
pipe = pipeline(
"text-generation",
model="ombhojane/wellness-mini",
tokenizer="ombhojane/wellness-mini",
trust_remote_code=True
)
# Generate text
response = pipe("How are you feeling today?", max_new_tokens=100)
print(response[0]['generated_text'])
```
## Training Details
- Base model: SmolLM-135M
- Fine-tuned using supervised fine-tuning (SFT)
- Trained for both mental health assessment capabilities and proper identity responses
## Limitations
- This is an AI assistant and not a replacement for professional medical advice
- Should be used as a supplementary tool only
- May not be suitable for emergency situations
- Responses should be verified by healthcare professionals
## Intended Use
This model is intended to be used as a conversational AI assistant focused on mental health support and assessment. It can:
- Provide supportive responses to mental health concerns
- Help identify potential mental health indicators
- Engage in wellness-focused conversations
## Bias and Risks
- May reflect biases present in training data
- Should not be used as sole diagnostic tool
- Responses should be reviewed by healthcare professionals
## Creator
This model was developed by Sentinet AI Systems.
|
fracapuano/moss-pen | fracapuano | 2024-11-23T10:43:40Z | 10 | 0 | lerobot | [
"lerobot",
"safetensors",
"act",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"robotics",
"region:us"
] | robotics | 2024-11-23T10:43:30Z | ---
library_name: lerobot
tags:
- act
- model_hub_mixin
- pytorch_model_hub_mixin
- robotics
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Library: https://github.com/huggingface/lerobot
- Docs: [More Information Needed] |
isspek/xlnet-base-cased_ebola_gpt4o_5_2e-5_16 | isspek | 2024-11-23T10:42:26Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-23T10:42:08Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
isspek/xlnet-base-cased_ebola_gpt4o_3_2e-5_16 | isspek | 2024-11-23T10:41:36Z | 116 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-23T10:41:23Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
isspek/xlnet-base-cased_ebola_gpt4o_4_2e-5_16_undersampling_0.6 | isspek | 2024-11-23T10:41:21Z | 117 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-23T10:41:07Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
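As a minimal usage sketch (not part of the original card), assuming the checkpoint works with the standard 🤗 text-classification pipeline; the label names are not documented here:

```python
from transformers import pipeline

# Hypothetical example; label names and score ranges are not documented in this card.
clf = pipeline(
    "text-classification",
    model="isspek/xlnet-base-cased_ebola_gpt4o_4_2e-5_16_undersampling_0.6",
)
print(clf("Officials confirmed a new Ebola case in the affected district."))
```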
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
isspek/xlnet-base-cased_ebola_gpt4o_1_2e-5_16_undersampling_0.6 | isspek | 2024-11-23T10:40:35Z | 121 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-23T10:40:21Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
isspek/xlnet-base-cased_ebola_gpt4o_2_2e-5_16_undersampling_0.6 | isspek | 2024-11-23T10:39:49Z | 118 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-23T10:39:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
isspek/xlnet-base-cased_ebola_gpt4o_2_2e-5_16 | isspek | 2024-11-23T10:39:16Z | 116 | 0 | transformers | [
"transformers",
"safetensors",
"xlnet",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-23T10:39:04Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
touhidulislam/BERTweet_retrain_2022_39 | touhidulislam | 2024-11-23T10:32:09Z | 177 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T10:31:48Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2022_39
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2022_39
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
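A minimal sketch of how these settings map onto 🤗 `TrainingArguments`; the original training script is unpublished, so the `output_dir` below is a placeholder:

```python
from transformers import TrainingArguments

# Illustrative mapping of the reported hyperparameters; output_dir is a placeholder.
args = TrainingArguments(
    output_dir="BERTweet_retrain_2022_39",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```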
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7008 | 1.0 | 6301 | 2.5605 |
| 2.6849 | 2.0 | 12602 | 2.4851 |
| 2.7378 | 3.0 | 18903 | 2.4627 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
circlelee/gemma-2-2b-it-arena.bak2 | circlelee | 2024-11-23T10:31:45Z | 88 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"gemma2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"base_model:unsloth/gemma-2-2b-it-bnb-4bit",
"base_model:finetune:unsloth/gemma-2-2b-it-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-23T10:28:38Z | ---
base_model: unsloth/gemma-2-2b-it-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- gemma2
- trl
- sft
---
# Uploaded model
- **Developed by:** circlelee
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-2-2b-it-bnb-4bit
This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
touhidulislam/BERTweet_retrain_2021_09 | touhidulislam | 2024-11-23T10:28:38Z | 167 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T10:28:11Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2021_09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2021_09
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5955
## Model description
More information needed
## Intended uses & limitations
More information needed
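A minimal usage sketch, assuming the checkpoint is used like any BERTweet-style fill-mask model (the tweet below is illustrative; BERTweet uses the RoBERTa-style `<mask>` token):

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="touhidulislam/BERTweet_retrain_2021_09")
print(fill("I can't believe the <mask> today!"))  # prints top-k mask completions
```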
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.8868 | 1.0 | 5925 | 2.6722 |
| 2.7945 | 2.0 | 11850 | 2.6265 |
| 2.6973 | 3.0 | 17775 | 2.5918 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
touhidulislam/BERTweet_retrain_2022_09 | touhidulislam | 2024-11-23T10:28:19Z | 178 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T10:27:57Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2022_09
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2022_09
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4388
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.6503 | 1.0 | 6222 | 2.5467 |
| 2.5711 | 2.0 | 12444 | 2.4678 |
| 2.5673 | 3.0 | 18666 | 2.4445 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
DopeorNope/gs-llama3-1b-llama-maskver | DopeorNope | 2024-11-23T10:18:11Z | 84 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-23T10:14:42Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
briannlongzhao/dog_textual_inversion | briannlongzhao | 2024-11-23T10:15:49Z | 4 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-11-15T09:54:52Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - briannlongzhao/dog_textual_inversion
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-1. You can find some example images below.
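A minimal loading sketch with 🤗 diffusers; the placeholder token below is an assumption (check the repo's learned embedding for the actual string):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("briannlongzhao/dog_textual_inversion")
# "<dog>" is a hypothetical placeholder token, not confirmed by this card.
image = pipe("a photo of a <dog> on the beach").images[0]
image.save("dog.png")
```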
|
prithivMLmods/Flux-Product-Ad-Backdrop | prithivMLmods | 2024-11-23T10:13:04Z | 740 | 36 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"Product-Ad",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-11-21T11:58:04Z | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
- Product-Ad
widget:
- text: >-
Product Ad, Captured at eye-level, a close-up shot captures a pile of fried
chicken wings in a white paper cup. The chicken wings are a vibrant brown
color, adding a pop of color to the scene. The cup is placed on a light
brown wooden table, creating a stark contrast with the vibrant blue sky in
the background. To the right of the chicken wings, a slice of lemon, a red
onion, and a red radish are placed on the table. The radish, and red onions
are arranged in a circular pattern, adding depth to the composition. The
backdrop is blurred, suggesting a fair day.
output:
url: images/PA1.png
- text: >-
Product Ad, a blue and silver electric razor is soaring through the air. The
razor is positioned in the middle of a grassy field, with a backdrop of a
mountain range that is covered in snow. The sky is a deep blue, dotted with
white clouds, adding a pop of color to the scene. The blades of the razor
are splashing in the air, adding texture to the image.
output:
url: images/PA3.png
- text: >-
Product Ad,Vegan Snacks, in the style of an outdoors product hero shot in
motion, dynamic magazine ad image, photorealism, 4k Raw, --v6
output:
url: images/PA4.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Product Ad
license: creativeml-openrail-m
---
# Flux-Product-Ad-Backdrop
<Gallery />
**The model is still in the training phase. This is not the final version and may contain artifacts and perform poorly in some cases.**
## Model description
**prithivMLmods/Flux-Product-Ad-Backdrop**
Image Processing Parameters
| Parameter | Value | Parameter | Value |
|---------------------------|--------|---------------------------|--------|
| LR Scheduler | constant | Noise Offset | 0.03 |
| Optimizer | AdamW | Multires Noise Discount | 0.1 |
| Network Dim | 64 | Multires Noise Iterations | 10 |
| Network Alpha | 32 | Repeat & Steps | 19 & 2970 |
| Epoch | 15 | Save Every N Epochs | 1 |
Labeling: florence2-en (natural language & English)
Total Images Used for Training: 19
## Best Dimensions
- 768 x 1024 (Best)
- 1024 x 1024 (Default)
## Setting Up
```python
import torch
from diffusers import DiffusionPipeline
base_model = "black-forest-labs/FLUX.1-dev"
pipe = DiffusionPipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16)
lora_repo = "prithivMLmods/Flux-Product-Ad-Backdrop"
trigger_word = "Product Ad"
pipe.load_lora_weights(lora_repo)
device = torch.device("cuda")
pipe.to(device)
```
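A possible generation call once the pipeline is set up; the prompt, step count, and guidance scale below are illustrative assumptions, not values from the card:

```python
# Illustrative generation; tune steps/guidance for your use case.
prompt = "Product Ad, a chilled soda can on a wet rock, alpine backdrop, 4k"
image = pipe(prompt, num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("product_ad.png")
```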
## Trigger words
You should use `Product Ad` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/prithivMLmods/Flux-Product-Ad-Backdrop/tree/main) them in the Files & versions tab. |
glif-loradex-trainer/fabian3000_henrymajor | glif-loradex-trainer | 2024-11-23T10:08:11Z | 39 | 0 | diffusers | [
"diffusers",
"text-to-image",
"template:sd-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:finetune:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us",
"flux",
"lora",
"base_model:adapter:black-forest-labs/FLUX.1-dev"
] | text-to-image | 2024-11-23T10:07:52Z | ---
tags:
- diffusers
- text-to-image
- template:sd-lora
- base_model:black-forest-labs/FLUX.1-dev
- base_model:finetune:black-forest-labs/FLUX.1-dev
- license:other
- region:us
- flux
- lora
widget:
- output:
url: samples/1732356405402__000001500_0.jpg
text: a cthulhu monster drinking coffee henrymajorstyle
- output:
url: samples/1732356429409__000001500_1.jpg
text: a portrait of a woman working on a rocket henrymajorstyle
- output:
url: samples/1732356453445__000001500_2.jpg
text: vampire sword henrymajorstyle
base_model: black-forest-labs/FLUX.1-dev
trigger: henrymajorstyle
instance_prompt: henrymajorstyle
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# henrymajor
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit) under the [Glif Loradex program](https://huggingface.co/glif-loradex-trainer) by [Glif](https://glif.app) user `fabian3000`.
<Gallery />
## Trigger words
You should use `henrymajorstyle` to trigger the image generation.
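A minimal loading sketch with 🤗 diffusers, assuming the weights attach like a standard FLUX LoRA (the prompt reuses one of the sample widgets above):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("glif-loradex-trainer/fabian3000_henrymajor")
image = pipe("a cthulhu monster drinking coffee henrymajorstyle").images[0]
image.save("henrymajor.png")
```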
## Download model
Weights for this model are available in Safetensors format.
[Download](/glif-loradex-trainer/fabian3000_henrymajor/tree/main) them in the Files & versions tab.
## License
This model is licensed under the [flux-1-dev-non-commercial-license](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md).
|
elyx/banana1234 | elyx | 2024-11-23T10:03:54Z | 103 | 0 | transformers | [
"transformers",
"safetensors",
"ultravox",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | 2024-11-23T09:58:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
touhidulislam/BERTweet_retrain_2022_38 | touhidulislam | 2024-11-23T09:55:12Z | 169 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T09:54:51Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2022_38
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2022_38
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5456
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.6585 | 1.0 | 6105 | 2.6322 |
| 2.6867 | 2.0 | 12210 | 2.5551 |
| 2.657 | 3.0 | 18315 | 2.5440 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
touhidulislam/BERTweet_retrain_2022_08 | touhidulislam | 2024-11-23T09:51:06Z | 171 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T09:50:39Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2022_08
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2022_08
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7683 | 1.0 | 5972 | 2.6260 |
| 2.8097 | 2.0 | 11944 | 2.5590 |
| 2.5256 | 3.0 | 17916 | 2.5344 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
luojunyu/Llama-3.1-8B-SemiEvol-MMLU | luojunyu | 2024-11-23T09:51:04Z | 71 | 2 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-classification",
"en",
"dataset:luojunyu/SemiEvol",
"arxiv:2410.14745",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-10-17T07:19:06Z | ---
license: cc-by-4.0
datasets:
- luojunyu/SemiEvol
language:
- en
metrics:
- accuracy
base_model:
- meta-llama/Llama-3.1-8B-Instruct
pipeline_tag: text-classification
library_name: transformers
---
Released model for the paper [SemiEvol](https://arxiv.org/abs/2410.14745).
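A minimal loading sketch, assuming the checkpoint behaves like its Llama-3.1-8B-Instruct base and uses the base chat template (the question below is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "luojunyu/Llama-3.1-8B-SemiEvol-MMLU"
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Answer with a single letter (A-D): ..."}]
inputs = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=16)
print(tok.decode(out[0], skip_special_tokens=True))
```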
|
erhj3eh3ehweg/Midstral-166M | erhj3eh3ehweg | 2024-11-23T09:42:59Z | 176 | 0 | transformers | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-23T09:42:34Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aswathshakthi/distilbert-emotions-clf-m1 | aswathshakthi | 2024-11-23T09:42:18Z | 6 | 1 | null | [
"safetensors",
"distilbert",
"en",
"license:apache-2.0",
"region:us"
] | null | 2024-11-23T08:52:23Z | ---
license: apache-2.0
language:
- en
---
# Model Card for My Fine-Tuned Model
## Model Description
- **Purpose**: This model is fine-tuned to perform multi-class emotion classification. It can identify various emotions in text, such as joy, sadness, love, anger, fear, and surprise.
- **Model architecture**: The model is based on the `distilbert-base-uncased` architecture, a distilled version of the BERT model which is smaller and faster but retains most of its predictive power.
- **Training data**: The model was trained on the `emotion` dataset from Hugging Face's datasets library. This dataset includes text labeled with different emotions. During preprocessing, texts were tokenized, and padding and truncation were applied to standardize their lengths.
## Intended Use
- **Intended users**: This model is intended for developers and researchers interested in emotion analysis in text, including applications in social media sentiment analysis, customer feedback interpretation, and mental health assessment.
- **Use cases**: Potential use cases include analyzing social media posts for emotional content, enhancing chatbots to understand user emotions, and helping mental health professionals in identifying emotional states from text-based communications.
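A minimal usage sketch (the input sentence is illustrative):

```python
from transformers import pipeline

clf = pipeline("text-classification", model="aswathshakthi/distilbert-emotions-clf-m1")
# Returns one of the six emotion labels described above, with a confidence score.
print(clf("I just got the job offer and I can't stop smiling!"))
```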
## Limitations
- **Known limitations**: The model's accuracy may vary depending on the context and the dataset's representativeness. It may not perform equally well on texts from domains significantly different from the training data.
## Hardware
- **Training Platform**: The model was trained on an Apple M1. Training completed in under 23 minutes, demonstrating the efficiency of Apple's hardware optimizations.
## Ethical Considerations
- **Ethical concerns**: Care should be taken to ensure that the model is not used in sensitive applications without proper ethical considerations, especially in scenarios that could impact individual privacy or mental health.
## More Information
- **Model Name on Hugging Face**: `aswathshakthi/distilbert-emotions-clf-m1` |
P00j4n/fine-tuned-gemma-2-2b-quantized | P00j4n | 2024-11-23T09:39:47Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-23T09:32:55Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
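In the absence of an official snippet, the following is a hedged loading sketch; whether the quantized weights load directly this way depends on how they were saved, so treat the call as an assumption to verify against this repo's files.

```python
# Hedged sketch of loading this checkpoint for text generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "P00j4n/fine-tuned-gemma-2-2b-quantized"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Hello, my name is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```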
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
longisland3/anime-whisper-faster | longisland3 | 2024-11-23T09:17:37Z | 6 | 0 | null | [
"license:cc-by-nc-4.0",
"region:us"
] | null | 2024-11-23T06:41:51Z | ---
license: cc-by-nc-4.0
---
|
touhidulislam/BERTweet_retrain_2021_07 | touhidulislam | 2024-11-23T09:17:03Z | 170 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T09:16:42Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2021_07
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2021_07
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5825
## Model description
More information needed
## Intended uses & limitations
More information needed
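Since usage is otherwise unspecified, here is a minimal fill-mask sketch. Note that BERTweet expects tweet-normalized input, so raw tweets may need preprocessing first.

```python
# Minimal fill-mask sketch for this BERTweet checkpoint (RoBERTa-style,
# so the mask token is "<mask>").
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="touhidulislam/BERTweet_retrain_2021_07")
for pred in fill_mask("The weather today is <mask> !"):
    print(pred["token_str"], pred["score"])
```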
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7371 | 1.0 | 5927 | 2.6662 |
| 2.7009 | 2.0 | 11854 | 2.6087 |
| 2.6045 | 3.0 | 17781 | 2.5793 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
touhidulislam/BERTweet_retrain_2021_51 | touhidulislam | 2024-11-23T09:12:17Z | 177 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T09:11:56Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2021_51
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2021_51
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4925
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.8054 | 1.0 | 5999 | 2.5611 |
| 2.6829 | 2.0 | 11998 | 2.5308 |
| 2.5699 | 3.0 | 17997 | 2.4981 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
Carick/roberta-base-wordnet_dataset_two-fine-tuned | Carick | 2024-11-23T09:01:22Z | 107 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-22T07:31:57Z | ---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: roberta-base-wordnet_dataset_two-fine-tuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-wordnet_dataset_two-fine-tuned
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2831
## Model description
More information needed
## Intended uses & limitations
More information needed
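A hedged usage sketch follows; since the task's label set is not documented in this card, the returned labels may be generic ids such as "LABEL_0"/"LABEL_1".

```python
# Hedged inference sketch for this text-classification fine-tune.
from transformers import pipeline

clf = pipeline(
    "text-classification",
    model="Carick/roberta-base-wordnet_dataset_two-fine-tuned",
)
print(clf("A dog is a kind of animal."))
```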
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.4241 | 1.0 | 7938 | 0.3626 |
| 0.3768 | 2.0 | 15876 | 0.3164 |
| 0.3227 | 3.0 | 23814 | 0.2831 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
sakuraumi/Sakura-13B-Galgame | sakuraumi | 2024-11-23T09:00:48Z | 135 | 114 | transformers | [
"transformers",
"pytorch",
"baichuan",
"text-generation",
"custom_code",
"zh",
"ja",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-08-26T16:28:53Z | ---
license: apache-2.0
language:
- zh
- ja
pipeline_tag: text-generation
---
<!-- markdownlint-disable first-line-h1 -->
<!-- markdownlint-disable html -->
<div align="center">
<h1>
SakuraLLM
</h1>
<center>
<b>Sakura</b>: <b><ins>S</ins></b>FT <ins><b>A</b></ins>nd RLHF models using <ins><b>K</b></ins>nowledge of <ins><b>U</b></ins>niversal Character and <ins><b>R</b></ins>elationship <ins><b>A</b></ins>ttributes for Japanese to Chinese Translation in Light Novel & Galgame Domain.
</center>
</div>
<p align="center">
🤗 <a href="https://huggingface.co/sakuraumi/Sakura-13B-Galgame" target="_blank">Hugging Face</a> • 🤖 <a href="https://www.modelscope.cn/models/sakuraumi/Sakura-13B-Galgame" target="_blank">ModelScope</a>
</p>
# All models currently released by Sakura are licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.zh-hans). Any form of commercial use of Sakura models and their derivatives is prohibited! All models in the Sakura series are for learning and exchange only; the developers accept no liability for any problems caused by using Sakura models.
# Introduction
- Built on a series of open-source large language models, continually pre-trained and fine-tuned on general Japanese corpora and Chinese-Japanese corpora from the light novel / Galgame domains, aiming to provide an open-source, controllable, offline self-hostable Japanese-to-Chinese translation model in an ACGN style.
- A new [Telegram group](https://t.me/+QMDKZyO9GV1kNDA1) has been set up; discussion is welcome.
**For other projects adapted to this model: if translation is done with a prompt format other than the one provided by this project, the quality described in this README is not guaranteed!**
**If you publish translations produced with this model, label them as machine translation in the most prominent place!!! The developers accept no responsibility for any consequences of misusing this model.**
> Since the model is continually updated, please also state the model version you used, to make quality assessment and translation updates easier.
**If you have good ideas or suggestions about the model's pronoun problems in translation (misuse, spurious insertion, subject/object confusion, gender confusion, etc.) or its contextual understanding, feel free to open an issue!**
### TODO: see [#42](https://github.com/SakuraLLM/Sakura-13B-Galgame/issues/42)
## Quick Start
### Tutorials:
See the [repository wiki](https://github.com/SakuraLLM/Sakura-13B-Galgame/wiki) for details.
Some usage notes: [usage.md](https://github.com/SakuraLLM/SakuraLLM/blob/main/usage.md)
> **Note: if you use this with the light novel machine-translation site, follow the [on-site tutorial](https://books.fishhawk.top/forum?category=Guide&page=1); this repo's instructions do not apply there.**
### Model downloads:
| Parameters | Release date - base model - version | Model |
|:-------:|:-------|:-------|
| 32B | 20240508-Qwen1.5-32B-v0.9 | 🤗 [Sakura-32B-Qwen2beta-v0.9-GGUF](https://huggingface.co/SakuraLLM/Sakura-32B-Qwen2beta-v0.9-GGUF) |
| | 20240508-Qwen1.5-32B-v0.10pre1 | 🤗 [Sakura-32B-Qwen2beta-v0.10pre1-GGUF](https://huggingface.co/SakuraLLM/Sakura-32B-Qwen2beta-v0.10pre1-GGUF) |
| 14B | 20240111-Qwen-14B-v0.9 | 🤗 [Sakura-13B-LNovel-v0.9b-GGUF](https://huggingface.co/SakuraLLM/Sakura-13B-LNovel-v0.9b-GGUF) |
| | 20240213-Qwen1.5-14B-v0.9 | 🤗 [Sakura-14B-Qwen2beta-v0.9-GGUF](https://huggingface.co/SakuraLLM/Sakura-14B-Qwen2beta-v0.9-GGUF) |
| | 20240516-Qwen1.5-14B-v0.9.2 | 🤗 [Sakura-14B-Qwen2beta-v0.9.2-GGUF](https://huggingface.co/SakuraLLM/Sakura-14B-Qwen2beta-v0.9.2-GGUF) |
| (latest) | **20241008-Qwen2.5-14B-v1.0** | 🤗 [Sakura-14B-Qwen2.5-v1.0-GGUF](https://huggingface.co/SakuraLLM/Sakura-14B-Qwen2.5-v1.0-GGUF) |
| 7B | 20240116-Qwen-7B-v0.9 | 🤗 [Sakura-7B-LNovel-v0.9-GGUF](https://huggingface.co/SakuraLLM/Sakura-7B-LNovel-v0.9-GGUF) |
| | 20240531-Qwen1.5-7B-Galtransl-v2.6 | 🤗 [Galtransl-v2.6](https://huggingface.co/SakuraLLM/GalTransl-7B-v2.6) |
| ~2B | 20240214-Qwen1.5-1.8B-v0.9.1 | 🤗 [Sakura-1B8-Qwen2beta-v0.9.1-GGUF](https://huggingface.co/SakuraLLM/Sakura-1B8-Qwen2beta-v0.9.1-GGUF) |
| | **20241012-Qwen2.5-1.5B-v1.0** | 🤗 [Sakura-1.5B-Qwen2.5-v1.0-GGUF](https://huggingface.co/SakuraLLM/Sakura-1.5B-Qwen2.5-v1.0-GGUF) |
P.S. If you cannot reach the Hugging Face servers, change `huggingface.co` in the links to `hf-mirror.com` to download from the HF mirror.
## News
1. **Released the v1.0 stable models [Sakura-14B-Qwen2.5-v1.0](https://huggingface.co/SakuraLLM/Sakura-14B-Qwen2.5-v1.0-GGUF), based on Qwen2.5-14B, and [Qwen2.5-1.5B-v1.0](https://huggingface.co/SakuraLLM/Sakura-1.5B-Qwen2.5-v1.0-GGUF), based on Qwen2.5-1.5B. For the prompt format, see [the notes below](https://github.com/SakuraLLM/SakuraLLM#%E6%8E%A8%E7%90%86). Main improvements:**
- Better translation quality and accuracy, especially for personal pronouns.
- Glossary (GPT dictionary) support, to keep proper nouns and personal references consistent.
- Better preservation of simple control characters, especially keeping `\n` when it occurs inside a single line, lowering the chance that the output line count differs from the source.
- Because the base model uses GQA, inference speed and VRAM usage improve significantly, enabling faster multi-threaded inference. For multi-threaded inference, see the [Sakura launcher GUI tutorial](https://books.fishhawk.top/forum/656d60530286f15e3384fcf8) or [SakuraLLMServer](https://github.com/neavo/SakuraLLMServer).
1. **Released the [Galtransl](https://huggingface.co/SakuraLLM/GalTransl-v1) model based on Qwen1.5-7B, specially optimized for visual novel translation. It preserves in-line line breaks, control characters, ruby annotations, and similar symbols in visual novel scripts well. It is adapted and tuned for the [GalTransl visual novel translation tool](https://github.com/xd2333/GalTransl) and supports GPT dictionaries ([dictionary format here](https://github.com/xd2333/GalTransl/wiki/GPT%E5%AD%97%E5%85%B8%E2%80%90sakura-galtransl)).**
1. **Added support for the vLLM model backend; see** [#40](https://github.com/SakuraLLM/Sakura-13B-Galgame/pull/40)
1. <del>Thanks to [Isotr0py](https://github.com/Isotr0py) for providing the notebook repository [SakuraLLM-Notebooks](https://github.com/Isotr0py/SakuraLLM-Notebooks) for running the model, usable on [Colab](https://colab.research.google.com/) (free T4\*1) and [Kaggle](https://www.kaggle.com/) (free P100\*1 or T4\*2). **A [tutorial](https://github.com/SakuraLLM/Sakura-13B-Galgame/wiki/%E7%99%BD%E5%AB%96Kaggle%E5%B9%B3%E5%8F%B0%E9%83%A8%E7%BD%B2%E6%95%99%E7%A8%8B) for the Kaggle platform has been added; it lets you use T4\*2 for free for a limited time.**</del>
Warning: Kaggle has officially taken measures to ban the SakuraLLM repository ([see here](https://github.com/SakuraLLM/SakuraLLM/issues/115)); cloning the SakuraLLM repository on Kaggle may result in a permanent account ban. Please use other projects or move to a GPU rental platform instead.
1. **The Sakura API now supports the OpenAI format; you can interact with the server through the OpenAI library or with requests in the form given in the OpenAI API Reference.**
For an example of interacting with a Sakura model via the OpenAI library, see [openai_example.py](https://github.com/SakuraLLM/Sakura-13B-Galgame/blob/main/tests/example_openai.py).
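For reference, here is a minimal sketch of calling a locally deployed Sakura server through its OpenAI-compatible endpoint. The base URL, port, and model name are assumptions; adjust them to match your own deployment (see openai_example.py above for the project's own example).

```python
# Hedged sketch of the OpenAI-compatible API; base_url and model name are
# placeholders for whatever your local Sakura server actually exposes.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:8080/v1", api_key="sk-no-key-required")
response = client.chat.completions.create(
    model="sakura",  # placeholder; many local servers ignore this field
    messages=[
        {"role": "system", "content": "你是一个轻小说翻译模型,可以流畅通顺地以日本轻小说的风格将日文翻译成简体中文,并联系上下文正确使用人称代词,不擅自添加原文中没有的代词。"},
        {"role": "user", "content": "将下面的日文文本翻译成中文:サクラの花が咲いた。"},
    ],
    temperature=0.1,
    top_p=0.3,
    max_tokens=512,
)
print(response.choices[0].message.content)
```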
## Tools that already integrate the model
1. Website: the [light novel machine-translation bot](https://books.fishhawk.top/) has integrated the Sakura model (v0.8-4bit); the site hosts a large number of model translations for reference. You can also deploy the model yourself and use the site to generate machine translations; v0.8 and v0.9 models are currently supported, and a llama.cpp one-click package is provided.
The light novel machine-translation bot is a website that automatically generates and shares machine translations of light novels. You can browse Japanese web novels, or upload Epub/Txt files, and generate machine translations.
1. [LunaTranslator](https://github.com/HIllya51/LunaTranslator) supports the Sakura API: deploy the API backend locally and configure the Sakura API in LunaTranslator to use the Sakura model for real-time Galgame translation.
~~[KurikoMoe](https://github.com/kurikomoe/LunaTranslator/releases/latest)'s build supports streaming output.~~ The official build now supports streaming output; just tick the streaming option in the translation settings.
LunaTranslator is a Galgame translation tool that supports clipboard, OCR, and HOOK capture, and more than 40 translation engines.
1. [GalTransl](https://github.com/XD2333/GalTransl) supports the Sakura API: deploy the API backend locally and configure GalTransl to translate Galgames with the Sakura model and produce embedded translation patches.
GalTransl is an automated Galgame translation tool for producing embedded translation patches. An [example](https://www.ai2moe.org/files/file/2271-%E6%88%AF%E7%94%BBgaltranslsakuragpt35%E7%88%B1%E4%B9%8B%E5%90%BB3-sexy-gpt%E7%BF%BB%E8%AF%91%E8%A1%A5%E4%B8%81uploadee5-mb/) translated with GalTransl and the Sakura model.
1. [SakuraTranslator](https://github.com/fkiliver/SakuraTranslator), a tool for translating Unity-engine games. Thanks to [fkiliver](https://github.com/fkiliver) for providing it.
1. [RPGMaker_LLaMA_Translator](https://github.com/fkiliver/RPGMaker_LLaMA_Translator), a tool for translating RPG Maker-engine games. Thanks to [fkiliver](https://github.com/fkiliver) for providing it.
1. [AiNiee](https://github.com/NEKOparapa/AiNiee-chatgpt) supports the Sakura API: deploy the API backend locally and use the Sakura model for translation in AiNiee.
AiNiee is an automatic batch translation tool based on [mtool] or [Translator++] and ChatGPT, mainly used for translating all kinds of RPG games.
1. [manga-image-translator](https://github.com/zyddnys/manga-image-translator) supports the Sakura API: deploy the API backend locally and translate manga automatically with Sakura.
1. [BallonsTranslator](https://github.com/dmMaze/BallonsTranslator) supports the Sakura API: deploy the API backend locally and translate manga with Sakura.
# VRAM Requirements
The table below shows VRAM usage for models with different quantizations and formats. If your GPU does not have enough VRAM, you can try running inference on CPU and GPU together.
- llama.cpp GGUF models (tested with the Qwen-14B v0.9 model)
| Quantization | Model size | Recommended VRAM |
|:-------:|:-------:|:-------:|
| fp16 | 26.3G | Beyond consumer gaming GPU VRAM |
| Q8_0 | 14G | 24G |
| Q6_K | 11.4G | 20G |
| Q5_K_M | 10.1G | 16G |
| Q4_K_M | 8.8G | 16G |
| Q3_K_M | 7.2G | 16G |
| Q2_K | 6.1G | 12G |
# Model Details
## Description
- Finetuned by [SakuraUmi](https://github.com/pipixia244)
- Finetuned on [Baichuan2-13B-Chat](https://huggingface.co/baichuan-inc/Baichuan2-13B-Chat)
- Continual Pre-trained on [Qwen model series](https://github.com/QwenLM/Qwen)
- Continual Pre-trained on [Qwen1.5 model series](https://github.com/QwenLM/Qwen1.5)
- Finetuned on Sakura-Base model series
- Languages: Chinese/Japanese
## Results
- Galgame
[An example](https://www.ai2moe.org/files/file/2271-%E6%88%AF%E7%94%BBgaltranslsakuragpt35%E7%88%B1%E4%B9%8B%E5%90%BB3-sexy-gpt%E7%BF%BB%E8%AF%91%E8%A1%A5%E4%B8%81uploadee5-mb/)
- Light novels
Website: the [light novel machine-translation bot](https://books.fishhawk.top/) has integrated the Sakura model (v0.9); the site hosts many model-translated light novels for reference.
- PPL/BLEU/Human
TBD
# Inference
- OpenAI API messages format:
- v0.9
Construct it in code as follows:
```python
input_text_list = ['a', 'bb', 'ccc', ...] # a sequence of context lines; each element is one line of text
raw_text = "\n".join(input_text_list)
messages=[
{
"role": "system",
"content": "你是一个轻小说翻译模型,可以流畅通顺地以日本轻小说的风格将日文翻译成简体中文,并联系上下文正确使用人称代词,不擅自添加原文中没有的代词。"
},
{
"role": "user",
"content": "将下面的日文文本翻译成中文:" + raw_text
}
]
```
- Prompt format:
- v0.10pre1
Construct it in code as follows:
```python
gpt_dict = [{
"src": "原文1",
"dst": "译文1",
"info": "注释信息1",
},]
gpt_dict_text_list = []
for gpt in gpt_dict:
src = gpt['src']
dst = gpt['dst']
info = gpt['info'] if "info" in gpt.keys() else None
if info:
single = f"{src}->{dst} #{info}"
else:
single = f"{src}->{dst}"
gpt_dict_text_list.append(single)
gpt_dict_raw_text = "\n".join(gpt_dict_text_list)
user_prompt = "根据以下术语表(可以为空):\n" + gpt_dict_raw_text + "\n\n" + "将下面的日文文本根据上述术语表的对应关系和备注翻译成中文:" + japanese
prompt = ("<|im_start|>system\n你是一个轻小说翻译模型,可以流畅通顺地使用给定的术语表以日本轻小说的风格将日文翻译成简体中文,并联系上下文正确使用人称代词,注意不要混淆使役态和被动态的主语和宾语,不要擅自添加原文中没有的代词,也不要擅自增加或减少换行。<|im_end|>\n"  # system prompt
          + "<|im_start|>user\n" + user_prompt + "<|im_end|>\n"  # user prompt
          + "<|im_start|>assistant\n")  # assistant prompt start
```
- v0.9
The text format is as follows:
```
<|im_start|>system
你是一个轻小说翻译模型,可以流畅通顺地以日本轻小说的风格将日文翻译成简体中文,并联系上下文正确使用人称代词,不擅自添加原文中没有的代词。<|im_end|>
<|im_start|>user
将下面的日文文本翻译成中文:日文第一行
日文第二行
日文第三行
...
日文第n行<|im_end|>
<|im_start|>assistant
```
Construct it in code as follows:
```python
input_text_list = ['a', 'bb', 'ccc', ...] # a sequence of context lines; each element is one line of text
raw_text = "\n".join(input_text_list)
prompt = ("<|im_start|>system\n你是一个轻小说翻译模型,可以流畅通顺地以日本轻小说的风格将日文翻译成简体中文,并联系上下文正确使用人称代词,不擅自添加原文中没有的代词。<|im_end|>\n"  # system prompt
          + "<|im_start|>user\n将下面的日文文本翻译成中文:" + raw_text + "<|im_end|>\n"  # user prompt
          + "<|im_start|>assistant\n")  # assistant prompt start
```
- Prompt construction:
- v0.8
```python
input_text = "" # the Japanese text to translate
query = "将下面的日文文本翻译成中文:" + input_text
prompt = "<reserved_106>" + query + "<reserved_107>"
```
- v0.9
```python
input_text = "" # the Japanese text to translate
query = "将下面的日文文本翻译成中文:" + input_text
prompt = "<|im_start|>system\n你是一个轻小说翻译模型,可以流畅通顺地以日本轻小说的风格将日文翻译成简体中文,并联系上下文正确使用人称代词,不擅自添加原文中没有的代词。<|im_end|>\n<|im_start|>user\n" + query + "<|im_end|>\n<|im_start|>assistant\n"
```
- Inference and decoding parameters:
| Parameter | Value |
| ---- | ---- |
| temperature | 0.1 |
| top p | 0.3 |
| do sample | True |
| beams number | 1 |
| repetition penalty | 1 |
| max new token | 512 |
| min new token | 1 |
**If degeneration occurs (for examples, see [#35](https://github.com/SakuraLLM/Sakura-13B-Galgame/issues/35) and [#36](https://github.com/SakuraLLM/Sakura-13B-Galgame/issues/36)), add a `frequency_penalty` parameter set to some value greater than 0; 0.1-0.2 is usually enough.**
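As an illustration of these parameters, a hedged transformers-based generation sketch follows. The checkpoint id points at this repo's PyTorch weights, and `prompt` is assumed to be built as in the code above; adjust both to your actual setup.

```python
# Hedged generation sketch applying the decoding parameters from the table.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sakuraumi/Sakura-13B-Galgame"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto", trust_remote_code=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)  # `prompt` as built above
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.1,
    top_p=0.3,
    num_beams=1,
    repetition_penalty=1.0,
    max_new_tokens=512,
    min_new_tokens=1,
)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```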
# Fine-tuning
For the fine-tuning framework, refer to [BELLE](https://github.com/LianjiaTech/BELLE) or [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory); for prompt construction, see the Inference section.
# Related Projects
- [Light novel machine-translation bot](https://books.fishhawk.top/): light novel translation
- [LunaTranslator](https://github.com/HIllya51/LunaTranslator): real-time Galgame translation
- [GalTransl](https://github.com/XD2333/GalTransl): offline Galgame translation and patch creation
- [AiNiee](https://github.com/NEKOparapa/AiNiee-chatgpt): RPG game translation
# Acknowledgements
- [CjangCjengh](https://github.com/CjangCjengh)
- [ryank231231](https://github.com/ryank231231)
- [KurikoMoe](https://github.com/kurikomoe)
- [FishHawk](https://github.com/FishHawk)
- [K024](https://github.com/K024)
- [minaduki-sora](https://github.com/minaduki-sora)
- [Kimagure7](https://github.com/Kimagure7)
- [YYF233333](https://github.com/YYF233333)
- [Isotr0py](https://github.com/Isotr0py)
- [XD2333](https://github.com/XD2333)
# Copyright Notice
Use of the v0.8 model must comply with [Apache 2.0](https://github.com/baichuan-inc/Baichuan2/blob/main/LICENSE), the [Baichuan 2 Model Community License Agreement](https://huggingface.co/baichuan-inc/Baichuan2-7B-Base/resolve/main/Baichuan%202%E6%A8%A1%E5%9E%8B%E7%A4%BE%E5%8C%BA%E8%AE%B8%E5%8F%AF%E5%8D%8F%E8%AE%AE.pdf), and the [CC BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.zh-hans).
Use of the v0.9 model must comply with the [Qwen model license agreement](https://github.com/QwenLM/Qwen/blob/main/Tongyi%20Qianwen%20LICENSE%20AGREEMENT) and the [CC BY-NC-SA 4.0 license](https://creativecommons.org/licenses/by-nc-sa/4.0/deed.zh-hans). |
touhidulislam/BERTweet_retrain_2021_06 | touhidulislam | 2024-11-23T08:41:38Z | 177 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T08:41:19Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2021_06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2021_06
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.6492 | 1.0 | 5855 | 2.6543 |
| 2.6772 | 2.0 | 11710 | 2.6140 |
| 2.7103 | 3.0 | 17565 | 2.5723 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
touhidulislam/BERTweet_retrain_2022_06 | touhidulislam | 2024-11-23T08:41:01Z | 178 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T08:40:40Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2022_06
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2022_06
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5204
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.6983 | 1.0 | 6012 | 2.5991 |
| 2.4763 | 2.0 | 12024 | 2.5499 |
| 2.4619 | 3.0 | 18036 | 2.5269 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
rinabuoy/nllb-200-600M-En-Ar-finetuned2 | rinabuoy | 2024-11-23T08:39:01Z | 159 | 0 | transformers | [
"transformers",
"safetensors",
"m2m_100",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-23T08:37:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
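In lieu of an official snippet, here is a hedged sketch for English-to-Arabic translation with this NLLB fine-tune. The language codes follow NLLB conventions ("eng_Latn", "arb_Arab"); whether this checkpoint keeps the original NLLB codes is an assumption.

```python
# Hedged translation sketch for this NLLB-200-600M En->Ar fine-tune.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "rinabuoy/nllb-200-600M-En-Ar-finetuned2"
tokenizer = AutoTokenizer.from_pretrained(model_id, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("How are you today?", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("arb_Arab"),
    max_new_tokens=64,
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```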
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Mantis-VL/mantis-8b-idefics2-video-eval_5184_regression | Mantis-VL | 2024-11-23T08:36:39Z | 7 | 0 | transformers | [
"transformers",
"safetensors",
"idefics2",
"text-classification",
"generated_from_trainer",
"base_model:HuggingFaceM4/idefics2-8b",
"base_model:finetune:HuggingFaceM4/idefics2-8b",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-11-21T16:15:12Z | ---
library_name: transformers
license: apache-2.0
base_model: HuggingFaceM4/idefics2-8b
tags:
- generated_from_trainer
model-index:
- name: mantis-8b-idefics2-video-eval_5184_regression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mantis-8b-idefics2-video-eval_5184_regression
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
### Training results
### Framework versions
- Transformers 4.45.0
- Pytorch 2.5.1+cu124
- Datasets 2.18.0
- Tokenizers 0.20.3
|
touhidulislam/BERTweet_retrain_2021_50 | touhidulislam | 2024-11-23T08:36:07Z | 166 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T08:35:45Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2021_50
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2021_50
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4821
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.7593 | 1.0 | 6047 | 2.5543 |
| 2.6949 | 2.0 | 12094 | 2.5029 |
| 2.6966 | 3.0 | 18141 | 2.4801 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
susmitabhatt/whisper-a-nomimo-ls | susmitabhatt | 2024-11-23T08:32:23Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-11-23T05:57:47Z | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-a-nomimo-ls
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-a-nomimo-ls
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0311
- Wer: 32.3302
## Model description
More information needed
## Intended uses & limitations
More information needed
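Pending more details, a minimal transcription sketch is shown below; the audio file path is a placeholder, and the target language/domain of this fine-tune is not documented in the card.

```python
# Hedged ASR sketch for this Whisper-small fine-tune (requires ffmpeg for
# audio decoding when passing a file path).
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="susmitabhatt/whisper-a-nomimo-ls")
print(asr("sample.wav")["text"])
```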
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 132
- num_epochs: 11
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:---------:|
| 0.9712 | 1.0 | 104 | 0.0786 | 25.0 |
| 3.0599 | 2.0 | 208 | 0.7840 | 3312.3457 |
| 0.3714 | 3.0 | 312 | 0.2694 | 98.3796 |
| 0.2625 | 4.0 | 416 | 0.2197 | 90.9722 |
| 0.2298 | 5.0 | 520 | 0.2035 | 84.9537 |
| 0.1929 | 6.0 | 624 | 0.1519 | 66.0494 |
| 0.1086 | 7.0 | 728 | 0.0641 | 41.6667 |
| 0.0401 | 8.0 | 832 | 0.0421 | 41.6667 |
| 0.0231 | 9.0 | 936 | 0.0324 | 34.8765 |
| 0.0119 | 10.0 | 1040 | 0.0310 | 34.7222 |
| 0.0057 | 10.8986 | 1133 | 0.0311 | 32.3302 |
### Framework versions
- Transformers 4.47.0.dev0
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
gnmskel/ToBeOrNotToBE | gnmskel | 2024-11-23T08:29:05Z | 173 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:KETI-AIR/ke-t5-base-ko",
"base_model:finetune:KETI-AIR/ke-t5-base-ko",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-11-22T17:33:04Z | ---
library_name: transformers
license: apache-2.0
base_model: KETI-AIR/ke-t5-base-ko
tags:
- generated_from_trainer
model-index:
- name: ToBeOrNotToBE
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ToBeOrNotToBE
This model is a fine-tuned version of [KETI-AIR/ke-t5-base-ko](https://huggingface.co/KETI-AIR/ke-t5-base-ko) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 39.4877
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 265 | 84.3180 |
| 127.3935 | 2.0 | 530 | 61.3429 |
| 127.3935 | 3.0 | 795 | 46.6663 |
| 56.6333 | 4.0 | 1060 | 41.3475 |
| 56.6333 | 5.0 | 1325 | 39.4877 |
### Framework versions
- Transformers 4.46.1
- Pytorch 2.5.0
- Datasets 3.0.2
- Tokenizers 0.20.1
|
mav23/vicuna-13b-v1.3-GGUF | mav23 | 2024-11-23T08:28:15Z | 19 | 1 | null | [
"gguf",
"arxiv:2302.13971",
"arxiv:2306.05685",
"region:us"
] | null | 2024-11-23T07:01:55Z | ---
inference: false
---
**NOTE: New version available**
Please check out a newer version of the weights [here](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md).
<br>
# Vicuna Model Card
## Model Details
Vicuna is a chat assistant trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT.
- **Developed by:** [LMSYS](https://lmsys.org/)
- **Model type:** An auto-regressive language model based on the transformer architecture.
- **License:** Non-commercial license
- **Finetuned from model:** [LLaMA](https://arxiv.org/abs/2302.13971).
### Model Sources
- **Repository:** https://github.com/lm-sys/FastChat
- **Blog:** https://lmsys.org/blog/2023-03-30-vicuna/
- **Paper:** https://arxiv.org/abs/2306.05685
- **Demo:** https://chat.lmsys.org/
## Uses
The primary use of Vicuna is research on large language models and chatbots.
The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence.
## How to Get Started with the Model
- Command line interface: https://github.com/lm-sys/FastChat#vicuna-weights.
- APIs (OpenAI API, Huggingface API): https://github.com/lm-sys/FastChat/tree/main#api.
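- Local GGUF inference: since this repo distributes GGUF conversions, a hedged llama-cpp-python sketch is shown below; the quantization filename is a placeholder for whichever file you actually downloaded.
```python
# Hedged sketch using llama-cpp-python to run a GGUF file from this repo.
from llama_cpp import Llama

llm = Llama(model_path="vicuna-13b-v1.3.Q4_K_M.gguf", n_ctx=2048)  # placeholder filename
out = llm(
    "USER: What is the capital of France?\nASSISTANT:",
    max_tokens=128,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```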
## Training Details
Vicuna v1.3 is fine-tuned from LLaMA with supervised instruction fine-tuning.
The training data is around 125K conversations collected from ShareGPT.com.
See more details in the "Training Details of Vicuna Models" section in the appendix of this [paper](https://arxiv.org/pdf/2306.05685.pdf).
## Evaluation
Vicuna is evaluated with standard benchmarks, human preference, and LLM-as-a-judge. See more details in this [paper](https://arxiv.org/pdf/2306.05685.pdf) and [leaderboard](https://huggingface.co/spaces/lmsys/chatbot-arena-leaderboard).
## Difference between different versions of Vicuna
See [vicuna_weights_version.md](https://github.com/lm-sys/FastChat/blob/main/docs/vicuna_weights_version.md) |
atlimited/llm-jp-3-3_7b-aio-retriever | atlimited | 2024-11-23T08:28:11Z | 74 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-11-23T08:24:01Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
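As a stopgap, here is a hedged loading sketch for this llm-jp-3 retriever fine-tune; the chat template and the intended retrieval prompt format are not documented here, so the Japanese prompt is purely illustrative.

```python
# Hedged loading sketch for this llm-jp-3-3.7b fine-tune.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "atlimited/llm-jp-3-3_7b-aio-retriever"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("質問: 日本の首都はどこですか?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```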
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
susmitabhatt/whisper-a-clp-ls | susmitabhatt | 2024-11-23T08:25:13Z | 10 | 0 | transformers | [
"transformers",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-11-23T07:26:11Z | ---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-a-clp-ls
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-a-clp-ls
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0240
- Wer: 10.0629
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0004
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 132
- num_epochs: 11
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| No log | 1.0 | 40 | 0.1713 | 46.9602 |
| No log | 2.0 | 80 | 0.0920 | 28.3019 |
| 1.2158 | 3.0 | 120 | 0.1828 | 31.2369 |
| 1.2158 | 4.0 | 160 | 0.2743 | 42.3480 |
| 0.1604 | 5.0 | 200 | 0.1326 | 62.8931 |
| 0.1604 | 6.0 | 240 | 0.0734 | 25.7862 |
| 0.1604 | 7.0 | 280 | 0.0510 | 15.7233 |
| 0.0502 | 8.0 | 320 | 0.0262 | 10.4822 |
| 0.0502 | 9.0 | 360 | 0.0320 | 11.9497 |
| 0.0202 | 10.0 | 400 | 0.0229 | 7.1279 |
| 0.0202 | 10.7342 | 429 | 0.0240 | 10.0629 |
### Framework versions
- Transformers 4.47.0.dev0
- Pytorch 2.4.0
- Datasets 3.0.1
- Tokenizers 0.20.0
|
chatpdflocal/Qwen2.5.1-Coder-14B-Instruct-GGUF | chatpdflocal | 2024-11-23T08:13:27Z | 55 | 2 | null | [
"gguf",
"Qwen2.5.1",
"Qwen2.5.1-Coder",
"Qwen2.5.1-14B-Instruct",
"GGUF",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-11-12T05:16:12Z | ---
license: apache-2.0
tags:
- Qwen2.5.1
- Qwen2.5.1-Coder
- Qwen2.5.1-14B-Instruct
- GGUF
---
# This is the Qwen2.5.1-Coder-14B-Instruct model in GGUF format, which can easily be run on PCs, mobile phones, or other devices with llama.cpp.
# If you are a Mac user, the following wonderful free AI tools can help you read and understand PDFs effectively:
- If you use Zotero to manage and read your personal PDFs, [PapersGPT](https://www.papersgpt.com) is a free plugin that helps you chat with PDFs effectively using your local SmolLM2-135M-Instruct.
- You can download the beautiful ChatPDFLocal macOS app directly from [here](https://www.chatpdflocal.com), load single or batched PDF files at will, and quickly experience the model through chat-based reading. P.S. Clicking [here](https://awa-ai.lemonsqueezy.com/buy/89be07f8-060d-4a8f-a758-f25352773168) to subscribe, or inviting friends in the ChatPDFLocal macOS app, earns credits for free use. |
touhidulislam/BERTweet_retrain_2021_05 | touhidulislam | 2024-11-23T08:06:46Z | 177 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"fill-mask",
"generated_from_trainer",
"base_model:vinai/bertweet-base",
"base_model:finetune:vinai/bertweet-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-11-23T08:06:26Z | ---
library_name: transformers
license: mit
base_model: vinai/bertweet-base
tags:
- generated_from_trainer
model-index:
- name: BERTweet_retrain_2021_05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERTweet_retrain_2021_05
This model is a fine-tuned version of [vinai/bertweet-base](https://huggingface.co/vinai/bertweet-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5689
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 2.8078 | 1.0 | 5959 | 2.6497 |
| 2.6178 | 2.0 | 11918 | 2.6055 |
| 2.6726 | 3.0 | 17877 | 2.5775 |
### Framework versions
- Transformers 4.45.1
- Pytorch 2.1.0+cu121
- Datasets 3.0.1
- Tokenizers 0.20.0
|
SAISON17/polyglot-ko-12.8b_qlora_v0.0 | SAISON17 | 2024-11-23T07:45:26Z | 74 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-11-23T07:36:20Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
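Given the repo's tags (4-bit, bitsandbytes), here is a hedged 4-bit loading sketch; if the repo holds only LoRA adapters rather than full weights, load them with `peft` instead. The Korean prompt is illustrative.

```python
# Hedged 4-bit loading sketch for this polyglot-ko QLoRA checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "SAISON17/polyglot-ko-12.8b_qlora_v0.0"
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb, device_map="auto"
)

inputs = tokenizer("안녕하세요,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```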
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |