---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- fastMRIBrainsMulticoil
thumbnail: null
tags:
- image-reconstruction
- LPDNet
- ATOMMIC
- pytorch
model-index:
- name: REC_LPDNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
results: []
---
## Model Overview
Learned Primal Dual Network (LPDNet) for 4x & 8x accelerated MRI Reconstruction on the fastMRIBrainsMulticoil dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install "atommic[all]"
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_LPDNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM/blob/main/REC_LPDNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM.atommic
mode: test
```
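Once ATOMMIC is installed, the snippet above can be dropped into one of the project's YAML configs and launched from the command line. As a minimal sketch, assuming the `atommic run -c` entry point from the ATOMMIC documentation and a hypothetical config file name:

```bash
# Minimal sketch: 'atommic run -c' is assumed from the ATOMMIC docs, and
# lpdnet_inference.yaml is a hypothetical copy of a config from
# projects/REC/fastMRIBrainsMulticoil/conf with the pretrained/checkpoint/mode
# keys above filled in.
atommic run -c lpdnet_inference.yaml
```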
### Usage
You need to download the fastMRI Brains dataset to effectively use this model. Check the [fastMRIBrainsMulticoil](https://github.com/wdika/atommic/blob/main/projects/REC/fastMRIBrainsMulticoil/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: LPDNet
num_primal: 5
num_dual: 5
num_iter: 5
primal_model_architecture: UNET
primal_in_channels: 2
primal_out_channels: 2
primal_unet_num_filters: 16
primal_unet_num_pool_layers: 2
primal_unet_dropout_probability: 0.0
primal_unet_padding_size: 11
primal_unet_normalize: true
dual_model_architecture: UNET
dual_in_channels: 2
dual_out_channels: 2
dual_unet_num_filters: 16
dual_unet_num_pool_layers: 2
dual_unet_dropout_probability: 0.0
dual_unet_padding_size: 11
dual_unet_normalize: true
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```yaml
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space, using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf/targets) configuration files.
Evaluation can be performed with the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, using `--evaluation_type per_slice`.
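As a minimal sketch of such an evaluation call (the positional paths below are placeholders, and the script's actual arguments should be checked with `--help`; only `--evaluation_type per_slice` is taken from this card):

```bash
# Placeholder paths; only --evaluation_type per_slice comes from this card.
python tools/evaluation/reconstruction.py \
    path/to/targets path/to/reconstructions \
    --evaluation_type per_slice
```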
### Results: evaluation against RSS targets

| Acceleration | MSE | NMSE | PSNR | SSIM |
|---|---|---|---|---|
| 4x | 0.000939 ± 0.004162 | 0.02527 ± 0.09819 | 32.6 ± 6.781 | 0.8815 ± 0.2009 |
| 8x | 0.001548 ± 0.00446 | 0.04132 ± 0.1069 | 29.51 ± 5.934 | 0.8401 ± 0.2084 |
## Limitations
This model was trained on the fastMRIBrainsMulticoil batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression (GDCC) to 1 coil; results may therefore differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Muckley MJ, Riemenschneider B, Radmanesh A, Kim S, Jeong G, Ko J, Jun Y, Shin H, Hwang D, Mostapha M, Arberet S, Nickel D, Ramzi Z, Ciuciu P, Starck JL, Teuwen J, Karkalousos D, Zhang C, Sriram A, Huang Z, Yakubova N, Lui YW, Knoll F. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction. IEEE Trans Med Imaging. 2021 Sep;40(9):2306-2317. doi: 10.1109/TMI.2021.3075856. Epub 2021 Aug 31. PMID: 33929957; PMCID: PMC8428775.

---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- fastMRIBrainsMulticoil
thumbnail: null
tags:
- image-reconstruction
- MoDL
- ATOMMIC
- pytorch
model-index:
- name: REC_MoDL_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
results: []
---
## Model Overview
MoDL: Model Based Deep Learning Architecture for Inverse Problems for 4x & 8x accelerated MRI Reconstruction on the fastMRIBrainsMulticoil dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install "atommic[all]"
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_MoDL_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM/blob/main/REC_MoDL_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the fastMRI Brains dataset to effectively use this model. Check the [fastMRIBrainsMulticoil](https://github.com/wdika/atommic/blob/main/projects/REC/fastMRIBrainsMulticoil/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: MoDL
unrolled_iterations: 5
residual_blocks: 5
channels: 64
regularization_factor: 0.1
penalization_weight: 1.0
conjugate_gradient_dc: false
conjugate_gradient_iterations: 1
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```yaml
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space, using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf/targets) configuration files.
Evaluation can be performed with the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, using `--evaluation_type per_slice`.
### Results: evaluation against RSS targets

| Acceleration | MSE | NMSE | PSNR | SSIM |
|---|---|---|---|---|
| 4x | 0.0009811 ± 0.003791 | 0.02496 ± 0.0693 | 31.44 ± 5.655 | 0.8703 ± 0.1877 |
| 8x | 0.002104 ± 0.004177 | 0.05376 ± 0.09522 | 27.81 ± 5.862 | 0.8133 ± 0.1925 |
## Limitations
This model was trained on the fastMRIBrainsMulticoil batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression (GDCC) to 1 coil; results may therefore differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Muckley MJ, Riemenschneider B, Radmanesh A, Kim S, Jeong G, Ko J, Jun Y, Shin H, Hwang D, Mostapha M, Arberet S, Nickel D, Ramzi Z, Ciuciu P, Starck JL, Teuwen J, Karkalousos D, Zhang C, Sriram A, Huang Z, Yakubova N, Lui YW, Knoll F. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction. IEEE Trans Med Imaging. 2021 Sep;40(9):2306-2317. doi: 10.1109/TMI.2021.3075856. Epub 2021 Aug 31. PMID: 33929957; PMCID: PMC8428775.

---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- fastMRIBrainsMulticoil
thumbnail: null
tags:
- image-reconstruction
- RIM
- ATOMMIC
- pytorch
model-index:
- name: REC_RIM_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
results: []
---
## Model Overview
Recurrent Inference Machines (RIM) for 4x & 8x accelerated MRI Reconstruction on the fastMRIBrainsMulticoil dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install "atommic[all]"
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_RIM_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM/blob/main/REC_RIM_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the fastMRI Brains dataset to effectively use this model. Check the [fastMRIBrainsMulticoil](https://github.com/wdika/atommic/blob/main/projects/REC/fastMRIBrainsMulticoil/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: CIRIM
recurrent_layer: GRU
conv_filters:
- 64
- 64
- 2
conv_kernels:
- 5
- 3
- 3
conv_dilations:
- 1
- 2
- 1
conv_bias:
- true
- true
- false
recurrent_filters:
- 64
- 64
- 0
recurrent_kernels:
- 1
- 1
- 0
recurrent_dilations:
- 1
- 1
- 0
recurrent_bias:
- true
- true
- false
depth: 2
time_steps: 8
conv_dim: 2
num_cascades: 1
no_dc: true
keep_prediction: true
accumulate_predictions: true
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```yaml
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space, using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf/targets) configuration files.
Evaluation can be performed with the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, using `--evaluation_type per_slice`.
### Results: evaluation against RSS targets

| Acceleration | MSE | NMSE | PSNR | SSIM |
|---|---|---|---|---|
| 4x | 0.0007147 ± 0.00289 | 0.01907 ± 0.06354 | 33.24 ± 6.153 | 0.8847 ± 0.19 |
| 8x | 0.001466 ± 0.003407 | 0.03833 ± 0.0846 | 29.45 ± 5.578 | 0.8382 ± 0.199 |
## Limitations
This model was trained on the fastMRIBrainsMulticoil batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression (GDCC) to 1 coil; results may therefore differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Muckley MJ, Riemenschneider B, Radmanesh A, Kim S, Jeong G, Ko J, Jun Y, Shin H, Hwang D, Mostapha M, Arberet S, Nickel D, Ramzi Z, Ciuciu P, Starck JL, Teuwen J, Karkalousos D, Zhang C, Sriram A, Huang Z, Yakubova N, Lui YW, Knoll F. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction. IEEE Trans Med Imaging. 2021 Sep;40(9):2306-2317. doi: 10.1109/TMI.2021.3075856. Epub 2021 Aug 31. PMID: 33929957; PMCID: PMC8428775.

---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- fastMRIBrainsMulticoil
thumbnail: null
tags:
- image-reconstruction
- VarNet
- ATOMMIC
- pytorch
model-index:
- name: REC_VarNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
results: []
---
## Model Overview
Variational Network (VarNet) for 4x & 8x accelerated MRI Reconstruction on the fastMRIBrainsMulticoil dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install "atommic[all]"
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_VarNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM/blob/main/REC_VarNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the fastMRI Brains dataset to effectively use this model. Check the [fastMRIBrainsMulticoil](https://github.com/wdika/atommic/blob/main/projects/REC/fastMRIBrainsMulticoil/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: VN
num_cascades: 8
channels: 18
pooling_layers: 4
padding_size: 11
normalize: true
no_dc: false
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```yaml
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space, using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf/targets) configuration files.
Evaluation can be performed with the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, using `--evaluation_type per_slice`.
### Results: evaluation against RSS targets

| Acceleration | MSE | NMSE | PSNR | SSIM |
|---|---|---|---|---|
| 4x | 0.000647 ± 0.003424 | 0.01882 ± 0.08376 | 34 ± 6.302 | 0.8925 ± 0.1981 |
| 8x | 0.00121 ± 0.004349 | 0.03456 ± 0.1321 | 30.73 ± 5.936 | 0.8561 ± 0.2161 |
## Limitations
This model was trained on the fastMRIBrainsMulticoil batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression (GDCC) to 1 coil; results may therefore differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Muckley MJ, Riemenschneider B, Radmanesh A, Kim S, Jeong G, Ko J, Jun Y, Shin H, Hwang D, Mostapha M, Arberet S, Nickel D, Ramzi Z, Ciuciu P, Starck JL, Teuwen J, Karkalousos D, Zhang C, Sriram A, Huang Z, Yakubova N, Lui YW, Knoll F. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction. IEEE Trans Med Imaging. 2021 Sep;40(9):2306-2317. doi: 10.1109/TMI.2021.3075856. Epub 2021 Aug 31. PMID: 33929957; PMCID: PMC8428775.

---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- StanfordKnees2019
thumbnail: null
tags:
- image-reconstruction
- CIRIM
- ATOMMIC
- pytorch
model-index:
- name: REC_CIRIM_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
results: []
---
## Model Overview
Cascades of Independently Recurrent Inference Machines (CIRIM) for 12x accelerated MRI Reconstruction on the StanfordKnees2019 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install "atommic[all]"
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_CIRIM_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM/blob/main/REC_CIRIM_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM.atommic
mode: test
```
### Usage
You need to download the Stanford Knees 2019 dataset to effectively use this model. Check the [StanfordKnees2019](https://github.com/wdika/atommic/blob/main/projects/REC/StanfordKnees2019/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: CIRIM
recurrent_layer: IndRNN
conv_filters:
- 64
- 64
- 2
conv_kernels:
- 5
- 3
- 3
conv_dilations:
- 1
- 2
- 1
conv_bias:
- true
- true
- false
recurrent_filters:
- 64
- 64
- 0
recurrent_kernels:
- 1
- 1
- 0
recurrent_dilations:
- 1
- 1
- 0
recurrent_bias:
- true
- true
- false
depth: 2
time_steps: 8
conv_dim: 2
num_cascades: 5
no_dc: true
keep_prediction: true
accumulate_predictions: true
dimensionality: 2
reconstruction_loss:
wasserstein: 1.0
```
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space, using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf/targets) configuration files.
Evaluation can be performed with the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, using `--evaluation_type per_slice`.
### Results: evaluation against SENSE targets

| Acceleration | MSE | NMSE | PSNR | SSIM |
|---|---|---|---|---|
| 12x | 0.001081 ± 0.005786 | 0.03494 ± 0.09865 | 32.77 ± 7.234 | 0.7955 ± 0.311 |
## Limitations
This model was trained on the StanfordKnees2019 batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression to 1 coil; results may therefore differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Epperson K, Rt R, Sawyer AM, et al. Creation of Fully Sampled MR Data Repository for Compressed SENSEing of the Knee. SMRT Conference 2013;2013:1

---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- StanfordKnees2019
thumbnail: null
tags:
- image-reconstruction
- VarNet
- ATOMMIC
- pytorch
model-index:
- name: REC_VarNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
results: []
---
## Model Overview
Variational Network (VarNet) for 12x accelerated MRI Reconstruction on the StanfordKnees2019 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install "atommic[all]"
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_VarNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM/blob/main/REC_VarNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM.atommic
mode: test
```
### Usage
You need to download the Stanford Knees 2019 dataset to effectively use this model. Check the [StanfordKnees2019](https://github.com/wdika/atommic/blob/main/projects/REC/StanfordKnees2019/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: VN
num_cascades: 8
channels: 18
pooling_layers: 4
padding_size: 11
normalize: true
no_dc: false
dimensionality: 2
reconstruction_loss:
wasserstein: 1.0
```
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space, using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf/targets) configuration files.
Evaluation can be performed with the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, using `--evaluation_type per_slice`.
### Results: evaluation against SENSE targets

| Acceleration | MSE | NMSE | PSNR | SSIM |
|---|---|---|---|---|
| 12x | 0.001261 ± 0.005865 | 0.04287 ± 0.101 | 31.5 ± 6.696 | 0.7635 ± 0.3022 |
## Limitations
This model was trained on the StanfordKnees2019 batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression to 1 coil; results may therefore differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Epperson K, Rt R, Sawyer AM, et al. Creation of Fully Sampled MR Data Repository for Compressed SENSEing of the Knee. SMRT Conference 2013;2013:1

---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- StanfordKnees2019
thumbnail: null
tags:
- image-reconstruction
- XPDNet
- ATOMMIC
- pytorch
model-index:
- name: REC_XPDNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
results: []
---
## Model Overview
XPDNet for 12x accelerated MRI Reconstruction on the StanfordKnees2019 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install "atommic[all]"
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_XPDNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM/blob/main/REC_XPDNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM.atommic
mode: test
```
### Usage
You need to download the Stanford Knees 2019 dataset to effectively use this model. Check the [StanfordKnees2019](https://github.com/wdika/atommic/blob/main/projects/REC/StanfordKnees2019/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: XPDNet
num_primal: 5
num_dual: 1
num_iter: 10
use_primal_only: true
kspace_model_architecture: CONV
kspace_in_channels: 2
kspace_out_channels: 2
dual_conv_hidden_channels: 16
dual_conv_num_dubs: 2
dual_conv_batchnorm: false
image_model_architecture: MWCNN
imspace_in_channels: 2
imspace_out_channels: 2
mwcnn_hidden_channels: 16
mwcnn_num_scales: 0
mwcnn_bias: true
mwcnn_batchnorm: false
normalize_image: true
dimensionality: 2
reconstruction_loss:
wasserstein: 1.0
```
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space, using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf/targets) configuration files.
Evaluation can be performed with the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, using `--evaluation_type per_slice`.
### Results: evaluation against SENSE targets

| Acceleration | MSE | NMSE | PSNR | SSIM |
|---|---|---|---|---|
| 12x | 0.002691 ± 0.008089 | 0.1117 ± 0.1955 | 27.18 ± 5.768 | 0.6544 ± 0.2702 |
## Limitations
This model was trained on the StanfordKnees2019 batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression to 1 coil; results may therefore differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Epperson K, Rt R, Sawyer AM, et al. Creation of Fully Sampled MR Data Repository for Compressed SENSEing of the Knee. SMRT Conference 2013;2013:1

---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- BraTS2023AdultGlioma
thumbnail: null
tags:
- image-segmentation
- DynUNet
- ATOMMIC
- pytorch
model-index:
- name: SEG_DynUNet_BraTS2023AdultGlioma
results: []
---
## Model Overview
DynUNet for MRI Segmentation on the BraTS2023AdultGlioma dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install "atommic[all]"
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/BraTS2023AdultGlioma/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_DynUNet_BraTS2023AdultGlioma/blob/main/SEG_DynUNet_BraTS2023AdultGlioma.atommic
mode: test
```
### Usage
You need to download the BraTS 2023 Adult Glioma dataset to effectively use this model. Check the [BraTS2023AdultGlioma](https://github.com/wdika/atommic/blob/main/projects/SEG/BraTS2023AdultGlioma/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: SEGMENTATIONDYNUNET
segmentation_module: DYNUNet
segmentation_module_input_channels: 4
segmentation_module_output_channels: 4
segmentation_module_channels:
- 32
- 64
- 128
- 256
- 512
segmentation_module_kernel_size:
- 3
- 3
- 3
- 3
- 1
segmentation_module_strides:
- 1
- 1
- 1
- 1
- 1
segmentation_module_dropout: 0.0
segmentation_module_norm: instance
segmentation_module_activation: leakyrelu
segmentation_module_deep_supervision: true
segmentation_module_deep_supervision_levels: 2
segmentation_module_normalize: false
segmentation_module_norm_groups: 2
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [ 0.5, 0.5, 0.5, 0.5 ]
segmentation_activation: sigmoid
magnitude_input: true
log_multiple_modalities: true # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
normalization_type: minmax
normalize_segmentation_output: true
complex_data: false
```
## Training
```yaml
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.98
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 10
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
Evaluation can be performed with the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script, using `--evaluation_type per_slice`.
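As a minimal sketch of such an evaluation call (the positional paths below are placeholders, and the script's actual arguments should be checked with `--help`; only `--evaluation_type per_slice` is taken from this card):

```bash
# Placeholder paths; only --evaluation_type per_slice comes from this card.
python tools/evaluation/segmentation.py \
    path/to/ground_truth_segmentations path/to/predicted_segmentations \
    --evaluation_type per_slice
```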
### Results

| DICE | F1 | HD95 | IOU |
|---|---|---|---|
| 0.8061 ± 0.276 | 0.1045 ± 0.5801 | 5.119 ± 5.411 | 0.06959 ± 0.4187 |
## Limitations
This model was trained on the BraTS2023AdultGlioma dataset with stacked T1c, T1n, T2f, T2w images and might differ in performance compared to the leaderboard results.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Kazerooni AF, Khalili N, Liu X, et al. The Brain Tumor Segmentation (BraTS) Challenge 2023: Focus on Pediatrics (CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs). 2023

---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- BraTS2023AdultGlioma
thumbnail: null
tags:
- image-segmentation
- UNet3D
- ATOMMIC
- pytorch
model-index:
- name: SEG_UNet3D_BraTS2023AdultGlioma
results: []
---
## Model Overview
3D UNet for MRI Segmentation on the BraTS2023AdultGlioma dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install "atommic[all]"
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/BraTS2023AdultGlioma/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_UNet3D_BraTS2023AdultGlioma/blob/main/SEG_UNet3D_BraTS2023AdultGlioma.atommic
mode: test
```
### Usage
You need to download the BraTS 2023 Adult Glioma dataset to effectively use this model. Check the [BraTS2023AdultGlioma](https://github.com/wdika/atommic/blob/main/projects/SEG/BraTS2023AdultGlioma/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: SEGMENTATION3DUNET
segmentation_module: UNet
segmentation_module_input_channels: 4
segmentation_module_output_channels: 4
segmentation_module_channels: 32
segmentation_module_pooling_layers: 5
segmentation_module_dropout: 0.0
segmentation_module_normalize: false
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [ 0.5, 0.5, 0.5, 0.5 ]
segmentation_activation: sigmoid
magnitude_input: true
log_multiple_modalities: true # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
normalization_type: minmax
normalize_segmentation_output: true
complex_data: false
```
## Training
```yaml
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.98
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 10
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
Evaluation can be performed with the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script, using `--evaluation_type per_slice`.
### Results

| DICE | F1 | HD95 | IOU |
|---|---|---|---|
| 0.9359 ± 0.1334 | 0.6735 ± 0.782 | 3.55 ± 2.162 | 0.5279 ± 0.6518 |
## Limitations
This model was trained on the BraTS2023AdultGlioma dataset with stacked T1c, T1n, T2f, T2w images and might differ in performance compared to the leaderboard results.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Kazerooni AF, Khalili N, Liu X, et al. The Brain Tumor Segmentation (BraTS) Challenge 2023: Focus on Pediatrics (CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs). 2023

---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- fastMRIBrainsMulticoil
thumbnail: null
tags:
- image-reconstruction
- UNet
- ATOMMIC
- pytorch
model-index:
- name: REC_UNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM
results: []
---
## Model Overview
UNet for 4x & 8x accelerated MRI Reconstruction on the fastMRIBrainsMulticoil dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install "atommic[all]"
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_UNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM/blob/main/REC_UNet_fastMRIBrainsMulticoil_equispaced_4x_8x_GDCC_1_coil_NNEstimationCSM.atommic
mode: test
```
### Usage
You need to download the fastMRI Brains dataset to effectively use this model. Check the [fastMRIBrainsMulticoil](https://github.com/wdika/atommic/blob/main/projects/REC/fastMRIBrainsMulticoil/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: UNet
channels: 64
pooling_layers: 4
in_channels: 2
out_channels: 2
padding_size: 11
dropout: 0.0
normalize: true
norm_groups: 2
dimensionality: 2
reconstruction_loss:
l1: 0.1
ssim: 0.9
estimate_coil_sensitivity_maps_with_nn: true
```
## Training
```yaml
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space, using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/fastMRIBrainsMulticoil/conf/targets) configuration files.
Evaluation can be performed with the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, using `--evaluation_type per_slice`.
### Results: evaluation against RSS targets

| Acceleration | MSE | NMSE | PSNR | SSIM |
|---|---|---|---|---|
| 4x | 0.000723 ± 0.003086 | 0.01924 ± 0.0629 | 33.09 ± 6.023 | 0.8853 ± 0.1817 |
| 8x | 0.001353 ± 0.00366 | 0.03587 ± 0.08282 | 29.87 ± 5.676 | 0.847 ± 0.1972 |
## Limitations
This model was trained on the fastMRIBrainsMulticoil batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression (GDCC) to 1 coil; results may therefore differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Muckley MJ, Riemenschneider B, Radmanesh A, Kim S, Jeong G, Ko J, Jun Y, Shin H, Hwang D, Mostapha M, Arberet S, Nickel D, Ramzi Z, Ciuciu P, Starck JL, Teuwen J, Karkalousos D, Zhang C, Sriram A, Huang Z, Yakubova N, Lui YW, Knoll F. Results of the 2020 fastMRI Challenge for Machine Learning MR Image Reconstruction. IEEE Trans Med Imaging. 2021 Sep;40(9):2306-2317. doi: 10.1109/TMI.2021.3075856. Epub 2021 Aug 31. PMID: 33929957; PMCID: PMC8428775.

---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- StanfordKnees2019
thumbnail: null
tags:
- image-reconstruction
- UNet
- ATOMMIC
- pytorch
model-index:
- name: REC_UNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM
results: []
---
## Model Overview
UNet for 12x accelerated MRI Reconstruction on the StanfordKnees2019 dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install "atommic[all]"
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/REC_UNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM/blob/main/REC_UNet_StanfordKnees2019_gaussian2d_12x_AutoEstimationCSM.atommic
mode: test
```
### Usage
You need to download the Stanford Knees 2019 dataset to effectively use this model. Check the [StanfordKnees2019](https://github.com/wdika/atommic/blob/main/projects/REC/StanfordKnees2019/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: UNet
channels: 64
pooling_layers: 4
in_channels: 2
out_channels: 2
padding_size: 11
dropout: 0.0
normalize: true
norm_groups: 2
dimensionality: 2
reconstruction_loss:
wasserstein: 1.0
```
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
To compute the targets from the raw k-space, using the chosen coil combination and coil sensitivity maps estimation methods, you can use the [targets](https://github.com/wdika/atommic/tree/main/projects/REC/StanfordKnees2019/conf/targets) configuration files.
Evaluation can be performed with the [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/reconstruction.py) script for the reconstruction task, using `--evaluation_type per_slice`.
### Results: evaluation against SENSE targets

| Acceleration | MSE | NMSE | PSNR | SSIM |
|---|---|---|---|---|
| 12x | 0.001251 ± 0.005686 | 0.04254 ± 0.09148 | 31.4 ± 6.554 | 0.7705 ± 0.2946 |
## Limitations
This model was trained on the StanfordKnees2019 batch0 dataset using UNet-based coil sensitivity maps estimation and Geometric Decomposition Coil Compression to 1 coil; results may therefore differ from those reported on the challenge leaderboard.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Epperson K, Rt R, Sawyer AM, et al. Creation of Fully Sampled MR Data Repository for Compressed SENSEing of the Knee. SMRT Conference 2013;2013:1

---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- BraTS2023AdultGlioma
thumbnail: null
tags:
- image-segmentation
- UNet
- ATOMMIC
- pytorch
model-index:
- name: SEG_UNet_BraTS2023AdultGlioma
results: []
---
## Model Overview
UNet for MRI Segmentation on the BraTS2023AdultGlioma dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install "atommic[all]"
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/BraTS2023AdultGlioma/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_UNet_BraTS2023AdultGlioma/blob/main/SEG_UNet_BraTS2023AdultGlioma.atommic
mode: test
```
### Usage
You need to download the BraTS 2023 Adult Glioma dataset to effectively use this model. Check the [BraTS2023AdultGlioma](https://github.com/wdika/atommic/blob/main/projects/SEG/BraTS2023AdultGlioma/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: SEGMENTATIONUNET
segmentation_module: UNet
segmentation_module_input_channels: 4
segmentation_module_output_channels: 4
segmentation_module_channels: 32
segmentation_module_pooling_layers: 5
segmentation_module_dropout: 0.0
segmentation_module_normalize: false
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [ 0.5, 0.5, 0.5, 0.5 ]
segmentation_activation: sigmoid
magnitude_input: true
log_multiple_modalities: true # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
normalization_type: minmax
normalize_segmentation_output: true
complex_data: false
```
## Training
```yaml
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.98
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 10
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
Evaluation can be performed with the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script, using `--evaluation_type per_slice`.
### Results

| DICE | F1 | HD95 | IOU |
|---|---|---|---|
| 0.9372 ± 0.1175 | 0.6713 ± 0.7867 | 3.504 ± 2.089 | 0.5346 ± 0.6628 |
## Limitations
This model was trained on the BraTS2023AdultGlioma dataset with stacked T1c, T1n, T2f, T2w images and might differ in performance compared to the leaderboard results.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Kazerooni AF, Khalili N, Liu X, et al. The Brain Tumor Segmentation (BraTS) Challenge 2023: Focus on Pediatrics (CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs). 2023

---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- BraTS2023AdultGlioma
thumbnail: null
tags:
- image-segmentation
- VNet
- ATOMMIC
- pytorch
model-index:
- name: SEG_VNet_BraTS2023AdultGlioma
results: []
---
## Model Overview
VNet for MRI Segmentation on the BraTS2023AdultGlioma dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install "atommic[all]"
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/BraTS2023AdultGlioma/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_VNet_BraTS2023AdultGlioma/blob/main/SEG_VNet_BraTS2023AdultGlioma.atommic
mode: test
```
### Usage
You need to download the BraTS 2023 Adult Glioma dataset to effectively use this model. Check the [BraTS2023AdultGlioma](https://github.com/wdika/atommic/blob/main/projects/SEG/BraTS2023AdultGlioma/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: SEGMENTATIONVNET
segmentation_module: VNet
segmentation_module_input_channels: 4
segmentation_module_output_channels: 4
segmentation_module_activation: elu
segmentation_module_dropout: 0.0
segmentation_module_bias: False
segmentation_module_padding_size: 15
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [ 0.5, 0.5, 0.5, 0.5 ]
segmentation_activation: sigmoid
magnitude_input: true
log_multiple_modalities: true # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
normalization_type: minmax
normalize_segmentation_output: true
complex_data: false
```
## Training
```yaml
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.98
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 10
precision: 16-mixed
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
Evaluation can be performed with the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script, using `--evaluation_type per_slice`.
### Results

| DICE | F1 | HD95 | IOU |
|---|---|---|---|
| 0.7331 ± 0.4374 | 0.01428 ± 0.2341 | 6.01 ± 6.097 | 0.0001576 ± 0.004287 |
## Limitations
This model was trained on the BraTS2023AdultGlioma dataset with stacked T1c, T1n, T2f, T2w images and might differ in performance compared to the leaderboard results.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Kazerooni AF, Khalili N, Liu X, et al. The Brain Tumor Segmentation (BraTS) Challenge 2023: Focus on Pediatrics (CBTN-CONNECT-DIPGR-ASNR-MICCAI BraTS-PEDs). 2023

---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- ISLES2022SubAcuteStroke
thumbnail: null
tags:
- image-segmentation
- UNet
- ATOMMIC
- pytorch
model-index:
- name: SEG_UNet_ISLES2022SubAcuteStroke
results: []
---
## Model Overview
UNet for MRI Segmentation on the ISLES2022SubAcuteStroke dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend installing it after you have installed the latest PyTorch version.
```bash
pip install "atommic[all]"
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/ISLES2022SubAcuteStroke/conf).
### Automatically instantiate the model
```yaml
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_UNet_ISLES2022SubAcuteStroke/blob/main/SEG_UNet_ISLES2022SubAcuteStroke.atommic
mode: test
```
### Usage
You need to download the ISLES 2022 Sub Acute Stroke dataset to effectively use this model. Check the [ISLES2022SubAcuteStroke](https://github.com/wdika/atommic/blob/main/projects/SEG/ISLES2022SubAcuteStroke/README.md) page for more information.
## Model Architecture
```yaml
model:
model_name: SEGMENTATIONUNET
segmentation_module: UNet
segmentation_module_input_channels: 3
segmentation_module_output_channels: 1
segmentation_module_channels: 32
segmentation_module_pooling_layers: 5
segmentation_module_dropout: 0.0
segmentation_module_normalize: false
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [ 0.5 ]
segmentation_activation: sigmoid
magnitude_input: true
log_multiple_modalities: true # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
normalization_type: minmax
normalize_segmentation_output: true
complex_data: false
```
## Training
```yaml
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 50
precision: 16-mixed # '16-mixed', 'bf16-mixed', '32-true', '64-true', '64', '32', '16', 'bf16'
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
Evaluation can be performed with the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script, using `--evaluation_type per_slice`.
### Results

| ALD | AVD | DICE | L-F1 |
|---|---|---|---|
| 0.9088 ± 3.953 | 0.5439 ± 3.921 | 0.6946 ± 0.5589 | 0.7859 ± 0.5848 |
## Limitations
This model was trained on the ISLES2022SubAcuteStroke dataset with stacked ADC, DWI, FLAIR images and might differ in performance compared to the leaderboard results.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Petzsche MRH, de la Rosa E, Hanning U, et al. ISLES 2022: A multi-center magnetic resonance imaging stroke lesion segmentation dataset. Scientific Data 2022;9 |
wdika/SEG_VNet_ISLES2022SubAcuteStroke | wdika | 2024-03-06T10:42:03Z | 0 | 0 | atommic | [
"atommic",
"image-segmentation",
"VNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:ISLES2022SubAcuteStroke",
"license:apache-2.0",
"region:us"
] | image-segmentation | 2024-03-05T17:59:41Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- ISLES2022SubAcuteStroke
thumbnail: null
tags:
- image-segmentation
- VNet
- ATOMMIC
- pytorch
model-index:
- name: SEG_VNet_ISLES2022SubAcuteStroke
results: []
---
## Model Overview
VNet for MRI Segmentation on the ISLES2022SubAcuteStroke dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/ISLES2022SubAcuteStroke/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_VNet_ISLES2022SubAcuteStroke/blob/main/SEG_VNet_ISLES2022SubAcuteStroke.atommic
mode: test
```
### Usage
You need to download the ISLES 2022 Sub Acute Stroke dataset to effectively use this model. Check the [ISLES2022SubAcuteStroke](https://github.com/wdika/atommic/blob/main/projects/SEG/ISLES2022SubAcuteStroke/README.md) page for more information.
## Model Architecture
```base
model:
model_name: SEGMENTATIONVNET
segmentation_module: VNet
segmentation_module_input_channels: 3
segmentation_module_output_channels: 1
segmentation_module_activation: elu
segmentation_module_dropout: 0.0
segmentation_module_bias: False
segmentation_module_padding_size: 15
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [ 0.5 ]
segmentation_activation: sigmoid
magnitude_input: true
log_multiple_modalities: true # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
normalization_type: minmax
normalize_segmentation_output: true
complex_data: false
```
## Training
```base
optim:
name: adamw
lr: 1e-4
betas:
- 0.9
- 0.999
weight_decay: 0.0
sched:
name: CosineAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 50
precision: 16-mixed # '16-mixed', 'bf16-mixed', '32-true', '64-true', '64', '32', '16', 'bf16'
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
Evaluation can be performed using the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script for the segmentation task, with `--evaluation_type per_slice`.
Results
-------
Evaluation
----------
| ALD | AVD | DICE | L-F1 |
|:---:|:---:|:----:|:----:|
| 2.281 +/- 10.72 | 3.257 +/- 27.43 | 0.4903 +/- 0.694 | 0.5998 +/- 0.6866 |
## Limitations
This model was trained on the ISLES2022SubAcuteStroke dataset with stacked ADC, DWI, and FLAIR images, so its performance might differ from the leaderboard results.
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Petzsche MRH, de la Rosa E, Hanning U, et al. ISLES 2022: A multi-center magnetic resonance imaging stroke lesion segmentation dataset. Scientific Data 2022;9 |
wdika/SEG_DynUNet_SKMTEA | wdika | 2024-03-06T10:41:53Z | 0 | 0 | atommic | [
"atommic",
"image-segmentation",
"DynUNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:SKMTEA",
"license:apache-2.0",
"region:us"
] | image-segmentation | 2024-03-05T18:00:27Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- SKMTEA
thumbnail: null
tags:
- image-segmentation
- DynUNet
- ATOMMIC
- pytorch
model-index:
- name: SEG_DynUNet_SKMTEA
results: []
---
## Model Overview
DynUNet for MRI Segmentation on the SKMTEA dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/SKMTEA/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_DynUNet_SKMTEA/blob/main/SEG_DynUNet_SKMTEA.atommic
mode: test
```
### Usage
You need to download the SKM-TEA dataset to effectively use this model. Check the [SKMTEA](https://github.com/wdika/atommic/blob/main/projects/SEG/SKMTEA/README.md) page for more information.
## Model Architecture
```base
model:
model_name: SEGMENTATIONDYNUNET
segmentation_module: DYNUNet
segmentation_module_input_channels: 1
segmentation_module_output_channels: 4
segmentation_module_channels:
- 32
- 64
- 128
- 256
- 512
segmentation_module_kernel_size:
- 3
- 3
- 3
- 3
- 1
segmentation_module_strides:
- 1
- 1
- 1
- 1
- 1
segmentation_module_dropout: 0.0
segmentation_module_norm: instance
segmentation_module_activation: leakyrelu
segmentation_module_deep_supervision: true
segmentation_module_deep_supervision_levels: 2
segmentation_module_normalize: false
segmentation_module_norm_groups: 2
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [0.5, 0.5, 0.5, 0.5]
segmentation_activation: sigmoid
magnitude_input: true
log_multiple_modalities: false # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
normalization_type: minmax
normalize_segmentation_output: true
complex_data: false
```
## Training
```base
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.98
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed # '16-mixed', 'bf16-mixed', '32-true', '64-true', '64', '32', '16', 'bf16'
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
Evaluation can be performed using the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script for the segmentation task, with `--evaluation_type per_slice`.
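A hypothetical invocation might look as follows (paths are placeholders; consult the script's help output for the exact arguments):
```
python tools/evaluation/segmentation.py --evaluation_type per_slice <predictions_folder> <targets_folder>
```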
Results
-------
Evaluation
----------
| DICE | F1 | HD95 | IOU |
|:----:|:--:|:----:|:---:|
| 0.6888 +/- 0.1359 | 0.05911 +/- 0.2638 | 8.973 +/- 4.507 | 0.01517 +/- 0.06638 |
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Desai AD, Schmidt AM, Rubin EB, et al. SKM-TEA: A Dataset for Accelerated MRI Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation. 2022 |
wdika/SEG_UNet3D_SKMTEA | wdika | 2024-03-06T10:41:37Z | 0 | 0 | atommic | [
"atommic",
"image-segmentation",
"UNet3D",
"ATOMMIC",
"pytorch",
"en",
"dataset:SKMTEA",
"license:apache-2.0",
"region:us"
] | image-segmentation | 2024-03-05T18:01:04Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- SKMTEA
thumbnail: null
tags:
- image-segmentation
- UNet3D
- ATOMMIC
- pytorch
model-index:
- name: SEG_UNet3D_SKMTEA
results: []
---
## Model Overview
3D UNet for MRI Segmentation on the SKMTEA dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/SKMTEA/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_UNet3D_SKMTEA/blob/main/SEG_UNet3D_SKMTEA.atommic
mode: test
```
### Usage
You need to download the SKM-TEA dataset to effectively use this model. Check the [SKMTEA](https://github.com/wdika/atommic/blob/main/projects/SEG/SKMTEA/README.md) page for more information.
## Model Architecture
```base
model:
model_name: SEGMENTATION3DUNET
segmentation_module: UNet
segmentation_module_input_channels: 1
segmentation_module_output_channels: 4
segmentation_module_channels: 32
segmentation_module_pooling_layers: 5
segmentation_module_dropout: 0.0
segmentation_module_normalize: false
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [0.5, 0.5, 0.5, 0.5]
segmentation_activation: sigmoid
magnitude_input: true
log_multiple_modalities: false # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
normalization_type: minmax
normalize_segmentation_output: true
complex_data: false
```
## Training
```base
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.98
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed # '16-mixed', 'bf16-mixed', '32-true', '64-true', '64', '32', '16', 'bf16'
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
Evaluation can be performed using the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script for the segmentation task, with `--evaluation_type per_slice`.
Results
-------
Evaluation
----------
| DICE | F1 | HD95 | IOU |
|:----:|:--:|:----:|:---:|
| 0.9175 +/- 0.06793 | 0.7889 +/- 0.404 | 5.893 +/- 2.995 | 0.5301 +/- 0.347 |
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Desai AD, Schmidt AM, Rubin EB, et al. SKM-TEA: A Dataset for Accelerated MRI Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation. 2022 |
wdika/SEG_UNet_SKMTEA | wdika | 2024-03-06T10:40:44Z | 0 | 0 | atommic | [
"atommic",
"image-segmentation",
"UNet",
"ATOMMIC",
"pytorch",
"en",
"dataset:SKMTEA",
"license:apache-2.0",
"region:us"
] | image-segmentation | 2024-03-05T18:00:43Z | ---
language:
- en
license: apache-2.0
library_name: atommic
datasets:
- SKMTEA
thumbnail: null
tags:
- image-segmentation
- UNet
- ATOMMIC
- pytorch
model-index:
- name: SEG_UNet_SKMTEA
results: []
---
## Model Overview
UNet for MRI Segmentation on the SKMTEA dataset.
## ATOMMIC: Training
To train, fine-tune, or test the model you will need to install [ATOMMIC](https://github.com/wdika/atommic). We recommend you install it after you've installed the latest PyTorch version.
```
pip install atommic['all']
```
## How to Use this Model
The model is available for use in ATOMMIC, and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
Corresponding configuration YAML files can be found [here](https://github.com/wdika/atommic/tree/main/projects/SEG/SKMTEA/conf).
### Automatically instantiate the model
```base
pretrained: true
checkpoint: https://huggingface.co/wdika/SEG_UNet_SKMTEA/blob/main/SEG_UNet_SKMTEA.atommic
mode: test
```
### Usage
You need to download the SKM-TEA dataset to effectively use this model. Check the [SKMTEA](https://github.com/wdika/atommic/blob/main/projects/SEG/SKMTEA/README.md) page for more information.
## Model Architecture
```base
model:
model_name: SEGMENTATIONUNET
segmentation_module: UNet
segmentation_module_input_channels: 1
segmentation_module_output_channels: 4
segmentation_module_channels: 32
segmentation_module_pooling_layers: 5
segmentation_module_dropout: 0.0
segmentation_module_normalize: false
segmentation_loss:
dice: 1.0
dice_loss_include_background: true # always set to true if the background is removed
dice_loss_to_onehot_y: false
dice_loss_sigmoid: false
dice_loss_softmax: false
dice_loss_other_act: none
dice_loss_squared_pred: false
dice_loss_jaccard: false
dice_loss_flatten: false
dice_loss_reduction: mean_batch
dice_loss_smooth_nr: 1e-5
dice_loss_smooth_dr: 1e-5
dice_loss_batch: true
dice_metric_include_background: true # always set to true if the background is removed
dice_metric_to_onehot_y: false
dice_metric_sigmoid: false
dice_metric_softmax: false
dice_metric_other_act: none
dice_metric_squared_pred: false
dice_metric_jaccard: false
dice_metric_flatten: false
dice_metric_reduction: mean_batch
dice_metric_smooth_nr: 1e-5
dice_metric_smooth_dr: 1e-5
dice_metric_batch: true
segmentation_classes_thresholds: [0.5, 0.5, 0.5, 0.5]
segmentation_activation: sigmoid
magnitude_input: true
log_multiple_modalities: false # log all modalities in the same image, e.g. T1, T2, T1ce, FLAIR will be concatenated
normalization_type: minmax
normalize_segmentation_output: true
complex_data: false
```
## Training
```base
optim:
name: adam
lr: 1e-4
betas:
- 0.9
- 0.98
weight_decay: 0.0
sched:
name: InverseSquareRootAnnealing
min_lr: 0.0
last_epoch: -1
warmup_ratio: 0.1
trainer:
strategy: ddp_find_unused_parameters_false
accelerator: gpu
devices: 1
num_nodes: 1
max_epochs: 20
precision: 16-mixed # '16-mixed', 'bf16-mixed', '32-true', '64-true', '64', '32', '16', 'bf16'
enable_checkpointing: false
logger: false
log_every_n_steps: 50
check_val_every_n_epoch: -1
max_steps: -1
```
## Performance
Evaluation can be performed using the segmentation [evaluation](https://github.com/wdika/atommic/blob/main/tools/evaluation/segmentation.py) script for the segmentation task, with `--evaluation_type per_slice`.
Results
-------
Evaluation
----------
| DICE | F1 | HD95 | IOU |
|:----:|:--:|:----:|:---:|
| 0.9123 +/- 0.05847 | 0.6509 +/- 0.4487 | 6.618 +/- 1.793 | 0.5158 +/- 0.3499 |
## References
[1] [ATOMMIC](https://github.com/wdika/atommic)
[2] Desai AD, Schmidt AM, Rubin EB, et al. SKM-TEA: A Dataset for Accelerated MRI Reconstruction with Dense Image Labels for Quantitative Clinical Evaluation. 2022 |
velocity-engg/model2 | velocity-engg | 2024-03-06T10:29:35Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/llama-2-7b-bnb-4bit",
"base_model:finetune:unsloth/llama-2-7b-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T10:29:22Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
base_model: unsloth/llama-2-7b-bnb-4bit
---
# Uploaded model
- **Developed by:** velocity-engg
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-2-7b-bnb-4bit
This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
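A minimal loading sketch (this assumes the repo holds merged safetensors weights; if it instead contains LoRA adapters, load them with `peft`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Standard Transformers loading; device_map="auto" requires `accelerate`.
tokenizer = AutoTokenizer.from_pretrained("velocity-engg/model2")
model = AutoModelForCausalLM.from_pretrained("velocity-engg/model2", device_map="auto")
```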
|
AlignmentResearch/robust_llm_pythia-imdb-1b-mz-test-1gpu | AlignmentResearch | 2024-03-06T10:24:14Z | 6 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-1b-deduped",
"base_model:finetune:EleutherAI/pythia-1b-deduped",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T10:17:46Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-1b-deduped
model-index:
- name: robust_llm_pythia-imdb-1b-mz-test-1gpu
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_pythia-imdb-1b-mz-test-1gpu
This model is a fine-tuned version of [EleutherAI/pythia-1b-deduped](https://huggingface.co/EleutherAI/pythia-1b-deduped) on an unknown dataset.
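A minimal usage sketch (an assumption: the model appears to be an IMDB sentiment classifier given its name, but the card does not document the label mapping):
```python
from transformers import pipeline
# Hypothetical usage; verify the label names in the model's config.
clf = pipeline(
    "text-classification",
    model="AlignmentResearch/robust_llm_pythia-imdb-1b-mz-test-1gpu",
)
print(clf("A surprisingly heartfelt film with terrific performances."))
```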
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 8
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
souvikcmsa019/MixtralGDPR | souvikcmsa019 | 2024-03-06T10:21:28Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-31T08:16:00Z | Model for GDPR Compliance Checking
|
ChaimaMess/llama-2-7b-QLORA | ChaimaMess | 2024-03-06T10:21:06Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"trl",
"sft",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-02-27T14:00:29Z | ---
library_name: transformers
tags:
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
MoMonir/MiniChat-2-3B-GGUF | MoMonir | 2024-03-06T10:20:32Z | 5 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2024-01-28T17:48:26Z | ---
license: apache-2.0
---
Original Model: <a href="https://huggingface.co/GeneZC/MiniChat-2-3B">GeneZC/MiniChat-2-3B</a><br/>
GGUF FP16 version<br/>
Quantized version: Q8_0<br/>
Note: this is an experiment and has not been tested.
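A minimal sketch with `llama-cpp-python` (the filename below is a placeholder for whichever quant file you download from this repo):
```python
from llama_cpp import Llama
# Placeholder filename; also check the original MiniChat-2-3B card for the chat template.
llm = Llama(model_path="./MiniChat-2-3B.Q8_0.gguf")
print(llm("Hello, who are you?", max_tokens=64)["choices"][0]["text"])
```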
|
DMetaSoul/nl2sql-chinese-basic | DMetaSoul | 2024-03-06T10:19:11Z | 6 | 2 | transformers | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-06T08:20:11Z | ---
license: apache-2.0
---
## Introduction
This is a model that generates SQL from natural language (NL2SQL/Text2SQL). It is the most basic of the many NL2SQL models we have developed in-house; more advanced versions will be open-sourced later.
The model is based on the BART architecture: we cast NL2SQL as a machine-translation-style Seq2Seq problem. Its main advantages are a small parameter count combined with relatively high SQL generation accuracy.
## Usage
In the NL2SQL task, the input contains the user's query text plus database table information; the model's input text is currently concatenated in the following format:
```
Question: 名人堂一共有多少球员 <sep> Tables: hall_of_fame: player_id, yearid, votedby, ballots, needed, votes, inducted, category, needed_note ; player_award: player_id, award_id, year, league_id, tie, notes <sep>
```
See the following example for concrete usage:
```python
import torch
from transformers import AutoModelForSeq2SeqLM, MBartForConditionalGeneration, AutoTokenizer
device = 'cuda'
model_path = 'DMetaSoul/nl2sql-chinese-basic'
sampling = False
tokenizer = AutoTokenizer.from_pretrained(model_path, src_lang='zh_CN')
#model = MBartForConditionalGeneration.from_pretrained(model_path)
model = AutoModelForSeq2SeqLM.from_pretrained(model_path)
model = model.half()
model.to(device)
input_texts = [
"Question: 所有章节的名称和描述是什么? <sep> Tables: sections: section id , course id , section name , section description , other details <sep>",
"Question: 名人堂一共有多少球员 <sep> Tables: hall_of_fame: player_id, yearid, votedby, ballots, needed, votes, inducted, category, needed_note ; player_award: player_id, award_id, year, league_id, tie, notes ; player_award_vote: award_id, year, league_id, player_id, points_won, points_max, votes_first ; salary: year, team_id, league_id, player_id, salary ; player: player_id, birth_year, birth_month, birth_day, birth_country, birth_state, birth_city, death_year, death_month, death_day, death_country, death_state, death_city, name_first, name_last, name_given, weight <sep>"
]
inputs = tokenizer(input_texts, max_length=512, return_tensors="pt",
padding=True, truncation=True)
inputs = {k:v.to(device) for k,v in inputs.items() if k not in ["token_type_ids"]}
with torch.no_grad():
if sampling:
outputs = model.generate(**inputs, do_sample=True, top_k=50, top_p=0.95,
temperature=1.0, num_return_sequences=1,
max_length=512, return_dict_in_generate=True, output_scores=True)
else:
outputs = model.generate(**inputs, num_beams=4, num_return_sequences=1,
max_length=512, return_dict_in_generate=True, output_scores=True)
output_ids = outputs.sequences
results = tokenizer.batch_decode(output_ids, skip_special_tokens=True,
clean_up_tokenization_spaces=True)
for question, sql in zip(input_texts, results):
print(question)
print('SQL: {}'.format(sql))
print()
```
The output is as follows:
```
Question: 所有章节的名称和描述是什么? <sep> Tables: sections: section id , course id , section name , section description , other details <sep>
SQL: SELECT section name, section description FROM sections
Question: 名人堂一共有多少球员 <sep> Tables: hall_of_fame: player_id, yearid, votedby, ballots, needed, votes, inducted, category, needed_note ; player_award: player_id, award_id, year, league_id, tie, notes ; player_award_vote: award_id, year, league_id, player_id, points_won, points_max, votes_first ; salary: year, team_id, league_id, player_id, salary ; player: player_id, birth_year, birth_month, birth_day, birth_country, birth_state, birth_city, death_year, death_month, death_day, death_country, death_state, death_city, name_first, name_last, name_given, weight <sep>
SQL: SELECT count(*) FROM hall_of_fame
```
|
Mayank1999/bert-finetuned-ner | Mayank1999 | 2024-03-06T10:13:57Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"token-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-cased",
"base_model:finetune:google-bert/bert-base-cased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-06T10:03:51Z | ---
license: apache-2.0
base_model: bert-base-cased
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
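A minimal usage sketch (the entity label set is not documented in this card, so outputs should be checked against the model's config):
```python
from transformers import pipeline
ner = pipeline(
    "token-classification",
    model="Mayank1999/bert-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```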
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Leelakrish/my-pet-lion-xzg | Leelakrish | 2024-03-06T10:12:19Z | 0 | 0 | null | [
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-03-06T10:10:10Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Lion-XZG Dreambooth model trained by Leelakrish following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 21BRS1638
Sample pictures of this concept:

|
Hemg/Brain-Tumor-Classification | Hemg | 2024-03-06T10:11:06Z | 38 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"vit",
"image-classification",
"generated_from_trainer",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-03-06T05:51:46Z | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Brain-Tumor-Classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Brain-Tumor-Classification
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0872
- Accuracy: 0.9758
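A minimal usage sketch (the image path is a placeholder; class labels come from the undocumented training set):
```python
from transformers import pipeline
clf = pipeline("image-classification", model="Hemg/Brain-Tumor-Classification")
print(clf("path/to/mri_slice.png"))  # placeholder path to an MRI slice image
```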
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2074 | 1.0 | 44 | 0.8060 | 0.8128 |
| 0.4897 | 2.0 | 88 | 0.3008 | 0.9274 |
| 0.2462 | 3.0 | 132 | 0.2464 | 0.9331 |
| 0.1937 | 4.0 | 176 | 0.1918 | 0.9502 |
| 0.1523 | 5.0 | 220 | 0.1699 | 0.9502 |
| 0.1371 | 6.0 | 264 | 0.1372 | 0.9644 |
| 0.1104 | 7.0 | 308 | 0.1121 | 0.9708 |
| 0.1097 | 8.0 | 352 | 0.1220 | 0.9651 |
| 0.1015 | 9.0 | 396 | 0.1053 | 0.9737 |
| 0.0841 | 10.0 | 440 | 0.1142 | 0.9708 |
| 0.0839 | 11.0 | 484 | 0.1073 | 0.9708 |
| 0.0771 | 12.0 | 528 | 0.1156 | 0.9665 |
| 0.074 | 13.0 | 572 | 0.1203 | 0.9644 |
| 0.0652 | 14.0 | 616 | 0.0706 | 0.9858 |
| 0.0694 | 15.0 | 660 | 0.0984 | 0.9744 |
| 0.0596 | 16.0 | 704 | 0.0872 | 0.9758 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2
|
slukas99/tex_inv_af_dress | slukas99 | 2024-03-06T10:07:28Z | 10 | 0 | diffusers | [
"diffusers",
"tensorboard",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-06T08:47:39Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
base_model: runwayml/stable-diffusion-v1-5
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# Textual inversion text2image fine-tuning - slukas99/tex_inv_af_dress
These are textual inversion adaptation weights for runwayml/stable-diffusion-v1-5. You can find some example images below.




## Intended uses & limitations
#### How to use
```python
import torch
from diffusers import StableDiffusionPipeline
# Minimal sketch; the placeholder token below is an assumption --
# check the repo's learned embedding for the actual token name.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("slukas99/tex_inv_af_dress")
image = pipe("a photo of a dress in the style of <af-dress>").images[0]
image.save("af_dress.png")
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
Skhaled99/Mistral-7b-PDO-GHC-Merged | Skhaled99 | 2024-03-06T10:06:31Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-03-06T10:04:43Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
s14pe/Qlearning_Taxi_v3 | s14pe | 2024-03-06T10:02:42Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T09:50:54Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Qlearning_Taxi_v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="s14pe/Qlearning_Taxi_v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mii-llm/maestrale-chat-v0.3-beta-sft | mii-llm | 2024-03-06T10:00:53Z | 14 | 2 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"sft",
"it",
"chatml",
"axolotl",
"conversational",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-09T09:26:06Z | ---
language:
- it
license: cc-by-nc-4.0
tags:
- sft
- it
- mistral
- chatml
- axolotl
prompt_template: <|im_start|>system {system_message}<|im_end|> <|im_start|>user {prompt}<|im_end|>
<|im_start|>assistant
model-index:
- name: maestrale-chat-v0.3-beta
results: []
---
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/dgSNbTl.jpg" alt="Mii-LLM" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://buy.stripe.com/8wM00Sf3vb3H3pmfYY">Want to contribute? Please donate! This will let us work on better datasets and models!</a></p>
</div>
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Maestrale chat beta ༄
By @efederici and @mferraretto
## Model description
- **Language Model**: Mistral-7b for the Italian language, continued pre-training for Italian on a curated large-scale high-quality corpus.
- **Fine-Tuning**: SFT performed on conversations and instructions for three epochs.
**v0.3**
- Function calling
- Reduced default system prompt to avoid wasting tokens (pre-alignment)
This model uses ChatML prompt format:
```
<|im_start|>system
Sei un assistente utile.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
## Usage:
```python
from transformers import (
AutoTokenizer,
AutoModelForCausalLM,
GenerationConfig,
TextStreamer
)
import torch
tokenizer = AutoTokenizer.from_pretrained("mii-llm/maestrale-chat-v0.3-beta")
model = AutoModelForCausalLM.from_pretrained("mii-llm/maestrale-chat-v0.3-beta", load_in_8bit=True, device_map="auto")
gen = GenerationConfig(
do_sample=True,
temperature=0.7,
repetition_penalty=1.2,
top_k=50,
top_p=0.95,
max_new_tokens=500,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>")
)
messages = [
{"role": "system", "content": "Sei un assistente utile."},
{"role": "user", "content": "{prompt}"}
]
with torch.no_grad(), torch.backends.cuda.sdp_kernel(
enable_flash=True,
enable_math=False,
enable_mem_efficient=False
):
temp = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(temp, return_tensors="pt").to("cuda")
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(
**inputs,
streamer=streamer,
generation_config=gen
)
```
## Intended uses & limitations
This is a beta SFT version and it is not yet aligned; treat it as a first test. We are working on alignment data and evals.
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
nyunai/OpenHathi-7B-Hi-v0.1-Base-AWQ-samvaad-hi-v1-chat-format | nyunai | 2024-03-06T10:00:46Z | 8 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"awq",
"region:us"
] | text-generation | 2024-03-06T09:34:58Z | ---
library_name: transformers
tags: []
---
## Model Description
This model is a compressed version of the OpenHathi-7B-Hi base model, optimized for chat format text data in the Hindi language. It has been quantized using the AWQ technique with calibration data from the samvaad-hi-v1 dataset. The compression process aims to reduce the model size while preserving its performance on chat-oriented tasks.
## Model Usage:
The compressed model can be utilized for various natural language processing tasks, particularly those involving chat format text data in Hindi. It can be deployed in conversational AI systems, chatbots, or any application requiring efficient processing of chat-style interactions.
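A minimal loading sketch, assuming the AWQ weights load through Transformers' AWQ integration (requires the `autoawq` package; `device_map="auto"` requires `accelerate`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "nyunai/OpenHathi-7B-Hi-v0.1-Base-AWQ-samvaad-hi-v1-chat-format"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
prompt = "नमस्ते, आप कैसे हैं?"  # "Hello, how are you?" -- chat formatting is an assumption
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```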
## Performance Metrics:
- **Model Size:** 4.15 GB
- **Compression Technique:** AWQ
- **Calibration Data:** [samvaad-hi-v1 chat format](https://huggingface.co/datasets/shwubham/samvaad-hi-v1-chat-format) dataset
- **Tokenization Model Size:** 968 KB
- **Performance:** The compressed model's performance has been evaluated on various chat-oriented tasks, demonstrating efficiency in handling conversational text data while maintaining comparable performance to the original base model.
**Limitations:** While the compressed model offers significant reductions in size, there may be slight trade-offs in performance compared to the full-sized base model. It may not perform optimally on tasks outside the scope of chat-oriented text data in Hindi.
|
joshus/esg_base_pos_3 | joshus | 2024-03-06T09:57:24Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-06T09:57:07Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# joshus/esg_base_pos_3
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('joshus/esg_base_pos_3')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=joshus/esg_base_pos_3)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 5, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 108,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
OmarHaroon01/compressed_byt5_pretrained | OmarHaroon01 | 2024-03-06T09:53:56Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-06T09:53:47Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
s14pe/q-FrozenLake-v1-4x4-noSlippery | s14pe | 2024-03-06T09:47:04Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T09:47:02Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="s14pe/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jelldps/malaysian-mistral-7b-32k-instructions-v4-gguf | jelldps | 2024-03-06T09:41:56Z | 6 | 3 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation",
"conversational",
"ms",
"base_model:mesolitica/malaysian-mistral-7b-32k-instructions-v3.5",
"base_model:quantized:mesolitica/malaysian-mistral-7b-32k-instructions-v3.5",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-02-27T10:32:08Z | ---
base_model: mesolitica/malaysian-mistral-7b-32k-instructions-v3.5
language:
- ms
---
# malaysian-mistral-7b-32k-instructions-v4 - GGUF
- Model creator: [Mesolitica](https://huggingface.co/mesolitica)
- Original model: [malaysian-mistral-7b-32k-instructions-v4](https://huggingface.co/mesolitica/malaysian-mistral-7b-32k-instructions-v4)
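A minimal sketch with `llama-cpp-python` (the filename is a placeholder; pick a quant file from this repo):
```python
from llama_cpp import Llama
# Placeholder filename; see the original model card for the prompt format.
llm = Llama(model_path="./malaysian-mistral-7b-32k-instructions-v4.Q4_K_M.gguf")
print(llm("Apa khabar?", max_tokens=64)["choices"][0]["text"])
```
|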
vidhi0206/setfit-paraphrase-mpnet-emotion | vidhi0206 | 2024-03-06T09:41:22Z | 4 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | text-classification | 2024-02-28T12:34:57Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
metrics:
- accuracy
widget:
- text: i honestly thought impossible at this point i feel pretty
- text: i feel convinced that im going to shy away from whatever is really good for
me
- text: i feel guilt that i should be more caring and im not
- text: i found myself feeling nostalgic as i thought about the temporarily abandoned
little bishop chronicles
- text: i am feeling very indecisive and spontaneous
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: Unknown
type: unknown
split: test
metrics:
- type: accuracy
value: 0.5225
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
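For illustration, a few-shot training run with the `setfit` trainer might look like the sketch below; the two-example dataset is made up, and the hyperparameters mirror the ones listed under Training Hyperparameters:

```python
from datasets import Dataset
from setfit import SetFitModel, Trainer, TrainingArguments

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

# Toy few-shot dataset; the real model was trained on 8 examples per class.
train_dataset = Dataset.from_dict({
    "text": ["i feel so much better about that number", "i feel so violent"],
    "label": [1, 3],
})

args = TrainingArguments(batch_size=8, num_epochs=1, num_iterations=20)
trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
trainer.train()
```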
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 6 classes
<!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'i feel so much better about that number'</li><li>'i feel like i have reached a plateau where im not buying as much as i use to and feeling more satisfied with my wardrobe and personal style'</li><li>'i feel especially thankful'</li></ul> |
| 3 | <ul><li>'i feel so violent just want to break some glass'</li><li>'i always feel rushed on the way to visit no comments'</li><li>'i think maybe about how strongly she feels about him and being there for him but brad looks really distracted'</li></ul> |
| 5 | <ul><li>'i feel like when i was a kid it was constantly impressed upon me how awesome ants are'</li><li>'i feel like it s a boy i would be pretty shocked if it was so somewhere in there my gut or my brain is saying girl'</li><li>'i feel like every day i walk around with so much stress and sadness that im literally amazed im still here that i still function that im still basically a friendly stable person'</li></ul> |
| 0 | <ul><li>'i would feel that a few words would be not only inadequate but a travesty'</li><li>'i attributed this depression to feeling inadequate against the unrealistic ideals of the lds church and while i still hold those ideals somewhat responsible i recognize this pattern of behavior'</li><li>'ive been resting and feeling generally unpleasant and queasy but in that frustrating background way where you dont feel right but cant place an exact cause'</li></ul> |
| 4 | <ul><li>'i was starting to feel scared for both of their safety and i wish those officers hadn t left no matter how much i hated them'</li><li>'i am already feeling frantic'</li><li>'i believe in you moment we all feel til then it s one more skeptical song'</li></ul> |
| 2 | <ul><li>'i do feel sympathetic to the parties involved now that their careers are down the drain'</li><li>'i like frappes and shit when im feeling naughty but i drink tea daily'</li><li>'i will pay a month for months and feel shame every time i grill a hot dog from that point on'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.5225 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("vidhi0206/setfit-paraphrase-mpnet-emotion")
# Run inference
preds = model("i am feeling very indecisive and spontaneous")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 4 | 19.3333 | 48 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 8 |
| 1 | 8 |
| 2 | 8 |
| 3 | 8 |
| 4 | 8 |
| 5 | 8 |
### Training Hyperparameters
- batch_size: (8, 8)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:------:|:----:|:-------------:|:---------------:|
| 0.0042 | 1 | 0.3009 | - |
| 0.2083 | 50 | 0.1916 | - |
| 0.4167 | 100 | 0.0393 | - |
| 0.625 | 150 | 0.0129 | - |
| 0.8333 | 200 | 0.0034 | - |
### Framework Versions
- Python: 3.8.10
- SetFit: 1.0.3
- Sentence Transformers: 2.3.1
- Transformers: 4.37.2
- PyTorch: 2.2.0+cu121
- Datasets: 2.17.0
- Tokenizers: 0.15.1
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
aparna-01/my-pet-cat-sdf | aparna-01 | 2024-03-06T09:32:54Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-06T09:28:45Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### My-Pet-Cat-SDF Dreambooth model trained by aparna-01 following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: 23/CSE/111
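A minimal generation sketch for this concept (the instance prompt is a guess from the concept name; use whatever prompt the concept was actually trained with):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "aparna-01/my-pet-cat-sdf", torch_dtype=torch.float16
).to("cuda")

# "my-pet-cat-sdf" as the instance token is assumed from the concept name.
image = pipe("a photo of my-pet-cat-sdf cat").images[0]
image.save("my-pet-cat.png")
```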
Sample pictures of this concept:

|
AlignmentResearch/robust_llm_z5ph5m7h_from_EleutherAI_pythia-14m | AlignmentResearch | 2024-03-06T09:24:30Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:finetune:EleutherAI/pythia-14m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T09:24:23Z | ---
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-14m
model-index:
- name: robust_llm_z5ph5m7h_from_EleutherAI_pythia-14m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_z5ph5m7h_from_EleutherAI_pythia-14m
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
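Since the card gives no usage snippet, a generic loading sketch for this sequence-classification checkpoint might look as follows (the example sentence is illustrative, and the card does not document what the class indices mean):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo = "AlignmentResearch/robust_llm_z5ph5m7h_from_EleutherAI_pythia-14m"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("An example sentence to classify.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted class index
```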
|
s14pe/ppo-LunarLander-v2 | s14pe | 2024-03-06T09:23:50Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-05T14:14:49Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 279.00 +/- 15.83
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption based on the usual SB3 Hub naming convention; check the repo's files if it differs):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub(repo_id="s14pe/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
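A follow-up evaluation sketch (import `gym` instead of `gymnasium` on older SB3 versions):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

eval_env = gym.make("LunarLander-v2")
# 10 episodes with a deterministic policy, matching how mean_reward is usually reported.
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```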
|
AlignmentResearch/robust_llm_m857mz1i_from_EleutherAI_pythia-14m | AlignmentResearch | 2024-03-06T09:23:12Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:finetune:EleutherAI/pythia-14m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T09:23:05Z | ---
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-14m
model-index:
- name: robust_llm_m857mz1i_from_EleutherAI_pythia-14m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_m857mz1i_from_EleutherAI_pythia-14m
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
AlignmentResearch/robust_llm_w9a5ielg_from_EleutherAI_pythia-14m | AlignmentResearch | 2024-03-06T09:22:05Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"gpt_neox",
"text-classification",
"generated_from_trainer",
"base_model:EleutherAI/pythia-14m",
"base_model:finetune:EleutherAI/pythia-14m",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T09:21:58Z | ---
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-14m
model-index:
- name: robust_llm_w9a5ielg_from_EleutherAI_pythia-14m
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robust_llm_w9a5ielg_from_EleutherAI_pythia-14m
This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
|
alfredplpl/gemma-2b-it-ja-poc-2 | alfredplpl | 2024-03-06T09:21:13Z | 2 | 2 | peft | [
"peft",
"safetensors",
"ja",
"en",
"license:other",
"region:us"
] | null | 2024-03-05T12:17:24Z | ---
language:
- ja
- en
license: other
library_name: peft
license_name: gemma-terms-of-use
license_link: https://www.kaggle.com/models/google/gemma/license/consent
---
# Introduction
A commercially usable AI that can chat in Japanese (more or less).
[Google Colab](https://colab.research.google.com/drive/1AZ3oW1RJ8JDi4DGh3_z__aAd1lUVlswi?usp=sharing)
# Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
from peft import PeftModel
# Prepare the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("alfredplpl/ja-aozora-wikipedia-gemmba-2b")
model = AutoModelForCausalLM.from_pretrained("alfredplpl/ja-aozora-wikipedia-gemmba-2b")
model = PeftModel.from_pretrained(model = model, model_id = "alfredplpl/gemma-2b-it-ja-poc-2")
# Prepare the prompt (a Japanese system prompt followed by a user turn)
prompt="""
あなたは親切なアシスタントです。英語は喋らず、日本語だけ喋ってください。
<start_of_turn>user
人生で大切なことはなんですか?<end_of_turn>
<start_of_turn>model
"""
# Run inference
input_ids = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
**input_ids,
max_new_tokens=128,
do_sample=True,
top_p=0.95,
temperature=0.2,
repetition_penalty=1.1,
)
print(tokenizer.decode(outputs[0]))
```
## Result
```bash
<bos>
あなたは親切なアシスタントです。英語は喋らず、日本語だけ喋ってください。
<start_of_turn>user
人生で大切なことはなんですか?<end_of_turn>
<start_of_turn>model
人生で大切なのは、幸せになることです。<end_of_turn>
<eos>
```
# Chat Templete
```bash
<bos>
{{system prompt}}
<start_of_turn>user
{{prompt}}<end_of_turn>
<start_of_turn>model
{{response}}<end_of_turn>
<eos>
```
# Base model
- free-ai-ltd/ja-aozora-wikipedia-gemmba-2b (private)
# Dataset for Instruction tuning
- llm-jp/databricks-dolly-15k-ja
- llm-jp/oasst1-21k-ja
- kunishou/oasst1-chat-44k-ja
- kunishou/oasst2-chat-68k-ja
- kunishou/cnn-dailymail-27k-ja
- kunishou/databricks-dolly-69k-ja-en-translation
- kunishou/databricks-dolly-15k-ja
- shi3z/OpenOrcaJapanese
# How to make this model
- [LoRA](https://gist.github.com/alfredplpl/e20cad036c151f38645a1abc87f56a2f) |
Bajiyo/Transliteration_from_malayalam_to_english | Bajiyo | 2024-03-06T09:17:20Z | 3 | 0 | tf-keras | [
"tf-keras",
"license:other",
"region:us"
] | null | 2024-03-06T09:15:23Z | ---
license: other
license_name: other
license_link: LICENSE
---
|
DhairyaSarin/promotional-text-analyser-v2 | DhairyaSarin | 2024-03-06T09:11:17Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:mistralai/Mistral-7B-Instruct-v0.1",
"base_model:adapter:mistralai/Mistral-7B-Instruct-v0.1",
"region:us"
] | null | 2024-03-06T09:10:46Z | ---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.1
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.0 |
alibaba-pai/pai-bloom-1b1-text2prompt-sd | alibaba-pai | 2024-03-06T09:07:42Z | 124 | 35 | transformers | [
"transformers",
"pytorch",
"safetensors",
"bloom",
"text-generation",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-08T08:55:33Z | ---
license: apache-2.0
widget:
- text: "Instruction: Give a simple description of the image to generate a drawing prompt.\nInput: 1 girl\nOutput:"
tags:
- pytorch
- transformers
- text-generation
---
# BeautifulPrompt
## 简介 Brief Introduction
我们开源了一个自动Prompt生成模型,您可以直接输入一个极其简单的Prompt,就可以得到经过语言模型优化过的Prompt,帮助您更简单地生成高颜值图像。
We release an automatic prompt generation model: enter an extremely simple prompt and get back a version optimized by the language model, helping you generate beautiful images with ease.
* Github: [EasyNLP](https://github.com/alibaba/EasyNLP)
## 使用 Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained('alibaba-pai/pai-bloom-1b1-text2prompt-sd')
model = AutoModelForCausalLM.from_pretrained('alibaba-pai/pai-bloom-1b1-text2prompt-sd').eval().cuda()
raw_prompt = '1 girl'
input = f'Instruction: Give a simple description of the image to generate a drawing prompt.\nInput: {raw_prompt}\nOutput:'
input_ids = tokenizer.encode(input, return_tensors='pt').cuda()
outputs = model.generate(
input_ids,
max_length=384,
do_sample=True,
temperature=1.0,
top_k=50,
top_p=0.95,
repetition_penalty=1.2,
num_return_sequences=5)
prompts = tokenizer.batch_decode(outputs[:, input_ids.size(1):], skip_special_tokens=True)
prompts = [p.strip() for p in prompts]
print(prompts)
```
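To render an image from one of the optimized prompts, a hedged downstream sketch with `diffusers` (the Stable Diffusion checkpoint named here is only an example):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Reuse the first optimized prompt produced above.
image = pipe(prompts[0]).images[0]
image.save("beautiful_prompt_sample.png")
```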
## 作品展示 Gallery
<style>
table th:first-of-type {
width: 50%;
}
table th:nth-of-type(2) {
width: 50%;
}
</style>
| Original | BeautifulPrompt |
| ---------------------------------------- | ---------------------------------- |
| prompt: taylor swift, country, golden, fearless,wavehair | prompt: portrait of taylor swift as a beautiful woman, long hair, country, golden ratio, intricate, symmetrical, cinematic lighting, highly detailed, digital painting, artstation, concept art, smooth, sharp focus, illustration |
|  |  |
| Original | BeautifulPrompt |
| ---------------------------------------- | ---------------------------------- |
| prompt: A majestic sailing ship | prompt: a massive sailing ship, epic, cinematic, artstation, greg rutkowski, james gurney, sparth |
|  |  |
## 使用须知 Notice for Use
使用上述模型需遵守[AIGC模型开源特别条款](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html)。
If you want to use this model, please read this [document](https://terms.alicdn.com/legal-agreement/terms/common_platform_service/20230505180457947/20230505180457947.html) carefully and abide by the terms.
## Paper Citation
If you find the model useful, please consider citing the paper:
```
@inproceedings{emnlp2023a,
author = {Tingfeng Cao and
Chengyu Wang and
Bingyan Liu and
Ziheng Wu and
Jinhui Zhu and
Jun Huang},
title = {BeautifulPrompt: Towards Automatic Prompt Engineering for Text-to-Image Synthesis},
booktitle = {Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track},
pages = {1--11},
year = {2023}
}
```
|
zxhezexin/openlrm-mix-small-1.1 | zxhezexin | 2024-03-06T08:56:32Z | 31 | 1 | transformers | [
"transformers",
"pytorch",
"safetensors",
"image-to-3d",
"dataset:allenai/objaverse",
"arxiv:2311.04400",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | image-to-3d | 2024-03-04T07:05:06Z | ---
license: cc-by-nc-4.0
datasets:
- allenai/objaverse
pipeline_tag: image-to-3d
---
# Model Card for OpenLRM V1.1
## Overview
- This model card is for the [OpenLRM](https://github.com/3DTopia/OpenLRM) project, which is an open-source implementation of the paper [LRM](https://arxiv.org/abs/2311.04400).
- Information contained in this model card corresponds to [Version 1.1](https://github.com/3DTopia/OpenLRM/releases).
## Model Details
- Training data
| Model | Training Data |
| :---: | :---: |
| [openlrm-obj-small-1.1](https://huggingface.co/zxhezexin/openlrm-obj-small-1.1) | Objaverse |
| [openlrm-obj-base-1.1](https://huggingface.co/zxhezexin/openlrm-obj-base-1.1) | Objaverse |
| [openlrm-obj-large-1.1](https://huggingface.co/zxhezexin/openlrm-obj-large-1.1) | Objaverse |
| [openlrm-mix-small-1.1](https://huggingface.co/zxhezexin/openlrm-mix-small-1.1) | Objaverse + MVImgNet |
| [openlrm-mix-base-1.1](https://huggingface.co/zxhezexin/openlrm-mix-base-1.1) | Objaverse + MVImgNet |
| [openlrm-mix-large-1.1](https://huggingface.co/zxhezexin/openlrm-mix-large-1.1) | Objaverse + MVImgNet |
- Model architecture (version==1.1)
| Type | Layers | Feat. Dim | Attn. Heads | Triplane Dim. | Input Res. | Image Encoder | Size |
| :---: | :----: | :-------: | :---------: | :-----------: | :--------: | :---------------: | :---: |
| small | 12 | 512 | 8 | 32 | 224 | dinov2_vits14_reg | 446M |
| base | 12 | 768 | 12 | 48 | 336 | dinov2_vitb14_reg | 1.04G |
| large | 16 | 1024 | 16 | 80 | 448 | dinov2_vitb14_reg | 1.81G |
- Training settings
| Type | Rend. Res. | Rend. Patch | Ray Samples |
| :---: | :--------: | :---------: | :---------: |
| small | 192 | 64 | 96 |
| base | 288 | 96 | 96 |
| large | 384 | 128 | 128 |
## Notable Differences from the Original Paper
- We do not use the deferred back-propagation technique described in the original paper.
- We use random background colors during training.
- The image encoder is based on the [DINOv2](https://github.com/facebookresearch/dinov2) model with register tokens.
- The triplane decoder contains 4 layers in our implementation.
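For reference, the encoder names in the architecture table are the standard DINOv2 checkpoints with register tokens; they can be loaded on their own via `torch.hub` (a sketch that is independent of the OpenLRM code path):

```python
import torch

# "dinov2_vits14_reg" matches the image encoder listed for the small variant.
encoder = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14_reg")
encoder.eval()
```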
## License
- The model weights are released under the [Creative Commons Attribution-NonCommercial 4.0 International License](LICENSE_WEIGHT).
- They are provided for research purposes only, and CANNOT be used commercially.
## Disclaimer
This model is an open-source implementation and is NOT the official release of the original research paper. While it aims to reproduce the original results as faithfully as possible, there may be variations due to model implementation, training data, and other factors.
### Ethical Considerations
- This model should be used responsibly and ethically, and should not be used for malicious purposes.
- Users should be aware of potential biases in the training data.
- The model should not be used under the circumstances that could lead to harm or unfair treatment of individuals or groups.
### Usage Considerations
- The model is provided "as is" without warranty of any kind.
- Users are responsible for ensuring that their use complies with all relevant laws and regulations.
- The developers and contributors of this model are not liable for any damages or losses arising from the use of this model.
---
*This model card is subject to updates and modifications. Users are advised to check for the latest version regularly.*
|
teknow/gemmaWithQuotes | teknow | 2024-03-06T08:56:22Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T08:38:14Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
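In the absence of an official snippet, a generic causal-LM loading sketch for this repo (the prompt is illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("teknow/gemmaWithQuotes")
model = AutoModelForCausalLM.from_pretrained("teknow/gemmaWithQuotes")

inputs = tokenizer("Share a short quote about perseverance.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```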
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
aryachakraborty/DeepSeek-1.3B-IT-NL-SQL-V2 | aryachakraborty | 2024-03-06T08:49:27Z | 13 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T08:47:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ANWAR101/lora-bart-base-youtube-cnn | ANWAR101 | 2024-03-06T08:48:50Z | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T08:47:41Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
VinitRuparelia/mountain | VinitRuparelia | 2024-03-06T08:47:03Z | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"NxtWave-GenAI-Webinar",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2024-03-06T08:40:23Z | ---
license: creativeml-openrail-m
tags:
- NxtWave-GenAI-Webinar
- text-to-image
- stable-diffusion
---
### Mountain Dreambooth model trained by VinitRuparelia following the "Build your own Gen AI model" session by NxtWave.
Project Submission Code: RGIT_669
Sample pictures of this concept:


|
Kudod/bloom-560m_model_colab | Kudod | 2024-03-06T08:43:47Z | 5 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"bloom",
"text-generation",
"generated_from_trainer",
"base_model:bigscience/bloom-560m",
"base_model:finetune:bigscience/bloom-560m",
"license:bigscience-bloom-rail-1.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T08:31:53Z | ---
license: bigscience-bloom-rail-1.0
base_model: bigscience/bloom-560m
tags:
- generated_from_trainer
model-index:
- name: bloom-560m_model_colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom-560m_model_colab
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0976
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 102 | 1.4784 |
| No log | 2.0 | 204 | 1.5105 |
| No log | 3.0 | 306 | 0.7721 |
| No log | 4.0 | 408 | 0.4614 |
| 1.1878 | 5.0 | 510 | 0.2513 |
| 1.1878 | 6.0 | 612 | 0.0976 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
minhah/videomae-base-finetuned-ucf101-subset-finetuned-elder-UFC-prtuned | minhah | 2024-03-06T08:43:17Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"videomae",
"video-classification",
"generated_from_trainer",
"base_model:minhah/videomae-base-finetuned-ucf101-subset",
"base_model:finetune:minhah/videomae-base-finetuned-ucf101-subset",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2024-03-06T07:10:58Z | ---
license: cc-by-nc-4.0
base_model: minhah/videomae-base-finetuned-ucf101-subset
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset-finetuned-elder-UFC-prtuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset-finetuned-elder-UFC-prtuned
This model is a fine-tuned version of [minhah/videomae-base-finetuned-ucf101-subset](https://huggingface.co/minhah/videomae-base-finetuned-ucf101-subset) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6593
- Accuracy: 0.3481
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 576
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.729 | 0.13 | 73 | 1.6346 | 0.3408 |
| 1.683 | 1.13 | 146 | 1.6505 | 0.3029 |
| 1.6889 | 2.13 | 219 | 1.6359 | 0.3408 |
| 1.6853 | 3.13 | 292 | 1.6739 | 0.2398 |
| 1.5793 | 4.13 | 365 | 1.6679 | 0.2588 |
| 1.5783 | 5.13 | 438 | 1.6091 | 0.3324 |
| 1.5745 | 6.13 | 511 | 1.6306 | 0.3072 |
| 1.5704 | 7.11 | 576 | 1.6573 | 0.2707 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Litzy619/V0305P2 | Litzy619 | 2024-03-06T08:39:00Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:yahma/llama-7b-hf",
"base_model:finetune:yahma/llama-7b-hf",
"license:other",
"region:us"
] | null | 2024-03-06T02:27:50Z | ---
license: other
base_model: yahma/llama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: V0305P2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0305P2
This model is a fine-tuned version of [yahma/llama-7b-hf](https://huggingface.co/yahma/llama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0602
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.3061 | 0.09 | 10 | 0.1617 |
| 0.1712 | 0.17 | 20 | 0.1558 |
| 0.1564 | 0.26 | 30 | 0.1535 |
| 0.1526 | 0.34 | 40 | 0.1479 |
| 0.1503 | 0.43 | 50 | 0.1506 |
| 0.1563 | 0.51 | 60 | 0.1505 |
| 0.1517 | 0.6 | 70 | 0.1507 |
| 0.1533 | 0.68 | 80 | 0.1489 |
| 0.1491 | 0.77 | 90 | 0.1488 |
| 0.1523 | 0.85 | 100 | 0.1471 |
| 0.1522 | 0.94 | 110 | 0.1433 |
| 0.1381 | 1.02 | 120 | 0.1229 |
| 0.1303 | 1.11 | 130 | 0.1206 |
| 0.1155 | 1.19 | 140 | 0.1018 |
| 0.1095 | 1.28 | 150 | 0.0933 |
| 0.103 | 1.37 | 160 | 0.0906 |
| 0.1007 | 1.45 | 170 | 0.0904 |
| 0.0895 | 1.54 | 180 | 0.0887 |
| 0.0914 | 1.62 | 190 | 0.0840 |
| 0.0943 | 1.71 | 200 | 0.0808 |
| 0.0938 | 1.79 | 210 | 0.0757 |
| 0.0884 | 1.88 | 220 | 0.0666 |
| 0.0862 | 1.96 | 230 | 0.0733 |
| 0.0709 | 2.05 | 240 | 0.0748 |
| 0.0601 | 2.13 | 250 | 0.0730 |
| 0.0593 | 2.22 | 260 | 0.0632 |
| 0.059 | 2.3 | 270 | 0.0757 |
| 0.06 | 2.39 | 280 | 0.0620 |
| 0.0647 | 2.47 | 290 | 0.0605 |
| 0.0619 | 2.56 | 300 | 0.0624 |
| 0.0651 | 2.65 | 310 | 0.0605 |
| 0.0578 | 2.73 | 320 | 0.0597 |
| 0.0585 | 2.82 | 330 | 0.0598 |
| 0.0575 | 2.9 | 340 | 0.0601 |
| 0.0566 | 2.99 | 350 | 0.0602 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
AnonymousSub/FPDM_bertlarge_model | AnonymousSub | 2024-03-06T08:32:26Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"feature-extraction",
"arxiv:1910.09700",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2024-03-06T08:30:58Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
SenswiseData/berturk_cased_profanity | SenswiseData | 2024-03-06T08:22:01Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:dbmdz/bert-base-turkish-cased",
"base_model:finetune:dbmdz/bert-base-turkish-cased",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T08:21:29Z | ---
license: mit
base_model: dbmdz/bert-base-turkish-cased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1353
- Accuracy: 0.9635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 338 | 0.1606 | 0.9502 |
| 0.3717 | 2.0 | 676 | 0.1353 | 0.9635 |
### Framework versions
- Transformers 4.35.0
- Pytorch 2.1.0+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
VRSneha/kamal_camembert_dummy | VRSneha | 2024-03-06T08:13:06Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-03-06T08:12:36Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
obudzecie/distilbert-base-uncased-finetuned-cola | obudzecie | 2024-03-06T07:58:21Z | 4 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-27T13:04:09Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4341
- Matthews Correlation: 0.4600
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.881638457643646e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 37
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.45 | 1.0 | 1069 | 0.9061 | 0.2926 |
| 0.3901 | 2.0 | 2138 | 0.7333 | 0.3877 |
| 0.2976 | 3.0 | 3207 | 0.8140 | 0.3997 |
| 0.2158 | 4.0 | 4276 | 1.1014 | 0.4422 |
| 0.0857 | 5.0 | 5345 | 1.4341 | 0.4600 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
Dangurangu/my-awesome-setfit-model | Dangurangu | 2024-03-06T07:54:55Z | 6 | 0 | setfit | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"dataset:SetFit/SentEval-CR",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | text-classification | 2024-03-06T07:54:02Z | ---
library_name: setfit
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
datasets:
- SetFit/SentEval-CR
metrics:
- accuracy
widget:
- text: you can take pic of your friends and the picture will pop up when they call
.
- text: the speakerphone , the radio , all features work perfectly .
- text: 'a ) the picture quality ( color and sharpness of focusing ) are so great
, it completely eliminated my doubt about digital imaging -- - how could one eat
rice one grain at a time : - ) )'
- text: so far the dvd works so i hope it does n 't break down like the reviews i
've read .
- text: i have a couple hundred contacts and the menu loads within a few seconds ,
no big deal .
pipeline_tag: text-classification
inference: true
base_model: sentence-transformers/paraphrase-mpnet-base-v2
model-index:
- name: SetFit with sentence-transformers/paraphrase-mpnet-base-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: SetFit/SentEval-CR
type: SetFit/SentEval-CR
split: test
metrics:
- type: accuracy
value: 0.8804780876494024
name: Accuracy
---
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model trained on the [SetFit/SentEval-CR](https://huggingface.co/datasets/SetFit/SentEval-CR) dataset that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves (see the training sketch after this list):
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
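Below is a minimal training sketch of this two-step procedure using the SetFit 1.0 API; the 16-example shot count is illustrative, while `batch_size`, `num_epochs`, and `num_iterations` mirror the hyperparameters listed later in this card:

```python
from datasets import load_dataset
from setfit import SetFitModel, Trainer, TrainingArguments

# Few-shot setup: sample a small number of labeled examples
dataset = load_dataset("SetFit/SentEval-CR")
train_dataset = dataset["train"].shuffle(seed=42).select(range(16))

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
args = TrainingArguments(batch_size=16, num_epochs=1, num_iterations=20)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=dataset["test"],
)
trainer.train()            # step 1: contrastive fine-tuning; step 2: head fitting
print(trainer.evaluate())  # e.g. {'accuracy': ...}
```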
## Model Details
### Model Description
- **Model Type:** SetFit
- **Sentence Transformer body:** [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2)
- **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance
- **Maximum Sequence Length:** 512 tokens
- **Number of Classes:** 2 classes
- **Training Dataset:** [SetFit/SentEval-CR](https://huggingface.co/datasets/SetFit/SentEval-CR)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit)
- **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055)
- **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit)
### Model Labels
| Label | Examples |
|:------|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1 | <ul><li>'* slick-looking design and improved interface'</li><li>'as for bluetooth , no problems at all .'</li><li>'2 ) storage capacity'</li></ul> |
| 0 | <ul><li>"the day finally arrived when i was sure i 'd leave sprint ."</li><li>"neither message was answered ( they ask for 24 hours before replying - i 've been waiting 27 days . )"</li><li>'only problem is that is a bit heavy .'</li></ul> |
## Evaluation
### Metrics
| Label | Accuracy |
|:--------|:---------|
| **all** | 0.8805 |
## Uses
### Direct Use for Inference
First install the SetFit library:
```bash
pip install setfit
```
Then you can load this model and run inference.
```python
from setfit import SetFitModel
# Download from the 🤗 Hub
model = SetFitModel.from_pretrained("dangurangu/my-awesome-setfit-model")
# Run inference
preds = model("the speakerphone , the radio , all features work perfectly .")
```
<!--
### Downstream Use
*List how someone could finetune this model on their own dataset.*
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Set Metrics
| Training set | Min | Median | Max |
|:-------------|:----|:--------|:----|
| Word count | 4 | 18.0625 | 44 |
| Label | Training Sample Count |
|:------|:----------------------|
| 0 | 7 |
| 1 | 9 |
### Training Hyperparameters
- batch_size: (16, 16)
- num_epochs: (1, 1)
- max_steps: -1
- sampling_strategy: oversampling
- num_iterations: 20
- body_learning_rate: (2e-05, 2e-05)
- head_learning_rate: 2e-05
- loss: CosineSimilarityLoss
- distance_metric: cosine_distance
- margin: 0.25
- end_to_end: False
- use_amp: False
- warmup_proportion: 0.1
- seed: 42
- eval_max_steps: -1
- load_best_model_at_end: False
### Training Results
| Epoch | Step | Training Loss | Validation Loss |
|:-----:|:----:|:-------------:|:---------------:|
| 0.025 | 1 | 0.2205 | - |
### Framework Versions
- Python: 3.10.12
- SetFit: 1.0.3
- Sentence Transformers: 2.5.1
- Transformers: 4.38.1
- PyTorch: 2.1.0+cu121
- Datasets: 2.18.0
- Tokenizers: 0.15.2
## Citation
### BibTeX
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
--> |
ise-uiuc/Magicoder-DS-6.7B | ise-uiuc | 2024-03-06T07:40:45Z | 203 | 38 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"arxiv:2312.02120",
"arxiv:2305.06161",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-03T19:29:41Z | ---
license: other
library_name: transformers
datasets:
- ise-uiuc/Magicoder-OSS-Instruct-75K
license_name: deepseek
pipeline_tag: text-generation
---
# 🎩 Magicoder: Source Code Is All You Need
> Refer to our GitHub repo [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/) for an up-to-date introduction to the Magicoder family!
* 🎩**Magicoder** is a model family empowered by 🪄**OSS-Instruct**, a novel approach to enlightening LLMs with open-source code snippets for generating *low-bias* and *high-quality* instruction data for code.
* 🪄**OSS-Instruct** mitigates the *inherent bias* of the LLM-synthesized instruction data by empowering them with *a wealth of open-source references* to produce more diverse, realistic, and controllable data.


## Model Details
### Model Description
* **Developed by:**
[Yuxiang Wei](https://yuxiang.cs.illinois.edu),
[Zhe Wang](https://github.com/zhewang2001),
[Jiawei Liu](https://jiawei-site.github.io),
[Yifeng Ding](https://yifeng-ding.com),
[Lingming Zhang](https://lingming.cs.illinois.edu)
* **License:** [DeepSeek](https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL)
* **Finetuned from model:** [deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base)
### Model Sources
* **Repository:** <https://github.com/ise-uiuc/magicoder>
* **Paper:** <https://arxiv.org/abs/2312.02120>
* **Demo (powered by [Gradio](https://www.gradio.app)):**
<https://github.com/ise-uiuc/magicoder/tree/main/demo>
### Training Data
* [Magicoder-OSS-Instruct-75K](https://huggingface.co/datasets/ise-uiuc/Magicoder_oss_instruct_75k): generated through **OSS-Instruct** using `gpt-3.5-turbo-1106` and used to train both Magicoder and Magicoder-S series.
## Uses
### Direct Use
Magicoders are designed and best suited for **coding tasks**.
### Out-of-Scope Use
Magicoders may not work well in non-coding tasks.
## Bias, Risks, and Limitations
Magicoders may sometimes make errors, produce misleading content, or struggle with tasks that are not related to coding.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
Use the code below to get started with the model. Make sure you installed the [transformers](https://huggingface.co/docs/transformers/index) library.
```python
from transformers import pipeline
import torch
MAGICODER_PROMPT = """You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.
@@ Instruction
{instruction}
@@ Response
"""
instruction = "Write a Python function that checks whether a number is prime."  # example instruction; replace with your own
prompt = MAGICODER_PROMPT.format(instruction=instruction)
generator = pipeline(
model="ise-uiuc/Magicoder-DS-6.7B",
task="text-generation",
torch_dtype=torch.bfloat16,
device_map="auto",
)
result = generator(prompt, max_length=1024, num_return_sequences=1, temperature=0.0)
print(result[0]["generated_text"])
```
## Technical Details
Refer to our GitHub repo: [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/).
## 📝 Citation
```bibtex
@misc{magicoder,
title={Magicoder: Source Code Is All You Need},
author={Yuxiang Wei and Zhe Wang and Jiawei Liu and Yifeng Ding and Lingming Zhang},
year={2023},
eprint={2312.02120},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## 🙏 Acknowledgements
* [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder): Evol-Instruct
* [DeepSeek-Coder](https://github.com/deepseek-ai/DeepSeek-Coder): Base model for Magicoder-DS
* [CodeLlama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/): Base model for Magicoder-CL
* [StarCoder](https://arxiv.org/abs/2305.06161): Data decontamination
## Important Note
Magicoder models are trained on the synthetic data generated by OpenAI models. Please pay attention to OpenAI's [terms of use](https://openai.com/policies/terms-of-use) when using the models and the datasets. Magicoders will not compete with OpenAI's commercial products.
|
ise-uiuc/Magicoder-S-DS-6.7B | ise-uiuc | 2024-03-06T07:40:23Z | 843 | 201 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:ise-uiuc/Magicoder-OSS-Instruct-75K",
"dataset:ise-uiuc/Magicoder-Evol-Instruct-110K",
"arxiv:2312.02120",
"arxiv:2305.06161",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-12-03T19:37:23Z | ---
license: other
library_name: transformers
datasets:
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
license_name: deepseek
pipeline_tag: text-generation
---
# 🎩 Magicoder: Source Code Is All You Need
> Refer to our GitHub repo [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/) for an up-to-date introduction to the Magicoder family!
* 🎩**Magicoder** is a model family empowered by 🪄**OSS-Instruct**, a novel approach to enlightening LLMs with open-source code snippets for generating *low-bias* and *high-quality* instruction data for code.
* 🪄**OSS-Instruct** mitigates the *inherent bias* of the LLM-synthesized instruction data by empowering them with *a wealth of open-source references* to produce more diverse, realistic, and controllable data.


## Model Details
### Model Description
* **Developed by:**
[Yuxiang Wei](https://yuxiang.cs.illinois.edu),
[Zhe Wang](https://github.com/zhewang2001),
[Jiawei Liu](https://jiawei-site.github.io),
[Yifeng Ding](https://yifeng-ding.com),
[Lingming Zhang](https://lingming.cs.illinois.edu)
* **License:** [DeepSeek](https://github.com/deepseek-ai/DeepSeek-Coder/blob/main/LICENSE-MODEL)
* **Finetuned from model:** [deepseek-coder-6.7b-base](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-base)
### Model Sources
* **Repository:** <https://github.com/ise-uiuc/magicoder>
* **Paper:** <https://arxiv.org/abs/2312.02120>
* **Demo (powered by [Gradio](https://www.gradio.app)):**
<https://github.com/ise-uiuc/magicoder/tree/main/demo>
### Training Data
* [Magicoder-OSS-Instruct-75K](https://huggingface.co/datasets/ise-uiuc/Magicoder_oss_instruct_75k): generated through **OSS-Instruct** using `gpt-3.5-turbo-1106` and used to train both Magicoder and Magicoder-S series.
* [Magicoder-Evol-Instruct-110K](https://huggingface.co/datasets/ise-uiuc/Magicoder_evol_instruct_110k): decontaminated and redistributed from [theblackcat102/evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1), used to further finetune Magicoder series and obtain Magicoder-S models.
## Uses
### Direct Use
Magicoders are designed and best suited for **coding tasks**.
### Out-of-Scope Use
Magicoders may not work well in non-coding tasks.
## Bias, Risks, and Limitations
Magicoders may sometimes make errors, produce misleading content, or struggle with tasks that are not related to coding.
### Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## How to Get Started with the Model
Use the code below to get started with the model. Make sure you installed the [transformers](https://huggingface.co/docs/transformers/index) library.
```python
from transformers import pipeline
import torch
MAGICODER_PROMPT = """You are an exceptionally intelligent coding assistant that consistently delivers accurate and reliable responses to user instructions.
@@ Instruction
{instruction}
@@ Response
"""
instruction = "Write a Python function that checks whether a number is prime."  # example instruction; replace with your own
prompt = MAGICODER_PROMPT.format(instruction=instruction)
generator = pipeline(
model="ise-uiuc/Magicoder-S-DS-6.7B",
task="text-generation",
torch_dtype=torch.bfloat16,
device_map="auto",
)
result = generator(prompt, max_length=1024, num_return_sequences=1, temperature=0.0)
print(result[0]["generated_text"])
```
## Technical Details
Refer to our GitHub repo: [ise-uiuc/magicoder](https://github.com/ise-uiuc/magicoder/).
## Citation
```bibtex
@misc{magicoder,
title={Magicoder: Source Code Is All You Need},
author={Yuxiang Wei and Zhe Wang and Jiawei Liu and Yifeng Ding and Lingming Zhang},
year={2023},
eprint={2312.02120},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Acknowledgements
* [WizardCoder](https://github.com/nlpxucan/WizardLM/tree/main/WizardCoder): Evol-Instruct
* [DeepSeek-Coder](https://github.com/deepseek-ai/DeepSeek-Coder): Base model for Magicoder-DS
* [CodeLlama](https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/): Base model for Magicoder-CL
* [StarCoder](https://arxiv.org/abs/2305.06161): Data decontamination
## Important Note
Magicoder models are trained on the synthetic data generated by OpenAI models. Please pay attention to OpenAI's [terms of use](https://openai.com/policies/terms-of-use) when using the models and the datasets. Magicoders will not compete with OpenAI's commercial products.
|
ITT-AF/ITT-42dot_LLM-PLM-1.3B-v6.0 | ITT-AF | 2024-03-06T07:40:07Z | 60 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T06:35:07Z | ---
license: cc-by-nc-4.0
---
# ITT-AF/ITT-42dot_LLM-PLM-1.3B-v6.0
This model is a fine-tuned version of [42dot/42dot_LLM-PLM-1.3B](https://huggingface.co/42dot/42dot_LLM-PLM-1.3B) on a custom dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.0.0
- Tokenizers 0.15.0 |
Sumail/Golden_Waves06_2b | Sumail | 2024-03-06T07:38:00Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Sumail/Bubble_bee04_2b",
"base_model:finetune:Sumail/Bubble_bee04_2b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T07:35:07Z | ---
base_model:
- Sumail/Bubble_bee04_2b
- 0x0dad0/nous_nb00
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
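For reference, SLERP interpolates along the arc between two weight tensors rather than along a straight line. With interpolation factor $t$ and angle $\theta$ between the flattened tensors $p$ and $q$, a standard formulation is:

```latex
\mathrm{slerp}(p, q; t) = \frac{\sin\big((1-t)\theta\big)}{\sin\theta}\, p
                        + \frac{\sin(t\theta)}{\sin\theta}\, q,
\qquad
\theta = \arccos\!\left(\frac{p \cdot q}{\lVert p \rVert\,\lVert q \rVert}\right)
```

The per-filter `t` values in the configuration below control how strongly each model contributes at different layer types; mergekit's exact per-parameter handling may differ from this sketch.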
### Models Merged
The following models were included in the merge:
* [Sumail/Bubble_bee04_2b](https://huggingface.co/Sumail/Bubble_bee04_2b)
* [0x0dad0/nous_nb00](https://huggingface.co/0x0dad0/nous_nb00)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: 0x0dad0/nous_nb00
layer_range: [0, 18]
- model: Sumail/Bubble_bee04_2b
layer_range: [0, 18]
merge_method: slerp
base_model: 0x0dad0/nous_nb00
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.75
dtype: bfloat16
```
|
venkatarajendra/rm-falcon-7b | venkatarajendra | 2024-03-06T07:34:06Z | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:tiiuae/falcon-7b",
"base_model:adapter:tiiuae/falcon-7b",
"region:us"
] | null | 2024-03-06T07:33:47Z | ---
library_name: peft
base_model: tiiuae/falcon-7b
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
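This section is unfilled in the card. As a hedged sketch based only on the front matter (a PEFT adapter over `tiiuae/falcon-7b`), loading could look like the following; the causal-LM head is an assumption (the repo name suggests a reward model), and older transformers versions may additionally need `trust_remote_code=True`:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, then attach this repo's PEFT adapter on top of it
base = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b", device_map="auto")
model = PeftModel.from_pretrained(base, "venkatarajendra/rm-falcon-7b")
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-7b")
```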
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.9.1.dev0 |
anum231/food_classifier | anum231 | 2024-03-06T07:26:58Z | 46 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"base_model:anum231/cancer_classifier_100",
"base_model:finetune:anum231/cancer_classifier_100",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2024-01-27T05:41:37Z | ---
license: apache-2.0
base_model: anum231/cancer_classifier_100
tags:
- generated_from_keras_callback
model-index:
- name: anum231/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# anum231/food_classifier
This model is a fine-tuned version of [anum231/cancer_classifier_100](https://huggingface.co/anum231/cancer_classifier_100) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5815
- Validation Loss: 0.4561
- Train Accuracy: 0.8276
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 1160, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.6210 | 0.4706 | 0.8276 | 0 |
| 0.6095 | 0.4583 | 0.8103 | 1 |
| 0.6289 | 0.4566 | 0.8103 | 2 |
| 0.6230 | 0.5850 | 0.7241 | 3 |
| 0.5815 | 0.4561 | 0.8276 | 4 |
### Framework versions
- Transformers 4.38.1
- TensorFlow 2.15.0
- Datasets 2.18.0
- Tokenizers 0.15.2
|
OwOOwO/eacc_dc_5 | OwOOwO | 2024-03-06T07:17:53Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T07:15:15Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
eunyounglee/degreemotion-bert-finetuning-3 | eunyounglee | 2024-03-06T07:16:27Z | 89 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:klue/bert-base",
"base_model:finetune:klue/bert-base",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T06:44:00Z | ---
license: cc-by-sa-4.0
base_model: klue/bert-base
tags:
- generated_from_trainer
model-index:
- name: degreemotion-bert-finetuning-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# degreemotion-bert-finetuning-3
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.32.1
- Pytorch 2.2.0+cu121
- Datasets 2.12.0
- Tokenizers 0.13.2
|
Hadiboo/boguey | Hadiboo | 2024-03-06T07:16:09Z | 0 | 0 | adapter-transformers | [
"adapter-transformers",
"code",
"art",
"text-generation-inference",
"text-generation",
"en",
"dataset:HuggingFaceTB/cosmopedia",
"region:us"
] | text-generation | 2024-03-06T07:13:10Z | ---
datasets:
- HuggingFaceTB/cosmopedia
language:
- en
library_name: adapter-transformers
pipeline_tag: text-generation
tags:
- code
- art
- text-generation-inference
--- |
Sumail/Golden_Waves04_2b | Sumail | 2024-03-06T07:13:37Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"mergekit",
"merge",
"conversational",
"base_model:Sumail/Bubble_bee04_2b",
"base_model:finetune:Sumail/Bubble_bee04_2b",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T06:42:38Z | ---
base_model:
- 0x0dad0/nous_nb00
- Sumail/Bubble_bee04_2b
library_name: transformers
tags:
- mergekit
- merge
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the SLERP merge method.
### Models Merged
The following models were included in the merge:
* [0x0dad0/nous_nb00](https://huggingface.co/0x0dad0/nous_nb00)
* [Sumail/Bubble_bee04_2b](https://huggingface.co/Sumail/Bubble_bee04_2b)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
- sources:
- model: 0x0dad0/nous_nb00
layer_range: [0, 18]
- model: Sumail/Bubble_bee04_2b
layer_range: [0, 18]
merge_method: slerp
base_model: 0x0dad0/nous_nb00
parameters:
t:
- filter: self_attn
value: [0, 0.5, 0.3, 0.7, 1]
- filter: mlp
value: [1, 0.5, 0.7, 0.3, 0]
- value: 0.5
dtype: bfloat16
```
|
ottopilot/PriyaBelleXL | ottopilot | 2024-03-06T07:09:25Z | 4 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"lora",
"template:sd-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:cc-by-nc-nd-4.0",
"region:us"
] | text-to-image | 2024-03-06T07:07:58Z | ---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
- template:sd-lora
widget:
- text: >-
RAW photo, portrait, close-up, PriBlle, looking at viewer, smiling, perfect
black hair with highlights, brown eyes, professional headshot, shot on
Hasselblad, perfect lighting, dutch angle, bokeh, outdoors, depth of field,
blue dress, warm, loving, friendly <lora:PriyaBelleXL_v1:1>
parameters:
negative_prompt: bindi, mole, facial marks
output:
url: images/00001-3916971016.png
- text: >-
PriBlle, very dark-skinned woman, solo focus, mixed media, realistic anime
art style, art by Yusuke Nakamura, fractal, ukiyoe, watercolor ink wash
technique, intricate, highly detailed. Inspired by multiracial Hindi-West
Indian heritage, San Francisco Bay Area, and diaspora.
<lora:PriyaBelleXL_v1:1>
output:
url: images/00002-2902012777.png
- text: >-
PriBlle as Princess Jasmine, mind controlled by Jafar, sexy red outfit,
tiara, collar, Agrabah palace, entranced by magic:1.1, glowing, compliant,
submissive, obedient, Disney's Aladdin bad end <lora:PriyaBelleXL_v1:1>
output:
url: images/00121-3666660946.png
- text: >-
PriBlle is a college student on campus, dark blue and gold hooded sweatshirt
with bear logo and shorts, Berkeley <lora:PriyaBelleXL_v1:1>
output:
url: images/00172-3938050706.png
- text: >-
PriBlle is hella fine shawty, hyphy, outdoors, Lake Merritt, Oakland,
NorCal, yay area <lora:PriyaBelleXL_v1:1>
output:
url: images/00156-519328175.png
- text: >-
PriBlle, a woman wearing a green Oakland Athletics cap and sexy fan gear,
smiling, ponytail, bodycon, bedroom, natural light, sexy, tease, flirty
<lora:PriyaBelleXL_v1:1>
output:
url: images/00328-1196258457.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: PriBlle
license: cc-by-nc-nd-4.0
---
# Priya Belle (Ottoverse original character) - SDXL 1.0
<Gallery />
## Model description
The same character as https://huggingface.co/ottopilot/PriyaBelle, but trained for SDXL.
## Trigger words
You should use `PriBlle` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/ottopilot/PriyaBelleXL/tree/main) them in the Files & versions tab.
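A minimal diffusers sketch for using these LoRA weights (inference settings are illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("ottopilot/PriyaBelleXL")

image = pipe(
    "RAW photo, portrait, PriBlle, smiling, outdoors, bokeh",  # include the trigger word
    negative_prompt="bindi, mole, facial marks",
    num_inference_steps=30,
).images[0]
image.save("priya.png")
```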
|
mahiatlinux/MasherAI-7B-v0.9-GGUF | mahiatlinux | 2024-03-06T06:59:17Z | 3 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:openchat/openchat-3.5-0106",
"base_model:quantized:openchat/openchat-3.5-0106",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-06T06:57:11Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: openchat/openchat-3.5-0106
---
# Uploaded model
- **Developed by:** mahiatlinux
- **License:** apache-2.0
- **Finetuned from model :** openchat/openchat-3.5-0106
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
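A hedged usage sketch with llama-cpp-python; the quantization filename pattern is an assumption, so check the repo's Files tab for the actual GGUF name:

```python
from llama_cpp import Llama

# filename pattern is a guess; adjust to the actual GGUF file in the repo
llm = Llama.from_pretrained(
    repo_id="mahiatlinux/MasherAI-7B-v0.9-GGUF",
    filename="*Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}]
)
print(out["choices"][0]["message"]["content"])
```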
|
CatBarks/t5_es100SEC2_4_tokenizer | CatBarks | 2024-03-06T06:52:17Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T06:52:16Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
JesseStover/L2AI-dictionary-klue-bert-base | JesseStover | 2024-03-06T06:47:19Z | 89 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"multiple-choice",
"endpoints_compatible",
"region:us"
] | multiple-choice | 2024-03-04T13:52:44Z | ---
{}
---
The L2AI-dictionary model is a fine-tuned checkpoint of [klue/bert-base](https://huggingface.co/klue/bert-base) for multiple choice, specifically for selecting the best dictionary definition of a given word in a sentence. Below is an example usage:
```python
import numpy as np
import torch
from transformers import AutoModelForMultipleChoice, AutoTokenizer
model_name = "JesseStover/L2AI-dictionary-klue-bert-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMultipleChoice.from_pretrained(model_name)
model.to(torch.device("cuda" if torch.cuda.is_available() else "cpu"))
prompts = "\"강아지는 뽀송뽀송하다.\"에 있는 \"강아지\"의 정의는 "
candidates = [
"\"(명사) 개의 새끼\"예요.",
"\"(명사) 부모나 할아버지, 할머니가 자식이나 손주를 귀여워하면서 부르는 말\"이예요."
]
inputs = tokenizer(
[[prompt, candidate] for candidate in candidates],
return_tensors="pt",
padding=True
)
labels = torch.tensor(0).unsqueeze(0)
with torch.no_grad():
outputs = model(
**{k: v.unsqueeze(0) for k, v in inputs.items()}, labels=labels
)
print({i: float(x) for i, x in enumerate(outputs.logits.softmax(1)[0])})
```
Training data was procured under Creative Commons [CC BY-SA 2.0 KR DEED](https://creativecommons.org/licenses/by-sa/2.0/kr/) from the National Institute of Korean Language's [Basic Korean Dictionary](https://krdict.korean.go.kr) and [Standard Korean Dictionary](https://stdict.korean.go.kr/). |
vsocrates/incar-status-any | vsocrates | 2024-03-06T06:44:25Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"longformer",
"text-classification",
"medical",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-03-06T05:07:27Z | ---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- medical
widget:
- text: "Patient is a a formerly incarcerated individual having arrived in the ED with stomach pain."
- example_title: "Former Incarceration"
- text: "Patient arrived in the ED for chest pain."
- example_title: "No Incarceration"
---
# Model Card for incar-status-any
A Clinical Longformer-based model trained by the HAIL lab to predict incarceration status (past and present) in ED Notes.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** Vimig Socrates
- **Model type:** Longformer
- **Language(s) (NLP):** English
- **License:** Apache License 2.0
- **Finetuned from model:** [Clinical Longformer](https://huggingface.co/yikuan8/Clinical-Longformer)
## Uses
This model can be used to predict a patient's incarceration status (past or present) from most types of clinical ED notes.
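A minimal inference sketch (the Longformer backbone's long context window makes it suitable for full-length ED notes):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="vsocrates/incar-status-any")
note = "Patient is a formerly incarcerated individual having arrived in the ED with stomach pain."
print(classifier(note))  # label/score pair indicating predicted incarceration status
```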
## Bias, Risks, and Limitations
This should not be used directly without supervision from a physician as predicting incarceration status incorrectly can have significant negative social and clinical impacts.
## Training Details
### Training Data
This model was trained on custom annotated data labeled for incarceration status from Yale-New Haven Health Hospital System ED Notes.
### Training Procedure
## Evaluation
TODO
### Testing Data, Factors & Metrics
### Results
TODO
## Citation [optional]
Coming soon!
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Model Card Authors [optional]
Vimig Socrates
## Model Card Contact
Vimig Socrates: [[email protected]](mailto:[email protected]) |
samanthakarungi/fine-tuned-bert | samanthakarungi | 2024-03-06T06:42:24Z | 5 | 1 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"finance",
"business",
"en",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2024-02-26T08:29:46Z | ---
language:
- en
widget:
- text: uber for today
- text: airtime and data
- text: breakfast meeting with client
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- finance
- text-classification
- business
---
### Model Description
<p>This model is a fine-tuned version of the <a href="https://huggingface.co/distilbert/distilbert-base-uncased">distilbert-base-uncased</a> model on Hugging Face. The model is trained to classify payment notes for business owners into one of the following categories (a short inference sketch follows the list).</p>
<ol>
<li>INVENTORY, SUPPLIES AND EQUIPMENT</li>
<li>PROFESSIONAL SERVICES</li>
<li>TRANSPORTATION AND TRAVEL</li>
<li>UTILITIES</li>
<li>EMPLOYEE BENEFITS AND COMPENSATION</li>
<li>MEALS AND ENTERTAINMENT</li>
<li>TAX PAYMENTS</li>
<li>LEGAL AND COMPLIANCE FEES</li>
<li>BUSINESS DEVELOPMENT AND INVESTMENT</li>
</ol>
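As referenced above, a short inference sketch (it assumes the checkpoint's config carries an `id2label` mapping for the categories):
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "samanthakarungi/fine-tuned-bert"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("uber for today", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# id2label is assumed to be populated; otherwise this prints a generic LABEL_<n>.
print(model.config.id2label[logits.argmax(-1).item()])
```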
### Base Model Description
<p>DistilBERT is a transformers model, smaller and faster than BERT, which was pretrained on the same corpus in a self-supervised fashion, using the BERT base model as a teacher. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts using the BERT base model.</p>
### Training results
<table>
<tr>
<th>Epoch</th>
<th>Training Loss</th>
<th>Validation Loss</th>
<th>Accuracy</th>
</tr>
<tr>
<th>0</th>
<th>No Log</th>
<th>0.263793</th>
<th>0.916230</th>
</tr>
<tr>
<th>1</th>
<th>No Log</th>
<th>0.185122</th>
<th>0.937173</th>
</tr>
<tr>
<th>2</th>
<th>0.318300</th>
<th>0.191695</th>
<th>0.937173</th>
</tr>
</table>
### Training code
<p>Check out the training code at this <a href="https://github.com/samanthaKarungi/iotec-pay-model-bert/tree/main/model/training_and_evaluation">github repo</a></p>
### Framework versions
<ul>
<li>Transformers 4.37.2</li>
<li>PyTorch 2.2.0</li>
<li>Datasets 2.17.1</li>
<li>Tokenizers 0.15.2</li>
</ul> |
Demo0203/gyx | Demo0203 | 2024-03-06T06:39:57Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T06:35:36Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 263.17 +/- 14.30
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check this repo's files for the actual `.zip` name):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Hypothetical filename; adjust to the archive actually stored in this repo.
checkpoint = load_from_hub(repo_id="Demo0203/gyx", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
GregoRio123/nsy | GregoRio123 | 2024-03-06T06:39:07Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2024-03-06T05:42:51Z | ---
license: creativeml-openrail-m
---
|
gokuls/wav2vec2-base-finetuned-ic-slurp | gokuls | 2024-03-06T06:34:14Z | 9 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-base",
"base_model:finetune:facebook/wav2vec2-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-03-05T13:14:31Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-base
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ic-slurp
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ic-slurp
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1101
- Accuracy: 0.7393
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 96
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
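These settings map onto a standard 🤗 `TrainingArguments` configuration roughly as follows (a reconstruction for illustration only, not the original training script):
```python
from transformers import TrainingArguments

# Reconstruction of the reported settings; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-finetuned-ic-slurp",
    learning_rate=5e-5,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=24,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 96
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
)
```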
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 4.0345 | 1.0 | 527 | 3.9813 | 0.0673 |
| 3.5622 | 2.0 | 1055 | 3.4634 | 0.1867 |
| 2.7737 | 3.0 | 1582 | 2.7252 | 0.3638 |
| 2.1285 | 4.0 | 2110 | 2.1754 | 0.4827 |
| 1.6216 | 5.0 | 2637 | 1.8169 | 0.5701 |
| 1.1786 | 6.0 | 3165 | 1.5773 | 0.6347 |
| 0.8747 | 7.0 | 3692 | 1.5024 | 0.6568 |
| 0.7565 | 8.0 | 4220 | 1.5020 | 0.6694 |
| 0.5236 | 9.0 | 4747 | 1.5287 | 0.6799 |
| 0.4517 | 10.0 | 5275 | 1.5165 | 0.6879 |
| 0.364 | 11.0 | 5802 | 1.5159 | 0.6949 |
| 0.3221 | 12.0 | 6330 | 1.5217 | 0.6996 |
| 0.227 | 13.0 | 6857 | 1.5718 | 0.7075 |
| 0.1828 | 14.0 | 7385 | 1.6979 | 0.6901 |
| 0.1691 | 15.0 | 7912 | 1.6162 | 0.7093 |
| 0.1642 | 16.0 | 8440 | 1.6973 | 0.7048 |
| 0.1254 | 17.0 | 8967 | 1.7060 | 0.7100 |
| 0.1578 | 18.0 | 9495 | 1.7328 | 0.7063 |
| 0.1509 | 19.0 | 10022 | 1.7658 | 0.7073 |
| 0.1409 | 20.0 | 10550 | 1.7770 | 0.7052 |
| 0.1085 | 21.0 | 11077 | 1.8033 | 0.7074 |
| 0.106 | 22.0 | 11605 | 1.7000 | 0.7149 |
| 0.0764 | 23.0 | 12132 | 1.7943 | 0.7104 |
| 0.0671 | 24.0 | 12660 | 1.8323 | 0.7155 |
| 0.0768 | 25.0 | 13187 | 1.8486 | 0.7146 |
| 0.0741 | 26.0 | 13715 | 1.8227 | 0.7187 |
| 0.0731 | 27.0 | 14242 | 1.7824 | 0.7230 |
| 0.0935 | 28.0 | 14770 | 1.8987 | 0.7164 |
| 0.0829 | 29.0 | 15297 | 1.8774 | 0.7202 |
| 0.0588 | 30.0 | 15825 | 1.8820 | 0.7211 |
| 0.059 | 31.0 | 16352 | 1.9535 | 0.7246 |
| 0.0431 | 32.0 | 16880 | 1.9621 | 0.7237 |
| 0.0324 | 33.0 | 17407 | 2.0160 | 0.7256 |
| 0.0447 | 34.0 | 17935 | 1.9392 | 0.7262 |
| 0.025 | 35.0 | 18462 | 2.0095 | 0.7284 |
| 0.0522 | 36.0 | 18990 | 1.9994 | 0.7244 |
| 0.0482 | 37.0 | 19517 | 2.0566 | 0.7262 |
| 0.0203 | 38.0 | 20045 | 2.0287 | 0.7295 |
| 0.0221 | 39.0 | 20572 | 2.0634 | 0.7300 |
| 0.0444 | 40.0 | 21100 | 2.0593 | 0.7302 |
| 0.0348 | 41.0 | 21627 | 2.0712 | 0.7298 |
| 0.0154 | 42.0 | 22155 | 2.0429 | 0.7351 |
| 0.024 | 43.0 | 22682 | 2.0708 | 0.7352 |
| 0.0157 | 44.0 | 23210 | 2.0701 | 0.7368 |
| 0.0222 | 45.0 | 23737 | 2.0963 | 0.7338 |
| 0.0126 | 46.0 | 24265 | 2.1329 | 0.7340 |
| 0.0211 | 47.0 | 24792 | 2.1230 | 0.7370 |
| 0.0288 | 48.0 | 25320 | 2.1101 | 0.7393 |
| 0.0347 | 49.0 | 25847 | 2.1201 | 0.7375 |
| 0.0162 | 49.95 | 26350 | 2.1197 | 0.7381 |
### Framework versions
- Transformers 4.37.2
- Pytorch 2.2.0+cu121
- Datasets 2.17.1
- Tokenizers 0.15.2
|
WokeEngineer/Reinforce-cartPole-v1 | WokeEngineer | 2024-03-06T06:32:42Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2024-03-06T01:17:33Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Mayank1999/dummy-model | Mayank1999 | 2024-03-06T06:26:21Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"camembert",
"fill-mask",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2024-03-06T05:57:51Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
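Pending official instructions, the repo's tags (CamemBERT architecture, `fill-mask` pipeline) suggest a sketch like the following; it assumes the checkpoint behaves as a standard fill-mask model:
```python
from transformers import pipeline

# Hedged sketch: the card is auto-generated, so the mask token and behaviour
# are assumed to match a standard CamemBERT-style checkpoint.
unmasker = pipeline("fill-mask", model="Mayank1999/dummy-model")
print(unmasker("Le camembert est <mask>."))
```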
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
barca-boy/primate_autotrain_sample | barca-boy | 2024-03-06T06:26:19Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"autotrain",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T06:21:58Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))  # use the device the model was loaded on
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
anashrivastava/tinyllama-colorist-lora | anashrivastava | 2024-03-06T06:23:59Z | 1 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"trl",
"sft",
"generated_from_trainer",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v0.3",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v0.3",
"license:apache-2.0",
"region:us"
] | null | 2024-03-06T06:19:00Z | ---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
base_model: PY007/TinyLlama-1.1B-Chat-v0.3
model-index:
- name: tinyllama-colorist-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tinyllama-colorist-lora
This model is a fine-tuned version of [PY007/TinyLlama-1.1B-Chat-v0.3](https://huggingface.co/PY007/TinyLlama-1.1B-Chat-v0.3) on an unspecified dataset.
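Since this repository stores a PEFT (LoRA) adapter rather than full model weights, a typical way to use it is to attach the adapter to the base model. A minimal sketch (the loading setup is illustrative, not from the training code):
```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "PY007/TinyLlama-1.1B-Chat-v0.3"
tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16)

# Attach the LoRA adapter from this repository to the frozen base model.
model = PeftModel.from_pretrained(base, "anashrivastava/tinyllama-colorist-lora")
```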
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- training_steps: 200
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2 |
Hiraishin/reranker-malaysian-mistral-474M | Hiraishin | 2024-03-06T06:13:29Z | 0 | 0 | transformers | [
"transformers",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2024-03-06T06:13:29Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ashwini1412/wav2vec2-nepali-itr-7 | Ashwini1412 | 2024-03-06T06:10:58Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2024-03-06T03:57:03Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
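Pending official instructions, the repo's `automatic-speech-recognition` pipeline tag suggests a sketch like the following; it assumes the checkpoint carries a CTC head and processor for Nepali ASR:
```python
from transformers import pipeline

# Hedged sketch: "sample.wav" is a placeholder path to a local audio file.
asr = pipeline("automatic-speech-recognition", model="Ashwini1412/wav2vec2-nepali-itr-7")
print(asr("sample.wav"))
```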
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Infi-MM/infimm-vicuna13b | Infi-MM | 2024-03-06T06:07:45Z | 18 | 3 | transformers | [
"transformers",
"pytorch",
"infimm-vicuna",
"text-generation",
"multimodal",
"text",
"image",
"image-to-text",
"conversational",
"custom_code",
"en",
"dataset:HuggingFaceM4/OBELICS",
"dataset:laion/laion2B-en",
"dataset:coyo-700m",
"dataset:mmc4",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-01-05T01:45:50Z | ---
language: en
tags:
- multimodal
- text
- image
- image-to-text
datasets:
- HuggingFaceM4/OBELICS
- laion/laion2B-en
- coyo-700m
- mmc4
pipeline_tag: text-generation
inference: true
---
<br>
<p align="center">
<img src="assets/infimm-logo.webp" alt="InfiMM-logo" width="400"></a>
</p>
<br>
# InfiMM
InfiMM, inspired by the Flamingo architecture, sets itself apart with unique training data and diverse large language models (LLMs). This approach allows InfiMM to maintain the core strengths of Flamingo while offering enhanced capabilities. As the premier open-sourced variant in this domain, InfiMM excels in accessibility and adaptability, driven by community collaboration. It's more than an emulation of Flamingo; it's an innovation in visual language processing.
Our model is another attempt to reproduce the results reported in DeepMind's paper "Flamingo: a Visual Language Model for Few-Shot Learning".
Compared with previous open-source attempts ([OpenFlamingo](https://github.com/mlfoundations/open_flamingo) and [IDEFICS](https://huggingface.co/blog/idefics)), InfiMM offers more flexible models, allowing for a wide range of applications.
In particular, InfiMM integrates the latest LLMs into the VLM domain and reveals the impact of LLMs with different scales and architectures.
Please note that InfiMM is currently in its beta stage, and we are continuously working on improving it.
## Model Details
- **Developed by**: Institute of Automation, Chinese Academy of Sciences and ByteDance
- **Model Type**: Visual Language Model (VLM)
- **Language**: English
- **LLMs**: [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), [LLaMA2-13B](https://ai.meta.com/llama/), [Vicuna-13B](https://huggingface.co/lmsys/vicuna-13b-v1.5)
- **Vision Model**: [EVA CLIP](https://huggingface.co/QuanSun/EVA-CLIP)
- **Language(s) (NLP):** en
- **License:** see [License section](#license)
<!---
- **Parent Models:** [QuanSun/EVA-CLIP](https://huggingface.co/QuanSun/EVA-CLIP/blob/main/EVA02_CLIP_L_336_psz14_s6B.pt) and [HuggingFaceH4/zephyr-7b--beta ta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
-->
## Model Family
Our model family consists of several different models. Please see the details below.
| Model | LLM | Vision Encoder | IFT |
| ---------------------- | -------------- | -------------- | --- |
| InfiMM-Zephyr          | Zephyr-7B-beta | ViT-L-336      | No  |
| InfiMM-Llama-13B | Llama2-13B | ViT-G-224 | No |
| InfiMM-Vicuna-13B | Vicuna-13B | ViT-E-224 | No |
| InfiMM-Zephyr-Chat     | Zephyr-7B-beta | ViT-L-336      | Yes |
| InfiMM-Llama-13B-Chat | Llama2-13B | ViT-G-224 | Yes |
| InfiMM-Vicuna-13B-Chat | Vicuna-13B | ViT-E-224 | Yes |
<!-- InfiMM-Zephyr-Chat is an light-weighted, open-source re-production of Flamingo-style Multimodal large language models with chat capability that takes sequences of interleaved images and texts as inputs and generates text outputs, with only 9B parameters.
-->
## Demo
Will be released soon.
Our model adopts the Flamingo architecture, leveraging EVA CLIP as the visual encoder and employing LLaMA2, Vicuna, and Zephyr as language models. The visual and language modalities are connected through a Cross Attention module.
## Quickstart
Use the code below to get started with the base model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor
processor = AutoProcessor.from_pretrained("Infi-MM/infimm-vicuna13b", trust_remote_code=True)
prompts = [
{
"role": "user",
"content": [
{"image": "assets/infimm-logo.webp"},
"Please explain this image to me.",
],
}
]
inputs = processor(prompts)
# use bf16
model = AutoModelForCausalLM.from_pretrained(
"InfiMM/infimm-zephyr",
local_files_only=True,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
).eval()
inputs = inputs.to(model.device)
inputs["batch_images"] = inputs["batch_images"].to(torch.bfloat16)
generated_ids = model.generate(
**inputs,
min_generation_length=0,
max_generation_length=256,
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_text)
```
## Training Details
We employed three stages to train our model: pretraining (PT), multi-task training (MTT), and instruction fine-tuning (IFT). Refer to the tables below for the detailed configuration of each stage. Because the pretraining data contain significant noise, we aimed to enhance the model's accuracy by incorporating higher-quality data. In the multi-task training (MTT) phase, we utilized substantial training data from diverse datasets. However, as the answers in these data mainly consist of single words or phrases, the model's conversational ability was limited. Therefore, in the third stage, we introduced a considerable amount of image-text dialogue data (llava665k) for instruction fine-tuning.
### Pretraining (PT)
We follow training procedures similar to those used in [IDEFICS](https://huggingface.co/HuggingFaceM4/idefics-9b-instruct/blob/main/README.md).
The model is trained on a mixture of image-text pairs and unstructured multimodal web documents. All data are from public sources. Many image URLs have expired, so we were only able to download a subset of the samples. After filtering out low-quality data, the resulting data we used are:
| Data Source | Type of Data | Number of Tokens in Source | Number of Images in Source | Number of Samples | Epochs |
| ---------------------------------------------------------------- | ------------------------------------- | -------------------------- | -------------------------- | ----------------- | ------ |
| [OBELICS](https://huggingface.co/datasets/HuggingFaceM4/OBELICS) | Unstructured Multimodal Web Documents | - | - | 101M | 1 |
| [MMC4](https://github.com/allenai/mmc4) | Unstructured Multimodal Web Documents | - | - | 53M | 1 |
| [LAION](https://huggingface.co/datasets/laion/laion2B-en) | Image-Text Pairs | - | 115M | 115M | 1 |
| [COYO](https://github.com/kakaobrain/coyo-dataset) | Image-Text Pairs | - | 238M | 238M | 1 |
| [LAION-COCO](https://laion.ai/blog/laion-coco/) | Image-Text Pairs | - | 140M | 140M | 1 |
| [PMD\*](https://huggingface.co/datasets/facebook/pmd) | Image-Text Pairs | - | 20M | 20M | 1 |
\*PMD is only used in models with 13B LLMs, not the 7B Zephyr model.
During pretraining on interleaved image-text samples, we apply masked cross-attention; however, we did not strictly follow Flamingo, which alternates each image's attention between its preceding and following text with a probability of 0.5.
We use the following hyperparameters:
| Categories | Parameters | Value |
| ------------------------ | -------------------------- | -------------------- |
| Perceiver Resampler | Number of Layers | 6 |
| | Number of Latents | 64 |
| | Number of Heads | 16 |
| | Resampler Head Dimension | 96 |
| Training | Sequence Length | 384 (13B) / 792 (7B) |
| | Effective Batch Size | 40\*128 |
| | Max Images per Sample | 6 |
| | Weight Decay | 0.1 |
| | Optimizer | Adam(0.9, 0.999) |
| | Gradient Accumulation Step | 2 |
| Learning Rate | Initial Max | 1e-4 |
| | Decay Schedule | Constant |
| | Warmup Step rate | 0.005 |
| Large-scale Optimization | Gradient Checkpointing | False |
| | Precision | bf16 |
| | ZeRO Optimization | Stage 2 |
### Multi-Task Training (MTT)
Here we use mix_cap_vqa to represent the mixed training set drawn from COCO Caption, TextCap, VizWiz Caption, VQAv2, OKVQA, VizWiz VQA, TextVQA, OCRVQA, STVQA, DocVQA, GQA, and ScienceQA-image. For captioning, we add a prefix such as "Please describe the image."; for QA, we add "Answer the question using a single word or phrase.". Specifically, for VizWiz VQA, we use "When the provided information is insufficient, respond with 'Unanswerable'. Answer the question using a single word or phrase.", and for ScienceQA-image, we use "Answer with the option's letter from the given choices directly." A sketch of these templates follows.
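As referenced above, a hypothetical helper illustrating these templates (the function itself and the placement of the instruction relative to the question are our assumptions, not the released training code):
```python
# Hypothetical illustration of the MTT prompt templates described above.
def build_mtt_prompt(task: str, question: str = "") -> str:
    if task == "caption":
        return "Please describe the image."
    if task == "vizwiz_vqa":
        return (f"{question} When the provided information is insufficient, "
                "respond with 'Unanswerable'. Answer the question using a single word or phrase.")
    if task == "scienceqa_image":
        return f"{question} Answer with the option's letter from the given choices directly."
    return f"{question} Answer the question using a single word or phrase."  # generic QA
```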
### Instruction Fine-Tuning (IFT)
For the instruction fine-tuning stage, we use the recently released [LLaVA-MIX-665k](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/tree/main).
We use the following hyperparameters:
| Categories | Parameters | Value |
| ------------------------ | -------------------------- | -------------------- |
| Perceiver Resampler | Number of Layers | 6 |
| | Number of Latents | 64 |
| | Number of Heads | 16 |
| | Resampler Head Dimension | 96 |
| Training | Sequence Length | 384 (13B) / 792 (7B) |
| | Effective Batch Size | 64 |
| | Max Images per Sample | 6 |
| | Weight Decay | 0.1 |
| | Optimizer | Adam(0.9, 0.999) |
| | Gradient Accumulation Step | 2 |
| Learning Rate | Initial Max | 1e-5 |
| | Decay Schedule | Constant |
| | Warmup Step rate | 0.005 |
| Large-scale Optimization | Gradient Checkpointing | False |
| | Precision | bf16 |
| | ZeRO Optimization | Stage 2 |
During IFT, similar to pretraining, we keep the ViT and the LLM frozen for the chat-based LLMs (Vicuna and Zephyr). For the Llama model, we keep the LLM trainable during the IFT stage. We also apply a chat template to process the training samples.
## Evaluation
### PreTraining Evaluation
We evaluate the pretrained models on the following downstream tasks: Image Captioning and VQA. We also compare our results with [IDEFICS](https://huggingface.co/blog/idefics).
| Model | Shots | COCO CIDEr | Flickr30K CIDEr | VQA v2 Acc | TextVQA Acc | OK-VQA Acc |
| ----------------- | ----- | ---------- | --------------- | ---------- | ----------- | ---------- |
| IDEFICS-9B | 0 | 46 | 27.3 | 50.9 | 25.9 | 38.4 |
| | 4 | 93 | 59.7 | 55.4 | 27.6 | 45.5 |
| IDEFICS-80B | 0 | 91.8 | 53.7 | 60 | 30.9 | 45.2 |
| | 4 | 110.3 | 73.7 | 64.6 | 34.4 | 52.4 |
| InfiMM-Zephyr-7B | 0 | 78.8 | 60.7 | 33.7 | 15.2 | 17.1 |
| | 4 | 108.6 | 71.9 | 59.1 | 34.3 | 50.5 |
| InfiMM-Llama2-13B | 0 | 85.4 | 54.6 | 51.6 | 24.2 | 26.4 |
| | 4 | 125.2 | 87.1 | 66.1 | 38.2 | 55.5 |
| InfiMM-Vicuna13B | 0 | 69.6 | 49.6 | 60.4 | 32.8 | 49.2 |
| | 4 | 118.1 | 81.4 | 64.2 | 38.4 | 53.7 |
### IFT Evaluation
In our analysis, we concentrate on two primary benchmarks for evaluating MLLMs: 1) Multi-choice Question Answering (QA) and 2) Open-ended Evaluation. We've observed that the evaluation metrics for tasks like Visual Question Answering (VQA) and Text-VQA are overly sensitive to exact answer matches. This approach can be misleading, particularly when models provide synonymous but technically accurate responses. Therefore, these metrics have been omitted from our comparison for a more precise assessment. The evaluation results are shown in the table below.
| Model | ScienceQA-Img | MME | MM-VET | InfiMM-Eval | MMbench | MMMU-Val | MMMU-Test |
| ------------------- | ------------- | --------------------- | ------ | ------------ | ------- | -------- | --------- |
| Otter-9B | - | 1292/306 | 24.6 | 32.2 | - | 22.69 | - |
| IDEFICS-9B-Instruct | 60.6 | -/- | - | - | - | 24.53 | - |
| InfiMM-Zephyr-7B | 71.1 | P: 1406<br>C:327 | 32.8 | 36.0 | 59.7 | 39.4 | 35.5 |
| InfiMM-Llama-13b | 73.0 | P: 1444.5<br>C: 337.6 | 39.2 | 0.4559/0.414 | 66.4 | 39.1 | 35.2 |
| InfiMM-Vicuna-13B | 74.0 | P: 1461.2<br>C: 323.5 | 36.0 | 40.0 | 66.7 | 37.6 | 34.6 |
<!--
| Model | TextVQA (no ocr) | OK-VQA | VQAv2 | ScienceQA-Img | GQA | MME | MM-VET | MMMU | InfiMM-Eval | MMbench |
| ----------------- | ---------------- | ------ | ----- | ------------- | ---- | --------------------- | ------ | ---- | ------------ | ------- |
| InfiMM-Zephyr-7B | 36.7 | 55.4 | / | 71.1 | | P: 1406<br>C:327 | 32.8 | 39.4 | 36.0 | 59.7 |
| InfiMM-Llama-13b | 44.6 | 62.3 | 78.5 | 73.0 | 61.2 | P: 1444.5<br>C: 337.6 | 39.2 | 39.1 | 0.4559/0.414 | 66.4 |
| InfiMM-Vicuna-13B | 41.7 | 58.5 | 73.0 | 74.0 | 58.5 | P: 1461.2<br>C: 323.5 | 36.0 | 37.6 | 40.0 | 66.7 |
We select checkpoint after 1 epoch instruction fine-tuning.
| Model | <nobr>ScienceQA <br>acc.</nobr> | <nobr>MME <br>P/C</nobr> | <nobr>MM-Vet</nobr> | <nobr>InfiMM-Eval</nobr> | <nobr>MMMU (val)</nobr> |
| :------------------ | ------------------------------: | -----------------------: | ------------------: | -----------------------: | ----------------------: |
| Otter-9B | - | 1292/306 | 24.6 | 22.69 | 32.2 |
| IDEFICS-9B-Instruct | 60.6 | -/- | - | 24.53 | - |
| InfiMM-Zephyr-Chat | 71.14 | 1406/327 | 33.3 | 35.97 | 39.4 |
-->
<details>
<summary>Leaderboard Details</summary>
<img src="assets/infimm-zephyr-mmmu-val.jpeg" style="zoom:40%;" />
<br>MMMU-Val split results<br>
<img src="assets/infimm-zephyr-mmmu-test.jpeg" style="zoom:40%;" />
<br>MMMU-Test split results<br>
</details>
## Citation
```latex
@misc{InfiMM,
title={InfiMM: Advancing Multimodal Understanding from Flamingo's Legacy through Diverse LLM Integration},
author={InfiMM Team},
url={https://huggingface.co/Infi-MM/},
year={2024}
}
```
## License
<a href="https://creativecommons.org/licenses/by-nc/4.0/deed.en">
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/d/d3/Cc_by-nc_icon.svg/600px-Cc_by-nc_icon.svg.png" width="160">
</a>
This project is licensed under the **CC BY-NC 4.0**.
The copyright of the images belongs to the original authors.
See [LICENSE](LICENSE) for more information.
## Contact Us
Please feel free to contact us via email [[email protected]]([email protected]) if you have any questions. |
Infi-MM/infimm-zephyr | Infi-MM | 2024-03-06T06:07:25Z | 17 | 10 | transformers | [
"transformers",
"pytorch",
"infimm-zephyr",
"text-generation",
"multimodal",
"text",
"image",
"image-to-text",
"conversational",
"custom_code",
"en",
"dataset:HuggingFaceM4/OBELICS",
"dataset:laion/laion2B-en",
"dataset:coyo-700m",
"dataset:mmc4",
"autotrain_compatible",
"region:us"
] | text-generation | 2024-01-04T08:15:39Z | ---
language: en
tags:
- multimodal
- text
- image
- image-to-text
datasets:
- HuggingFaceM4/OBELICS
- laion/laion2B-en
- coyo-700m
- mmc4
pipeline_tag: text-generation
inference: true
---
<br>
<p align="center">
<img src="assets/infimm-logo.webp" alt="InfiMM-logo" width="400"></a>
</p>
<br>
# InfiMM
InfiMM, inspired by the Flamingo architecture, sets itself apart with unique training data and diverse large language models (LLMs). This approach allows InfiMM to maintain the core strengths of Flamingo while offering enhanced capabilities. As the premier open-sourced variant in this domain, InfiMM excels in accessibility and adaptability, driven by community collaboration. It's more than an emulation of Flamingo; it's an innovation in visual language processing.
Our model is another attempt to reproduce the results reported in DeepMind's paper "Flamingo: a Visual Language Model for Few-Shot Learning".
Compared with previous open-source attempts ([OpenFlamingo](https://github.com/mlfoundations/open_flamingo) and [IDEFICS](https://huggingface.co/blog/idefics)), InfiMM offers more flexible models, allowing for a wide range of applications.
In particular, InfiMM integrates the latest LLMs into the VLM domain and reveals the impact of LLMs with different scales and architectures.
Please note that InfiMM is currently in its beta stage, and we are continuously working on improving it.
## Model Details
- **Developed by**: Institute of Automation, Chinese Academy of Sciences and ByteDance
- **Model Type**: Visual Language Model (VLM)
- **Language**: English
- **LLMs**: [Zephyr](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), [LLaMA2-13B](https://ai.meta.com/llama/), [Vicuna-13B](https://huggingface.co/lmsys/vicuna-13b-v1.5)
- **Vision Model**: [EVA CLIP](https://huggingface.co/QuanSun/EVA-CLIP)
- **Language(s) (NLP):** en
- **License:** see [License section](#license)
<!---
- **Parent Models:** [QuanSun/EVA-CLIP](https://huggingface.co/QuanSun/EVA-CLIP/blob/main/EVA02_CLIP_L_336_psz14_s6B.pt) and [HuggingFaceH4/zephyr-7b--beta ta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta)
-->
## Model Family
Our model family consists of several different models. Please see the details below.
| Model | LLM | Vision Encoder | IFT |
| ---------------------- | -------------- | -------------- | --- |
| InfiMM-Zephyr          | Zephyr-7B-beta | ViT-L-336      | No  |
| InfiMM-Llama-13B | Llama2-13B | ViT-G-224 | No |
| InfiMM-Vicuna-13B | Vicuna-13B | ViT-E-224 | No |
| InfiMM-Zephyr-Chat     | Zephyr-7B-beta | ViT-L-336      | Yes |
| InfiMM-Llama-13B-Chat | Llama2-13B | ViT-G-224 | Yes |
| InfiMM-Vicuna-13B-Chat | Vicuna-13B | ViT-E-224 | Yes |
<!-- InfiMM-Zephyr-Chat is an light-weighted, open-source re-production of Flamingo-style Multimodal large language models with chat capability that takes sequences of interleaved images and texts as inputs and generates text outputs, with only 9B parameters.
-->
## Demo
Will be released soon.
Our model adopts the Flamingo architecture, leveraging EVA CLIP as the visual encoder and employing LLaMA2, Vicuna, and Zephyr as language models. The visual and language modalities are connected through a Cross Attention module.
## Quickstart
Use the code below to get started with the base model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoProcessor
processor = AutoProcessor.from_pretrained("Infi-MM/infimm-zephyr", trust_remote_code=True)
prompts = [
{
"role": "user",
"content": [
{"image": "assets/infimm-logo.webp"},
"Please explain this image to me.",
],
}
]
inputs = processor(prompts)
# use bf16
model = AutoModelForCausalLM.from_pretrained(
"Infi-MM/infimm-zephyr",
local_files_only=True,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
).eval()
inputs = inputs.to(model.device)
inputs["batch_images"] = inputs["batch_images"].to(torch.bfloat16)
generated_ids = model.generate(
**inputs,
min_generation_length=0,
max_generation_length=256,
)
generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_text)
```
## Training Details
We employed three stages to train our model: pretraining (PT), multi-task training (MTT), and instruction fine-tuning (IFT). Refer to the tables below for the detailed configuration of each stage. Because the pretraining data contain significant noise, we aimed to enhance the model's accuracy by incorporating higher-quality data. In the multi-task training (MTT) phase, we utilized substantial training data from diverse datasets. However, as the answers in these data mainly consist of single words or phrases, the model's conversational ability was limited. Therefore, in the third stage, we introduced a considerable amount of image-text dialogue data (llava665k) for instruction fine-tuning.
### Pretraining (PT)
We follow training procedures similar to those used in [IDEFICS](https://huggingface.co/HuggingFaceM4/idefics-9b-instruct/blob/main/README.md).
The model is trained on a mixture of image-text pairs and unstructured multimodal web documents. All data are from public sources. Many image URLs have expired, so we were only able to download a subset of the samples. After filtering out low-quality data, the resulting data we used are:
| Data Source | Type of Data | Number of Tokens in Source | Number of Images in Source | Number of Samples | Epochs |
| ---------------------------------------------------------------- | ------------------------------------- | -------------------------- | -------------------------- | ----------------- | ------ |
| [OBELICS](https://huggingface.co/datasets/HuggingFaceM4/OBELICS) | Unstructured Multimodal Web Documents | - | - | 101M | 1 |
| [MMC4](https://github.com/allenai/mmc4) | Unstructured Multimodal Web Documents | - | - | 53M | 1 |
| [LAION](https://huggingface.co/datasets/laion/laion2B-en) | Image-Text Pairs | - | 115M | 115M | 1 |
| [COYO](https://github.com/kakaobrain/coyo-dataset) | Image-Text Pairs | - | 238M | 238M | 1 |
| [LAION-COCO](https://laion.ai/blog/laion-coco/) | Image-Text Pairs | - | 140M | 140M | 1 |
| [PMD\*](https://huggingface.co/datasets/facebook/pmd) | Image-Text Pairs | - | 20M | 20M | 1 |
\*PMD is only used in models with 13B LLMs, not the 7B Zephyr model.
During pretraining on interleaved image-text samples, we apply masked cross-attention; however, we did not strictly follow Flamingo, which alternates each image's attention between its preceding and following text with a probability of 0.5.
We use the following hyperparameters:
| Categories | Parameters | Value |
| ------------------------ | -------------------------- | -------------------- |
| Perceiver Resampler | Number of Layers | 6 |
| | Number of Latents | 64 |
| | Number of Heads | 16 |
| | Resampler Head Dimension | 96 |
| Training | Sequence Length | 384 (13B) / 792 (7B) |
| | Effective Batch Size | 40\*128 |
| | Max Images per Sample | 6 |
| | Weight Decay | 0.1 |
| | Optimizer | Adam(0.9, 0.999) |
| | Gradient Accumulation Step | 2 |
| Learning Rate | Initial Max | 1e-4 |
| | Decay Schedule | Constant |
| | Warmup Step rate | 0.005 |
| Large-scale Optimization | Gradient Checkpointing | False |
| | Precision | bf16 |
| | ZeRO Optimization | Stage 2 |
### Multi-Task Training (MTT)
Here we use mix_cap_vqa to represent the mixed training set drawn from COCO Caption, TextCap, VizWiz Caption, VQAv2, OKVQA, VizWiz VQA, TextVQA, OCRVQA, STVQA, DocVQA, GQA, and ScienceQA-image. For captioning, we add a prefix such as "Please describe the image."; for QA, we add "Answer the question using a single word or phrase.". Specifically, for VizWiz VQA, we use "When the provided information is insufficient, respond with 'Unanswerable'. Answer the question using a single word or phrase.", and for ScienceQA-image, we use "Answer with the option's letter from the given choices directly."
### Instruction Fine-Tuning (IFT)
For the instruction fine-tuning stage, we use the recently released [LLaVA-MIX-665k](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K/tree/main).
We use the following hyperparameters:
| Categories | Parameters | Value |
| ------------------------ | -------------------------- | -------------------- |
| Perceiver Resampler | Number of Layers | 6 |
| | Number of Latents | 64 |
| | Number of Heads | 16 |
| | Resampler Head Dimension | 96 |
| Training | Sequence Length | 384 (13B) / 792 (7B) |
| | Effective Batch Size | 64 |
| | Max Images per Sample | 6 |
| | Weight Decay | 0.1 |
| | Optimizer | Adam(0.9, 0.999) |
| | Gradient Accumulation Step | 2 |
| Learning Rate | Initial Max | 1e-5 |
| | Decay Schedule | Constant |
| | Warmup Step rate | 0.005 |
| Large-scale Optimization | Gradient Checkpointing | False |
| | Precision | bf16 |
| | ZeRO Optimization | Stage 2 |
During IFT, similar to pretraining, we keep the ViT and the LLM frozen for the chat-based LLMs (Vicuna and Zephyr). For the Llama model, we keep the LLM trainable during the IFT stage. We also apply a chat template to process the training samples. A sketch of this chat-template processing follows.
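As noted above, a sketch of chat-template processing (the `<image>` placeholder and the use of Zephyr's stock template are our assumptions for illustration):
```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("HuggingFaceH4/zephyr-7b-beta")
sample = [
    {"role": "user", "content": "<image>\nPlease describe the image."},
    {"role": "assistant", "content": "A logo on a plain background."},
]
# Render the sample into Zephyr's chat format without tokenizing.
print(tok.apply_chat_template(sample, tokenize=False))
```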
## Evaluation
### PreTraining Evaluation
We evaluate the pretrained models on the following downstream tasks: Image Captioning and VQA. We also compare our results with [IDEFICS](https://huggingface.co/blog/idefics).
| Model | Shots | COCO CIDEr | Flickr30K CIDEr | VQA v2 Acc | TextVQA Acc | OK-VQA Acc |
| ----------------- | ----- | ---------- | --------------- | ---------- | ----------- | ---------- |
| IDEFICS-9B | 0 | 46 | 27.3 | 50.9 | 25.9 | 38.4 |
| | 4 | 93 | 59.7 | 55.4 | 27.6 | 45.5 |
| IDEFICS-80B | 0 | 91.8 | 53.7 | 60 | 30.9 | 45.2 |
| | 4 | 110.3 | 73.7 | 64.6 | 34.4 | 52.4 |
| InfiMM-Zephyr-7B | 0 | 78.8 | 60.7 | 33.7 | 15.2 | 17.1 |
| | 4 | 108.6 | 71.9 | 59.1 | 34.3 | 50.5 |
| InfiMM-Llama2-13B | 0 | 85.4 | 54.6 | 51.6 | 24.2 | 26.4 |
| | 4 | 125.2 | 87.1 | 66.1 | 38.2 | 55.5 |
| InfiMM-Vicuna13B | 0 | 69.6 | 49.6 | 60.4 | 32.8 | 49.2 |
| | 4 | 118.1 | 81.4 | 64.2 | 38.4 | 53.7 |
### IFT Evaluation
In our analysis, we concentrate on two primary benchmarks for evaluating MLLMs: 1) Multi-choice Question Answering (QA) and 2) Open-ended Evaluation. We've observed that the evaluation metrics for tasks like Visual Question Answering (VQA) and Text-VQA are overly sensitive to exact answer matches. This approach can be misleading, particularly when models provide synonymous but technically accurate responses. Therefore, these metrics have been omitted from our comparison for a more precise assessment. The evaluation results are shown in the table below.
| Model | ScienceQA-Img | MME | MM-VET | InfiMM-Eval | MMbench | MMMU-Val | MMMU-Test |
| ------------------- | ------------- | --------------------- | ------ | ------------ | ------- | -------- | --------- |
| Otter-9B | - | 1292/306 | 24.6 | 32.2 | - | 22.69 | - |
| IDEFICS-9B-Instruct | 60.6 | -/- | - | - | - | 24.53 | - |
| InfiMM-Zephyr-7B | 71.1 | P: 1406<br>C:327 | 32.8 | 36.0 | 59.7 | 39.4 | 35.5 |
| InfiMM-Llama-13b | 73.0 | P: 1444.5<br>C: 337.6 | 39.2 | 0.4559/0.414 | 66.4 | 39.1 | 35.2 |
| InfiMM-Vicuna-13B | 74.0 | P: 1461.2<br>C: 323.5 | 36.0 | 40.0 | 66.7 | 37.6 | 34.6 |
<!--
| Model | TextVQA (no ocr) | OK-VQA | VQAv2 | ScienceQA-Img | GQA | MME | MM-VET | MMMU | InfiMM-Eval | MMbench |
| ----------------- | ---------------- | ------ | ----- | ------------- | ---- | --------------------- | ------ | ---- | ------------ | ------- |
| InfiMM-Zephyr-7B | 36.7 | 55.4 | / | 71.1 | | P: 1406<br>C:327 | 32.8 | 39.4 | 36.0 | 59.7 |
| InfiMM-Llama-13b | 44.6 | 62.3 | 78.5 | 73.0 | 61.2 | P: 1444.5<br>C: 337.6 | 39.2 | 39.1 | 0.4559/0.414 | 66.4 |
| InfiMM-Vicuna-13B | 41.7 | 58.5 | 73.0 | 74.0 | 58.5 | P: 1461.2<br>C: 323.5 | 36.0 | 37.6 | 40.0 | 66.7 |
We select checkpoint after 1 epoch instruction fine-tuning.
| Model | <nobr>ScienceQA <br>acc.</nobr> | <nobr>MME <br>P/C</nobr> | <nobr>MM-Vet</nobr> | <nobr>InfiMM-Eval</nobr> | <nobr>MMMU (val)</nobr> |
| :------------------ | ------------------------------: | -----------------------: | ------------------: | -----------------------: | ----------------------: |
| Otter-9B | - | 1292/306 | 24.6 | 22.69 | 32.2 |
| IDEFICS-9B-Instruct | 60.6 | -/- | - | 24.53 | - |
| InfiMM-Zephyr-Chat | 71.14 | 1406/327 | 33.3 | 35.97 | 39.4 |
-->
<details>
<summary>Leaderboard Details</summary>
<img src="assets/infimm-zephyr-mmmu-val.jpeg" style="zoom:40%;" />
<br>MMMU-Val split results<br>
<img src="assets/infimm-zephyr-mmmu-test.jpeg" style="zoom:40%;" />
<br>MMMU-Test split results<br>
</details>
## Citation
```latex
@misc{InfiMM,
title={InfiMM: Advancing Multimodal Understanding from Flamingo's Legacy through Diverse LLM Integration},
author={InfiMM Team},
url={https://huggingface.co/Infi-MM/},
year={2024}
}
```
## License
<a href="https://creativecommons.org/licenses/by-nc/4.0/deed.en">
<img src="https://upload.wikimedia.org/wikipedia/commons/thumb/d/d3/Cc_by-nc_icon.svg/600px-Cc_by-nc_icon.svg.png" width="160">
</a>
This project is licensed under the **CC BY-NC 4.0**.
The copyright of the images belongs to the original authors.
See [LICENSE](LICENSE) for more information.
## Contact Us
Please feel free to contact us via email [[email protected]]([email protected]) if you have any questions. |
lex117/cproj-gpt | lex117 | 2024-03-06T06:03:24Z | 4 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"text-generation",
"autotrain",
"conversational",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T06:02:46Z | ---
tags:
- autotrain
- text-generation
widget:
- text: "I love AutoTrain because "
license: other
---
# Model Trained Using AutoTrain
This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).
# Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model_path = "PATH_TO_THIS_REPO"
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
model_path,
device_map="auto",
torch_dtype='auto'
).eval()
# Prompt content: "hi"
messages = [
{"role": "user", "content": "hi"}
]
input_ids = tokenizer.apply_chat_template(conversation=messages, tokenize=True, add_generation_prompt=True, return_tensors='pt')
output_ids = model.generate(input_ids.to(model.device))  # use the device the model was loaded on
response = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
# Model response: "Hello! How can I assist you today?"
print(response)
``` |
hughlan1214/Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fine-tuned1.1 | hughlan1214 | 2024-03-06T05:53:20Z | 10 | 1 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:hughlan1214/Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fine-tuned1.1",
"base_model:finetune:hughlan1214/Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fine-tuned1.1",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-03-03T17:32:16Z | ---
license: apache-2.0
base_model: hughlan1214/Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fine-tuned1.1
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fine-tuned1.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fine-tuned1.1
This model is a fine-tuned version of [hughlan1214/SER_wav2vec2-large-xlsr-53_fine-tuned_1.0](https://huggingface.co/hughlan1214/SER_wav2vec2-large-xlsr-53_fine-tuned_1.0) on a [Speech Emotion Recognition (en)](https://www.kaggle.com/datasets/dmitrybabko/speech-emotion-recognition-en) dataset.
This dataset combines four widely used English corpora: Crema, Ravdess, Savee, and Tess, totaling over 12,000 .wav audio files. Each of the four corpora provides 6 to 8 emotion labels.
It achieves the following results on the evaluation set:
- Loss: 1.1815
- Accuracy: 0.5776
- Precision: 0.6236
- Recall: 0.5921
- F1: 0.5806
## For a better performance version, please refer to
[hughlan1214/Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fine-tuned2.0](https://huggingface.co/hughlan1214/Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fine-tuned2.0)
## Model description
The model was built by extracting features with [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) and then fine-tuning over several rounds. It predicts the 7 emotion classes contained in speech, aiming to lay the groundwork for later combining visual micro-expressions and conversational context under LLMs to infer user emotions in real time.
Although the model was trained on purely English datasets, post-release testing showed that it also performs well in predicting emotions in Chinese and French, demonstrating the powerful cross-linguistic capability of the [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) pre-trained model.
```python
emotions = ['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise']
```
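For illustration, a minimal inference sketch (our addition, not the authors'), assuming the checkpoint works with the standard `transformers` audio-classification pipeline on 16 kHz speech:
```python
from transformers import pipeline

# Hedged sketch: load this checkpoint with the stock audio-classification
# pipeline; input is a path to a (preferably 16 kHz mono) .wav file.
classifier = pipeline(
    "audio-classification",
    model="hughlan1214/Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fine-tuned1.1",
)

# Returns a list of {"label": ..., "score": ...} dicts over the 7 emotions above.
print(classifier("path/to/speech.wav"))
```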
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.5816 | 1.0 | 1048 | 1.4920 | 0.4392 | 0.4568 | 0.4623 | 0.4226 |
| 1.2355 | 2.0 | 2096 | 1.2957 | 0.5135 | 0.6082 | 0.5292 | 0.5192 |
| 1.0605 | 3.0 | 3144 | 1.2225 | 0.5405 | 0.5925 | 0.5531 | 0.5462 |
| 1.0291 | 4.0 | 4192 | 1.2163 | 0.5586 | 0.6215 | 0.5739 | 0.5660 |
| 1.0128 | 5.0 | 5240 | 1.1815 | 0.5776 | 0.6236 | 0.5921 | 0.5806 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1
- Datasets 2.17.1
- Tokenizers 0.15.2
|
hughlan1214/SER_wav2vec2-large-xlsr-53_fine-tuned_1.0 | hughlan1214 | 2024-03-06T05:53:02Z | 9 | 1 | transformers | [
"transformers",
"safetensors",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"base_model:facebook/wav2vec2-large-xlsr-53",
"base_model:finetune:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2024-03-03T13:30:18Z | ---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: SER_wav2vec2-large-xlsr-53_fine-tuned_1.0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SER_wav2vec2-large-xlsr-53_240303
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on a [Speech Emotion Recognition (en)](https://www.kaggle.com/datasets/dmitrybabko/speech-emotion-recognition-en) dataset.
This dataset combines four widely used English corpora: Crema, Ravdess, Savee, and Tess, totaling over 12,000 .wav audio files. Each of the four corpora provides 6 to 8 emotion labels.
It achieves the following results on the evaluation set:
- Loss: 1.7923
- Accuracy: 0.2408
- Precision: 0.2324
- Recall: 0.2466
- F1: 0.2226
## For a better performance version, please refer to
[hughlan1214/Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fine-tuned2.0](https://huggingface.co/hughlan1214/Speech_Emotion_Recognition_wav2vec2-large-xlsr-53_240304_SER_fine-tuned2.0)
## Model description
The model was built by extracting features with [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) and then fine-tuning over several rounds. It predicts the 7 emotion classes contained in speech, aiming to lay the groundwork for later combining visual micro-expressions and conversational context under LLMs to infer user emotions in real time.
Although the model was trained on purely English datasets, post-release testing showed that it also performs well in predicting emotions in Chinese and French, demonstrating the powerful cross-linguistic capability of the [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) pre-trained model.
```python
emotions = ['angry', 'disgust', 'fear', 'happy', 'neutral', 'sad', 'surprise']
```
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 3
- total_train_batch_size: 12
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| 1.9297 | 1.0 | 101 | 1.9452 | 0.1233 | 0.0306 | 0.1468 | 0.0454 |
| 1.9114 | 2.0 | 202 | 1.9115 | 0.1773 | 0.1501 | 0.1803 | 0.1323 |
| 1.7863 | 3.0 | 303 | 1.8564 | 0.2081 | 0.1117 | 0.2193 | 0.1336 |
| 1.8439 | 4.0 | 404 | 1.8590 | 0.2042 | 0.2196 | 0.2156 | 0.1755 |
| 1.9361 | 5.0 | 505 | 1.8375 | 0.2081 | 0.2617 | 0.2213 | 0.1573 |
| 1.7572 | 6.0 | 606 | 1.8081 | 0.2100 | 0.2018 | 0.2214 | 0.1841 |
| 1.6715 | 7.0 | 707 | 1.8131 | 0.2389 | 0.2263 | 0.2442 | 0.2129 |
| 1.6687 | 8.0 | 808 | 1.7923 | 0.2408 | 0.2324 | 0.2466 | 0.2226 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.2.1
- Datasets 2.17.1
- Tokenizers 0.15.2
|
LN1996/output_run_3 | LN1996 | 2024-03-06T05:52:53Z | 1 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"lora",
"stable-diffusion",
"stable-diffusion-diffusers",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2024-03-06T05:22:43Z | ---
license: creativeml-openrail-m
library_name: diffusers
tags:
- text-to-image
- diffusers
- lora
- stable-diffusion
- stable-diffusion-diffusers
inference: true
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: photo of a room with professional interior design
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# LoRA DreamBooth - LN1996/output_run_3
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on "photo of a room with professional interior design" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
LoRA for the text encoder was enabled: False.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
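Until an official snippet is added, a possible sketch, assuming these weights load through `diffusers`' attention-processor LoRA API (untested, with default sampling settings rather than tuned values):
```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model these LoRA weights were trained against.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA adaptation weights from this repository.
pipe.unet.load_attn_procs("LN1996/output_run_3")

# Reuse the instance prompt the adapter was trained on.
image = pipe("photo of a room with professional interior design").images[0]
image.save("room.png")
```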
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
asadmasad/GIST-large-finetuned | asadmasad | 2024-03-06T05:49:17Z | 46 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2024-03-04T12:04:59Z | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# asadmasad/GIST-large-finetuned
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('asadmasad/GIST-large-finetuned')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=asadmasad/GIST-large-finetuned)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 2476 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "warmupcosine",
"steps_per_epoch": null,
"warmup_steps": 742,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': True}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 1024, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
UBC-NLP/InfoDCL-hashtag | UBC-NLP | 2024-03-06T05:44:55Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"social media",
"contrastive learning",
"en",
"arxiv:2203.07648",
"license:cc",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-07-07T03:56:29Z | ---
license: cc
language:
- en
library_name: transformers
tags:
- social media
- contrastive learning
---
# Contrastive Learning of Sociopragmatic Meaning in Social Media
<p align="center"> <a href="https://chiyuzhang94.github.io/" target="_blank">Chiyu Zhang</a>, <a href="https://mageed.arts.ubc.ca/" target="_blank">Muhammad Abdul-Mageed</a>, <a href="https://ganeshjawahar.github.io/" target="_blank">Ganesh Jarwaha</a></p>
<p align="center" float="left">
<p align="center">Publish at Findings of ACL 2023</p>
<p align="center"> <a href="https://arxiv.org/abs/2203.07648" target="_blank">Paper</a></p>
<p align="center" width="100%">
<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/infodcl_vis.png?raw=true" alt="Title" style="width: 90%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Illustration of our proposed InfoDCL framework. We exploit distant/surrogate labels (i.e., emojis) to supervise two contrastive losses, corpus-aware contrastive loss (CCL) and Light label-aware contrastive loss (LCL-LiT). Sequence representations from our model should keep the cluster of each class distinguishable and preserve semantic relationships between classes.
## Checkpoints of Models Pre-Trained with InfoDCL
* InfoDCL-RoBERTa trained with TweetEmoji-EN: https://huggingface.co/UBC-NLP/InfoDCL-emoji
* InfoDCL-RoBERTa trained with TweetHashtag-EN: https://huggingface.co/UBC-NLP/InfoDCL-hashtag
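Either checkpoint above can be loaded as a plain encoder. A minimal sketch (our assumption; the mean-pooling choice is illustrative, not prescribed by the paper):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("UBC-NLP/InfoDCL-hashtag")
model = AutoModel.from_pretrained("UBC-NLP/InfoDCL-hashtag")

texts = ["so excited for the weekend!!", "this traffic is unbearable"]
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state  # (batch, seq_len, hidden)

# Mean-pool over non-padding tokens to get one vector per input text.
mask = batch["attention_mask"].unsqueeze(-1)
embeddings = (hidden * mask).sum(1) / mask.sum(1)
print(embeddings.shape)  # (batch, hidden)
```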
## Model Performance
<p align="center" width="100%">
<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/main_table.png?raw=true" alt="main table" style="width: 95%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Fine-tuning results on our 24 Socio-pragmatic Meaning datasets (average macro-F1 over five runs). |
UBC-NLP/InfoDCL-emoji | UBC-NLP | 2024-03-06T05:44:15Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"social media",
"contrastive learning",
"en",
"arxiv:2203.07648",
"license:cc",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2023-07-07T03:54:23Z | ---
license: cc
language:
- en
library_name: transformers
tags:
- social media
- contrastive learning
---
# Contrastive Learning of Sociopragmatic Meaning in Social Media
<p align="center"> <a href="https://chiyuzhang94.github.io/" target="_blank">Chiyu Zhang</a>, <a href="https://mageed.arts.ubc.ca/" target="_blank">Muhammad Abdul-Mageed</a>, <a href="https://ganeshjawahar.github.io/" target="_blank">Ganesh Jarwaha</a></p>
<p align="center" float="left">
<p align="center">Publish at Findings of ACL 2023</p>
<p align="center"> <a href="https://arxiv.org/abs/2203.07648" target="_blank">Paper</a></p>
<p align="center" width="100%">
<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/infodcl_vis.png?raw=true" alt="Title" style="width: 90%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Illustration of our proposed InfoDCL framework. We exploit distant/surrogate labels (i.e., emojis) to supervise two contrastive losses, corpus-aware contrastive loss (CCL) and Light label-aware contrastive loss (LCL-LiT). Sequence representations from our model should keep the cluster of each class distinguishable and preserve semantic relationships between classes.
## Checkpoints of Models Pre-Trained with InfoDCL
* InfoDCL-RoBERTa trained with TweetEmoji-EN: https://huggingface.co/UBC-NLP/InfoDCL-emoji
* InfoDCL-RoBERTa trained with TweetHashtag-EN: https://huggingface.co/UBC-NLP/InfoDCL-hashtag
## Model Performance
<p align="center" width="100%">
<a><img src="https://github.com/UBC-NLP/infodcl/blob/master/images/main_table.png?raw=true" alt="main table" style="width: 95%; min-width: 300px; display: block; margin: auto;"></a>
</p>
Fine-tuning results on our 24 Socio-pragmatic Meaning datasets (average macro-F1 over five runs). |
Litzy619/V0305P4 | Litzy619 | 2024-03-06T05:43:14Z | 0 | 0 | null | [
"safetensors",
"generated_from_trainer",
"base_model:yahma/llama-7b-hf",
"base_model:finetune:yahma/llama-7b-hf",
"license:other",
"region:us"
] | null | 2024-03-05T16:50:42Z | ---
license: other
base_model: yahma/llama-7b-hf
tags:
- generated_from_trainer
model-index:
- name: V0305P4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# V0305P4
This model is a fine-tuned version of [yahma/llama-7b-hf](https://huggingface.co/yahma/llama-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0583
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_steps: 20
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8125 | 0.09 | 10 | 0.6624 |
| 0.25 | 0.17 | 20 | 0.1568 |
| 0.1567 | 0.26 | 30 | 0.1543 |
| 0.1522 | 0.34 | 40 | 0.1471 |
| 0.1487 | 0.43 | 50 | 0.1446 |
| 0.1517 | 0.51 | 60 | 0.1370 |
| 0.1348 | 0.6 | 70 | 0.1154 |
| 0.1261 | 0.68 | 80 | 0.1077 |
| 0.1125 | 0.77 | 90 | 0.0915 |
| 0.1142 | 0.85 | 100 | 0.0879 |
| 0.1095 | 0.94 | 110 | 0.0932 |
| 0.1035 | 1.02 | 120 | 0.0936 |
| 0.094 | 1.11 | 130 | 0.0874 |
| 0.0899 | 1.19 | 140 | 0.0800 |
| 0.0875 | 1.28 | 150 | 0.0835 |
| 0.0887 | 1.37 | 160 | 0.0783 |
| 0.0884 | 1.45 | 170 | 0.0791 |
| 0.0819 | 1.54 | 180 | 0.0745 |
| 0.0831 | 1.62 | 190 | 0.0685 |
| 0.0878 | 1.71 | 200 | 0.0681 |
| 0.0847 | 1.79 | 210 | 0.0680 |
| 0.0798 | 1.88 | 220 | 0.0646 |
| 0.0757 | 1.96 | 230 | 0.0680 |
| 0.0653 | 2.05 | 240 | 0.0663 |
| 0.0557 | 2.13 | 250 | 0.0678 |
| 0.052 | 2.22 | 260 | 0.0634 |
| 0.0517 | 2.3 | 270 | 0.0654 |
| 0.0576 | 2.39 | 280 | 0.0593 |
| 0.0573 | 2.47 | 290 | 0.0584 |
| 0.056 | 2.56 | 300 | 0.0569 |
| 0.0597 | 2.65 | 310 | 0.0584 |
| 0.0514 | 2.73 | 320 | 0.0578 |
| 0.0533 | 2.82 | 330 | 0.0577 |
| 0.0538 | 2.9 | 340 | 0.0582 |
| 0.0507 | 2.99 | 350 | 0.0583 |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.14.6
- Tokenizers 0.14.1
|
jerrish/distilbert-base-uncased-finetuned-ner | jerrish | 2024-03-06T05:37:57Z | 7 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"token-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2024-03-06T05:28:32Z | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0610
- Precision: 0.9266
- Recall: 0.9380
- F1: 0.9323
- Accuracy: 0.9836
## Model description
More information needed
## Intended uses & limitations
More information needed
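As a hedged illustration (the training dataset is unspecified, so the exact entity label set may differ; check `model.config.id2label`), the checkpoint loads with the standard token-classification pipeline:
```python
from transformers import pipeline

# Illustrative only: the entity labels depend on the (unspecified) training data.
ner = pipeline(
    "token-classification",
    model="jerrish/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)

print(ner("Hugging Face was founded in New York City."))
```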
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2562 | 1.0 | 878 | 0.0712 | 0.9007 | 0.9178 | 0.9092 | 0.9797 |
| 0.0512 | 2.0 | 1756 | 0.0607 | 0.9256 | 0.9325 | 0.9291 | 0.9830 |
| 0.0304 | 3.0 | 2634 | 0.0610 | 0.9266 | 0.9380 | 0.9323 | 0.9836 |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
Mweny/alpha-monarch-finetuned-7b-v2.1-8-bit-gguf | Mweny | 2024-03-06T05:36:21Z | 8 | 0 | transformers | [
"transformers",
"gguf",
"mistral",
"text-generation-inference",
"unsloth",
"en",
"base_model:Adeptschneider/alpha-monarch-7B-fine-tuned-model",
"base_model:quantized:Adeptschneider/alpha-monarch-7B-fine-tuned-model",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2024-03-06T02:23:52Z | ---
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- gguf
base_model: Adeptschneider/alpha-monarch-7B-fine-tuned-model
---
# Uploaded model
- **Developed by:** Mweny
- **License:** apache-2.0
- **Finetuned from model:** mlabonne/alpha-monarch-7B-fine-tuned-model
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
letgoofthepizza/Llama-2-7b-chat-hf-finetuned-open-korean-instructions | letgoofthepizza | 2024-03-06T05:06:42Z | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-03-06T04:57:59Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
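Until the authors add an official snippet, a hedged sketch, assuming the checkpoint inherits Llama-2-chat's chat template and loads as a standard causal LM:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "letgoofthepizza/Llama-2-7b-chat-hf-finetuned-open-korean-instructions"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)

# Assumes the tokenizer ships a chat template inherited from Llama-2-chat.
messages = [{"role": "user", "content": "Introduce yourself in Korean."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[1]:], skip_special_tokens=True))
```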
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
min-dong/LLM_test1 | min-dong | 2024-03-06T04:58:42Z | 2 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | 2024-03-06T04:47:11Z | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
guy-smiley/flan-t5-small-samsum | guy-smiley | 2024-03-06T04:55:44Z | 5 | 0 | transformers | [
"transformers",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/flan-t5-small",
"base_model:finetune:google/flan-t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2024-03-06T04:32:29Z | ---
license: apache-2.0
tags:
- generated_from_trainer
base_model: google/flan-t5-small
metrics:
- rouge
model-index:
- name: flan-t5-small-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-samsum
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6726
- Rouge1: 42.9923
- Rouge2: 18.9028
- Rougel: 35.7014
- Rougelsum: 39.2624
- Gen Len: 16.8400
## Model description
More information needed
## Intended uses & limitations
More information needed
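The model name suggests SAMSum-style dialogue summarization; a minimal sketch (ours, not the authors'):
```python
from transformers import pipeline

# Sketch: summarize a short chat transcript, SAMSum-style.
summarizer = pipeline("summarization", model="guy-smiley/flan-t5-small-samsum")

dialogue = """Anna: Are we still on for lunch?
Ben: Yes! 12:30 at the usual place?
Anna: Perfect, see you there."""

print(summarizer(dialogue, max_length=40)[0]["summary_text"])
```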
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.8324 | 1.0 | 1842 | 1.6726 | 42.9923 | 18.9028 | 35.7014 | 39.2624 | 16.8400 |
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
|
cllm/deepseekcoder-7b-instruct-spider | cllm | 2024-03-06T04:46:28Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2024-01-27T09:58:48Z | Metadata:
AR loss to consistency loss ratio: 10: 1
Spider dataset size: 7k
n-token sequence length: 16
Jacobi trajectory data cleaning: True
Target model: Deepseek-Coder-7B fine-tuned on Spider
release date: 02/26/2024 |