modelId (string, 4-112) | sha (string, 40) | lastModified (string, 24) | tags (list) | pipeline_tag (string, 29 classes) | private (bool, 1 class) | author (string, 2-38, nullable) | config (null) | id (string, 4-112) | downloads (float64, 0-36.8M, nullable) | likes (float64, 0-712, nullable) | library_name (string, 17 classes) | __index_level_0__ (int64, 0-38.5k) | readme (string, 0-186k)
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Aktsvigun/bart-base_abssum_debate_3449378
|
482f4de6a41135d8b6dcd6209efab3835eee5c76
|
2022-07-16T21:49:43.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_debate_3449378
| 2 | null |
transformers
| 27,500 |
Entry not found
|
Aktsvigun/bart-base_abssum_debate_4521825
|
7861da1f575c071a95b2a48dc544d8629218fccf
|
2022-07-16T22:01:48.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_debate_4521825
| 2 | null |
transformers
| 27,501 |
Entry not found
|
Aktsvigun/bart-base_abssum_debate_9463133
|
9f5e45ca40369e125f17812babf33e65d1163f37
|
2022-07-16T22:19:19.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_debate_9463133
| 2 | null |
transformers
| 27,502 |
Entry not found
|
Aktsvigun/bart-base_abssum_debate_3198548
|
fa61f12259060dae5473bf558ed174fbf5f5f8cf
|
2022-07-16T22:35:49.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_debate_3198548
| 2 | null |
transformers
| 27,503 |
Entry not found
|
Aktsvigun/bart-base_abssum_debate_4006598
|
faf47ec00f9c5666b2b7757e8284add392753849
|
2022-07-16T22:51:46.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_debate_4006598
| 2 | null |
transformers
| 27,504 |
Entry not found
|
Aktsvigun/bart-base_abssum_debate_3982742
|
6bd540036e2f19e781a761d360c7a3dbbafd03ce
|
2022-07-16T23:08:19.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_debate_3982742
| 2 | null |
transformers
| 27,505 |
Entry not found
|
Aktsvigun/bart-base_abssum_debate_2470973
|
8bc2592aaacb503d624c287f7c71c83e9e27d697
|
2022-07-16T23:22:34.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_debate_2470973
| 2 | null |
transformers
| 27,506 |
Entry not found
|
Aktsvigun/bart-base_abssum_debate_6864530
|
ba135f94799f39de3022398745c8bb205dbcda96
|
2022-07-16T23:37:21.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_debate_6864530
| 2 | null |
transformers
| 27,507 |
Entry not found
|
tanfiona/unicausal
|
61ce1ad8835758209844b56ae43f2dfacd03e5fd
|
2022-07-17T07:01:02.000Z
|
[
"pytorch",
"bert",
"transformers"
] | null | false |
tanfiona
| null |
tanfiona/unicausal
| 2 | null |
transformers
| 27,508 |
Entry not found
|
pyronear/rexnet1_3x
|
2ad4b6847aa3fd71c8699e2707307fb418e8034a
|
2022-07-17T23:47:30.000Z
|
[
"pytorch",
"onnx",
"dataset:pyronear/openfire",
"arxiv:2007.00992",
"transformers",
"image-classification",
"license:apache-2.0"
] |
image-classification
| false |
pyronear
| null |
pyronear/rexnet1_3x
| 2 | null |
transformers
| 27,509 |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- pyronear/openfire
---
# ReXNet-1.3x model
Pretrained on a dataset for wildfire binary classification (soon to be shared). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf).
## Model description
The core idea of the authors is to add a customized Squeeze-and-Excitation layer to the residual blocks to prevent channel redundancy.
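For intuition, here is a minimal, generic Squeeze-and-Excitation block in PyTorch. This is an illustrative sketch only, not the exact layer used in ReXNet; the `reduction` ratio and the choice of linear layers are assumptions.
```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Generic SE block: squeeze spatial dims, then reweight channels."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # (N, C, H, W) -> (N, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        gates = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1)
        return x * gates  # suppress redundant channels, keep informative ones
```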
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision.
### Latest stable release
You can install the latest stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows:
```shell
pip install pyrovision
```
or using [conda](https://anaconda.org/pyronear/pyrovision):
```shell
conda install -c pyronear pyrovision
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/pyronear/pyro-vision.git
pip install -e pyro-vision/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from pyrovision.models import model_from_hf_hub

model = model_from_hf_hub("pyronear/rexnet1_3x").eval()
img = Image.open(path_to_an_image).convert("RGB")  # path_to_an_image: your input file

# Preprocessing
config = model.default_cfg
transform = Compose([
    Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
    PILToTensor(),
    ConvertImageDtype(torch.float32),
    Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)

# Inference
with torch.inference_mode():
    output = model(input_tensor)
    probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2007-00992,
author = {Dongyoon Han and
Sangdoo Yun and
Byeongho Heo and
Young Joon Yoo},
title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural
Network},
journal = {CoRR},
volume = {abs/2007.00992},
year = {2020},
url = {https://arxiv.org/abs/2007.00992},
eprinttype = {arXiv},
eprint = {2007.00992},
timestamp = {Mon, 06 Jul 2020 15:26:01 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
pyronear/rexnet1_5x
|
3c30aad1db06dbff23f9877c20ca5da82d9369f2
|
2022-07-17T23:47:47.000Z
|
[
"pytorch",
"onnx",
"dataset:pyronear/openfire",
"arxiv:2007.00992",
"transformers",
"image-classification",
"license:apache-2.0"
] |
image-classification
| false |
pyronear
| null |
pyronear/rexnet1_5x
| 2 | null |
transformers
| 27,510 |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- pyronear/openfire
---
# ReXNet-1.5x model
Pretrained on a dataset for wildfire binary classification (soon to be shared). The ReXNet architecture was introduced in [this paper](https://arxiv.org/pdf/2007.00992.pdf).
## Model description
The core idea of the authors is to add a customized Squeeze-and-Excitation layer to the residual blocks to prevent channel redundancy.
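For intuition, here is a minimal, generic Squeeze-and-Excitation block in PyTorch. This is an illustrative sketch only, not the exact layer used in ReXNet; the `reduction` ratio and the choice of linear layers are assumptions.
```python
import torch
import torch.nn as nn

class SqueezeExcitation(nn.Module):
    """Generic SE block: squeeze spatial dims, then reweight channels."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # (N, C, H, W) -> (N, C, 1, 1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        gates = self.fc(self.pool(x).view(n, c)).view(n, c, 1, 1)
        return x * gates  # suppress redundant channels, keep informative ones
```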
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision.
### Latest stable release
You can install the latest stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows:
```shell
pip install pyrovision
```
or using [conda](https://anaconda.org/pyronear/pyrovision):
```shell
conda install -c pyronear pyrovision
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/pyronear/pyro-vision.git
pip install -e pyro-vision/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from pyrovision.models import model_from_hf_hub

model = model_from_hf_hub("pyronear/rexnet1_5x").eval()
img = Image.open(path_to_an_image).convert("RGB")  # path_to_an_image: your input file

# Preprocessing
config = model.default_cfg
transform = Compose([
    Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
    PILToTensor(),
    ConvertImageDtype(torch.float32),
    Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)

# Inference
with torch.inference_mode():
    output = model(input_tensor)
    probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/abs-2007-00992,
author = {Dongyoon Han and
Sangdoo Yun and
Byeongho Heo and
Young Joon Yoo},
title = {ReXNet: Diminishing Representational Bottleneck on Convolutional Neural
Network},
journal = {CoRR},
volume = {abs/2007.00992},
year = {2020},
url = {https://arxiv.org/abs/2007.00992},
eprinttype = {arXiv},
eprint = {2007.00992},
timestamp = {Mon, 06 Jul 2020 15:26:01 +0200},
biburl = {https://dblp.org/rec/journals/corr/abs-2007-00992.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{Fernandez_Holocron_2020,
author = {Fernandez, François-Guillaume},
month = {5},
title = {{Holocron}},
url = {https://github.com/frgfm/Holocron},
year = {2020}
}
```
|
pyronear/resnet18
|
a7e4a197a7b582d89ddc1883ab30e39882e3fec1
|
2022-07-17T23:48:06.000Z
|
[
"pytorch",
"onnx",
"dataset:pyronear/openfire",
"arxiv:1512.03385",
"transformers",
"image-classification",
"license:apache-2.0"
] |
image-classification
| false |
pyronear
| null |
pyronear/resnet18
| 2 | null |
transformers
| 27,511 |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- pyronear/openfire
---
# ResNet-18 model
Pretrained on a dataset for wildfire binary classification (soon to be shared).
## Model description
The core idea of the authors is to ease gradient propagation through many layers by adding skip connections.
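For intuition, here is a minimal residual block in PyTorch. This is an illustrative sketch (stride 1 and equal channel counts assumed), not the exact block definition used by this checkpoint.
```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Two 3x3 convs plus an identity shortcut: out = relu(F(x) + x)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The identity shortcut gives gradients a direct path through the block.
        return self.relu(out + x)
```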
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision.
### Latest stable release
You can install the latest stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows:
```shell
pip install pyrovision
```
or using [conda](https://anaconda.org/pyronear/pyrovision):
```shell
conda install -c pyronear pyrovision
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/pyronear/pyro-vision.git
pip install -e pyro-vision/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from pyrovision.models import model_from_hf_hub

model = model_from_hf_hub("pyronear/resnet18").eval()
img = Image.open(path_to_an_image).convert("RGB")  # path_to_an_image: your input file

# Preprocessing
config = model.default_cfg
transform = Compose([
    Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
    PILToTensor(),
    ConvertImageDtype(torch.float32),
    Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)

# Inference
with torch.inference_mode():
    output = model(input_tensor)
    probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/HeZRS15,
author = {Kaiming He and
Xiangyu Zhang and
Shaoqing Ren and
Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {CoRR},
volume = {abs/1512.03385},
year = {2015},
url = {http://arxiv.org/abs/1512.03385},
eprinttype = {arXiv},
eprint = {1512.03385},
timestamp = {Wed, 17 Apr 2019 17:23:45 +0200},
biburl = {https://dblp.org/rec/journals/corr/HeZRS15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{chintala_torchvision_2017,
author = {Chintala, Soumith},
month = {4},
title = {{Torchvision}},
url = {https://github.com/pytorch/vision},
year = {2017}
}
```
|
pyronear/resnet34
|
1f22eff410ee84f6af51100df5835435707b2686
|
2022-07-17T23:48:22.000Z
|
[
"pytorch",
"onnx",
"dataset:pyronear/openfire",
"arxiv:1512.03385",
"transformers",
"image-classification",
"license:apache-2.0"
] |
image-classification
| false |
pyronear
| null |
pyronear/resnet34
| 2 | null |
transformers
| 27,512 |
---
license: apache-2.0
tags:
- image-classification
- pytorch
- onnx
datasets:
- pyronear/openfire
---
# ResNet-34 model
Pretrained on a dataset for wildfire binary classification (soon to be shared).
## Model description
The core idea of the authors is to ease gradient propagation through many layers by adding skip connections.
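For intuition, here is a minimal residual block in PyTorch. This is an illustrative sketch (stride 1 and equal channel counts assumed), not the exact block definition used by this checkpoint.
```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Two 3x3 convs plus an identity shortcut: out = relu(F(x) + x)."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # The identity shortcut gives gradients a direct path through the block.
        return self.relu(out + x)
```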
## Installation
### Prerequisites
Python 3.6 (or higher) and [pip](https://pip.pypa.io/en/stable/)/[conda](https://docs.conda.io/en/latest/miniconda.html) are required to install PyroVision.
### Latest stable release
You can install the latest stable release of the package using [pypi](https://pypi.org/project/pyrovision/) as follows:
```shell
pip install pyrovision
```
or using [conda](https://anaconda.org/pyronear/pyrovision):
```shell
conda install -c pyronear pyrovision
```
### Developer mode
Alternatively, if you wish to use the latest features of the project that haven't made their way to a release yet, you can install the package from source *(install [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) first)*:
```shell
git clone https://github.com/pyronear/pyro-vision.git
pip install -e pyro-vision/.
```
## Usage instructions
```python
import torch
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode
from pyrovision.models import model_from_hf_hub

model = model_from_hf_hub("pyronear/resnet34").eval()
img = Image.open(path_to_an_image).convert("RGB")  # path_to_an_image: your input file

# Preprocessing
config = model.default_cfg
transform = Compose([
    Resize(config['input_shape'][1:], interpolation=InterpolationMode.BILINEAR),
    PILToTensor(),
    ConvertImageDtype(torch.float32),
    Normalize(config['mean'], config['std'])
])
input_tensor = transform(img).unsqueeze(0)

# Inference
with torch.inference_mode():
    output = model(input_tensor)
    probs = output.squeeze(0).softmax(dim=0)
```
## Citation
Original paper
```bibtex
@article{DBLP:journals/corr/HeZRS15,
author = {Kaiming He and
Xiangyu Zhang and
Shaoqing Ren and
Jian Sun},
title = {Deep Residual Learning for Image Recognition},
journal = {CoRR},
volume = {abs/1512.03385},
year = {2015},
url = {http://arxiv.org/abs/1512.03385},
eprinttype = {arXiv},
eprint = {1512.03385},
timestamp = {Wed, 17 Apr 2019 17:23:45 +0200},
biburl = {https://dblp.org/rec/journals/corr/HeZRS15.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
Source of this implementation
```bibtex
@software{chintala_torchvision_2017,
author = {Chintala, Soumith},
month = {4},
title = {{Torchvision}},
url = {https://github.com/pytorch/vision},
year = {2017}
}
```
|
shengnan/visualize-v1-pre10w-preseed1
|
fe03ea7f8f8cc5c780df8a2228f111f10caf35c3
|
2022-07-18T02:42:04.000Z
|
[
"pytorch",
"t5",
"transformers"
] | null | false |
shengnan
| null |
shengnan/visualize-v1-pre10w-preseed1
| 2 | null |
transformers
| 27,513 |
Entry not found
|
shengnan/visualize-cst-v1-pre10w-preseed1
|
b6191273b3973ca89a14acd8703d90305ca5f8ff
|
2022-07-18T03:03:19.000Z
|
[
"pytorch",
"t5",
"transformers"
] | null | false |
shengnan
| null |
shengnan/visualize-cst-v1-pre10w-preseed1
| 2 | null |
transformers
| 27,514 |
Entry not found
|
shengnan/visualize-cst-v2-pre10w-preseed1
|
0d7ff91b7e07b05c50d2ec481e8ec1a0638a3cad
|
2022-07-18T03:08:26.000Z
|
[
"pytorch",
"t5",
"transformers"
] | null | false |
shengnan
| null |
shengnan/visualize-cst-v2-pre10w-preseed1
| 2 | null |
transformers
| 27,515 |
Entry not found
|
MadridMaverick/phobert-base-finetuned-imdb
|
61321c2b0c53c629baeb4df3a13d8cf463bb8423
|
2022-07-18T04:04:21.000Z
|
[
"pytorch",
"roberta",
"fill-mask",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] |
fill-mask
| false |
MadridMaverick
| null |
MadridMaverick/phobert-base-finetuned-imdb
| 2 | null |
transformers
| 27,516 |
---
tags:
- generated_from_trainer
model-index:
- name: phobert-base-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phobert-base-finetuned-imdb
This model is a fine-tuned version of [vinai/phobert-base](https://huggingface.co/vinai/phobert-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2510
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
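For reference, a minimal sketch of how the hyperparameters above map onto `transformers.TrainingArguments`; the output directory is an assumed name, and the Adam betas/epsilon listed are the library defaults.
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="phobert-base-finetuned-imdb",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```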
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7146 | 1.0 | 2500 | 1.4245 |
| 1.3821 | 2.0 | 5000 | 1.2666 |
| 1.3308 | 3.0 | 7500 | 1.2564 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Aktsvigun/bart-base_debate_3449378
|
f2e284136b66a5ec31d2db4895a9c38e1877c2f5
|
2022-07-18T07:43:07.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_3449378
| 2 | null |
transformers
| 27,517 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_3982742
|
ba9454b01c0df5b55d242c082b9edc6c35284f01
|
2022-07-18T07:42:07.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_3982742
| 2 | null |
transformers
| 27,518 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_3198548
|
ce341e9eb72cd4984c5aeb253720aebb80571048
|
2022-07-18T07:44:50.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_3198548
| 2 | null |
transformers
| 27,519 |
Entry not found
|
Aktsvigun/bart-base_debate_4065329
|
82a6ab5dcd3f2407fa6d7c3764bf1e3ea91aa94d
|
2022-07-18T07:45:17.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_4065329
| 2 | null |
transformers
| 27,520 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_4521825
|
999d418c417eb5e230cfab4fa912317d44b2cec4
|
2022-07-18T07:47:22.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_4521825
| 2 | null |
transformers
| 27,521 |
Entry not found
|
Aktsvigun/bart-base_debate_7629317
|
15151b819356f54399e6347ebb79d7739c6b08dd
|
2022-07-18T07:47:49.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_7629317
| 2 | null |
transformers
| 27,522 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_4006598
|
fab1516e2a478ab2c513cc3eb74012c39d275e98
|
2022-07-18T07:59:17.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_4006598
| 2 | null |
transformers
| 27,523 |
Entry not found
|
Aktsvigun/bart-base_debate_4521825
|
7c4d0bd6632007b91c87b362712c4bcc927c16b3
|
2022-07-18T07:59:55.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_4521825
| 2 | null |
transformers
| 27,524 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_7629317
|
4bad91b7eb307f376e3fbe6aa5ca0f86ef7b2018
|
2022-07-18T08:10:21.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_7629317
| 2 | null |
transformers
| 27,525 |
Entry not found
|
Aktsvigun/bart-base_debate_3198548
|
d8dfd3ce6b8daf85956cb2e27c443de53560e1e6
|
2022-07-18T08:06:15.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_3198548
| 2 | null |
transformers
| 27,526 |
Entry not found
|
fqw/t5-pegasus-finetuned
|
30d112e3b6cc9dc97e77cf492deace9835fddd2d
|
2022-07-19T03:01:41.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] |
text2text-generation
| false |
fqw
| null |
fqw/t5-pegasus-finetuned
| 2 | null |
transformers
| 27,527 |
---
tags:
- generated_from_trainer
metrics:
- sacrebleu
model-index:
- name: t5-pegasus-finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-pegasus-finetuned
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 4.8469
- Sacrebleu: 1.6544
- Rouge 1: 0.0646
- Rouge 2: 0.0076
- Rouge L: 0.0629
- Bleu 1: 0.2040
- Bleu 2: 0.0959
- Bleu 3: 0.0484
- Bleu 4: 0.0287
- Meteor: 0.0872
- Gen Len: 16.0997
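The checkpoint is tagged `text2text-generation`; a minimal way to try it, assuming it loads as a standard `transformers` checkpoint (the card documents neither the input format nor the language), is:
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="fqw/t5-pegasus-finetuned")
print(generator("your source text here", max_length=32))  # illustrative input
```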
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 512
- eval_batch_size: 256
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Aktsvigun/bart-base_debate_6864530
|
23f1973cb71b74d1705bf73236796c3dd6a20ce7
|
2022-07-18T08:13:15.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_6864530
| 2 | null |
transformers
| 27,528 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_3449378
|
8956f254f8ee4e4f9a7394a2e7c048beb8b70838
|
2022-07-18T08:18:18.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_3449378
| 2 | null |
transformers
| 27,529 |
Entry not found
|
Aktsvigun/bart-base_pubmed_12345
|
1947055be9536e74198e1739fb47f5a95faefc70
|
2022-07-18T08:14:39.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_pubmed_12345
| 2 | null |
transformers
| 27,530 |
Entry not found
|
Aktsvigun/bart-base_debate_9463133
|
20dd2ada418f1934d0144c7ad87831eac4ab6ff0
|
2022-07-18T08:18:09.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_9463133
| 2 | null |
transformers
| 27,531 |
Entry not found
|
Aktsvigun/bart-base_debate_2470973
|
a201bd59ee81b35b25d49e03efe53e0fdf475dfa
|
2022-07-18T08:24:45.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_2470973
| 2 | null |
transformers
| 27,532 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_6864530
|
d17846b40a0a596d02d8c3feb16f63caa2f83416
|
2022-07-18T08:25:12.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_6864530
| 2 | null |
transformers
| 27,533 |
Entry not found
|
Aktsvigun/bart-base_debate_3982742
|
5a9be604c9ddb66b6d41640906c39c1b30296c3e
|
2022-07-18T08:32:16.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_3982742
| 2 | null |
transformers
| 27,534 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_4065329
|
54b1d14a1223608f1e55bc169ef936dd838ec3d6
|
2022-07-18T08:32:36.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_4065329
| 2 | null |
transformers
| 27,535 |
Entry not found
|
Aktsvigun/bart-base_debate_4006598
|
b5105d4a78b23a8dff9f59d5d961f7982c4a924a
|
2022-07-18T08:43:46.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_4006598
| 2 | null |
transformers
| 27,536 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_9463133
|
4d4c4a509b0007dd4c45df8d91ba7f7814d4df4e
|
2022-07-18T08:39:46.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_9463133
| 2 | null |
transformers
| 27,537 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_2470973
|
45864486eac9284643403609eb43dbcc87e7b330
|
2022-07-18T08:53:47.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_2470973
| 2 | null |
transformers
| 27,538 |
Entry not found
|
Aktsvigun/bart-base_debate_3878022
|
251953a08329f2eb394f4cea6b1493aedb5e3a74
|
2022-07-18T08:50:14.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_3878022
| 2 | null |
transformers
| 27,539 |
Entry not found
|
Aktsvigun/bart-base_debate_5893459
|
8094e17f6c804d043f1efb90fbf0e7ac829f4063
|
2022-07-18T09:03:51.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_5893459
| 2 | null |
transformers
| 27,540 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_3878022
|
65df5752aa310f1a10608a05bea3d424b34514ab
|
2022-07-18T09:05:33.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_3878022
| 2 | null |
transformers
| 27,541 |
Entry not found
|
Aktsvigun/bart-base_abssum_scisummnet_705525
|
935a36fe6bc01c23af369abe3d8767497f2674b9
|
2022-07-18T09:05:44.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_705525
| 2 | null |
transformers
| 27,542 |
Entry not found
|
rsuwaileh/IDRISI-LMR-EN-random-typebased
|
e2b314fea6eab88b5b8b936c247804dfd5d39cb9
|
2022-07-20T14:58:34.000Z
|
[
"pytorch",
"bert",
"token-classification",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
| false |
rsuwaileh
| null |
rsuwaileh/IDRISI-LMR-EN-random-typebased
| 2 | null |
transformers
| 27,543 |
---
license: apache-2.0
---
This model is a BERT-based Location Mention Recognition (LMR) model adapted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/). The model identifies toponym spans in text and predicts their location types, which can be either coarse-grained (e.g., country, city) or fine-grained (e.g., street, POI).
The model is trained using the training splits of all events from [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the `Type-based` LMR mode and using the `Random` version of the data. You can download this data in `BILOU` format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-random-bilou/). More details are available [here](https://github.com/rsuwaileh/IDRISI/tree/main/models).
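A minimal inference sketch, assuming the checkpoint loads as a standard `transformers` token-classification model (the example sentence is illustrative):
```python
from transformers import pipeline

tagger = pipeline("token-classification",
                  model="rsuwaileh/IDRISI-LMR-EN-random-typebased",
                  aggregation_strategy="simple")  # merge subword pieces into spans
print(tagger("Flooding reported in Houston near Buffalo Bayou."))  # spans with location types
```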
* Different variants of the model are available through HuggingFace:
- [rsuwaileh/IDRISI-LMR-EN-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typebased/)
* Arabic models are also available:
- [rsuwaileh/IDRISI-LMR-AR-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-random-typeless/)
- [rsuwaileh/IDRISI-LMR-AR-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-random-typebased/)
- [rsuwaileh/IDRISI-LMR-AR-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-AR-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typebased/)
To cite the models:
```
@article{suwaileh2022tlLMR4disaster,
title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan},
journal={International Journal of Disaster Risk Reduction},
year={2022}
}
@inproceedings{suwaileh2020tlLMR4disaster,
title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6252--6263},
year={2020}
}
```
To cite the IDRISI-R dataset:
```
@article{rsuwaileh2022Idrisi-r,
title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad},
journal={...},
volume={...},
pages={...},
year={2022},
publisher={...}
}
```
|
Aktsvigun/bart-base_debate_6585777
|
194f93516f0af42a9b571ca3fccffb3b86b5b1b8
|
2022-07-18T09:06:06.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_6585777
| 2 | null |
transformers
| 27,544 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_5537116
|
84be63fdd161fd9f7d9cb8d15952cc9e6fd669a9
|
2022-07-18T09:16:10.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_5537116
| 2 | null |
transformers
| 27,545 |
Entry not found
|
Aktsvigun/bart-base_debate_6880281
|
333d0c82150fba20a8489a11257780e1c1b01b87
|
2022-07-18T09:11:43.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_6880281
| 2 | null |
transformers
| 27,546 |
Entry not found
|
Aktsvigun/bart-base_aeslc_4521825
|
94d7eb6f82a559fe866aa761492db1f6cc881f88
|
2022-07-18T09:13:27.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_aeslc_4521825
| 2 | null |
transformers
| 27,547 |
Entry not found
|
Aktsvigun/bart-base_debate_5537116
|
1ae04a2f061ce5ee96e273d90ac10d671a2a2464
|
2022-07-18T09:26:14.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_5537116
| 2 | null |
transformers
| 27,548 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_919213
|
6c11b31c03612d59835f64a9c6228919d3c72013
|
2022-07-18T09:26:11.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_919213
| 2 | null |
transformers
| 27,549 |
Entry not found
|
rsuwaileh/IDRISI-LMR-EN-random-typeless
|
5ace9a7be5cd048b286541ac22748e0114caa95d
|
2022-07-20T14:58:59.000Z
|
[
"pytorch",
"bert",
"token-classification",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
| false |
rsuwaileh
| null |
rsuwaileh/IDRISI-LMR-EN-random-typeless
| 2 | null |
transformers
| 27,550 |
---
license: apache-2.0
---
This model is a BERT-based Location Mention Recognition (LMR) model adapted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/). The model identifies toponym spans in text without predicting their location types.
The model is trained using the training splits of all events from [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the `Type-less` LMR mode and using the `Random` version of the data. You can download this data in `BILOU` format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-random-bilou/). All Location types in the data were normalized to the `LOC` tag. More details about the models are available [here](https://github.com/rsuwaileh/IDRISI/tree/main/models).
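A minimal inference sketch, assuming the checkpoint loads as a standard `transformers` token-classification model (the example sentence is illustrative):
```python
from transformers import pipeline

tagger = pipeline("token-classification",
                  model="rsuwaileh/IDRISI-LMR-EN-random-typeless",
                  aggregation_strategy="simple")  # merge subword pieces into spans
print(tagger("A wildfire was reported near Santa Rosa, California."))  # every span tagged LOC
```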
* Different variants of the model are available through HuggingFace:
- [rsuwaileh/IDRISI-LMR-EN-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typebased/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typebased/)
* Arabic models are also available:
- [rsuwaileh/IDRISI-LMR-AR-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-random-typeless/)
- [rsuwaileh/IDRISI-LMR-AR-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-random-typebased/)
- [rsuwaileh/IDRISI-LMR-AR-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-AR-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typebased/)
To cite the models:
```
@article{suwaileh2022tlLMR4disaster,
title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan},
journal={International Journal of Disaster Risk Reduction},
year={2022}
}
@inproceedings{suwaileh2020tlLMR4disaster,
title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6252--6263},
year={2020}
}
```
To cite the IDRISI-R dataset:
```
@article{rsuwaileh2022Idrisi-r,
title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad},
journal={...},
volume={...},
pages={...},
year={2022},
publisher={...}
}
```
|
Aktsvigun/bart-base_scisummnet_9467153
|
129e7ab705dbbe7dd2e90a2108e3a7c48855eda4
|
2022-07-18T09:28:44.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_9467153
| 2 | null |
transformers
| 27,551 |
Entry not found
|
Aktsvigun/bart-base_debate_919213
|
12a70f19bfa273acfe7b90f507e95c616c860e5d
|
2022-07-18T09:28:58.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_919213
| 2 | null |
transformers
| 27,552 |
Entry not found
|
rsuwaileh/IDRISI-LMR-EN-timebased-typebased
|
653810c546b6f92c27d4e53d6dfbc6cf0a5ff018
|
2022-07-20T14:55:57.000Z
|
[
"pytorch",
"bert",
"token-classification",
"transformers",
"license:apache-2.0",
"autotrain_compatible"
] |
token-classification
| false |
rsuwaileh
| null |
rsuwaileh/IDRISI-LMR-EN-timebased-typebased
| 2 | null |
transformers
| 27,553 |
---
license: apache-2.0
---
This model is a BERT-based Location Mention Recognition (LMR) model adapted from the [TLLMR4CM GitHub](https://github.com/rsuwaileh/TLLMR4CM/). The model identifies toponym spans in text and predicts their location types, which can be either coarse-grained (e.g., country, city) or fine-grained (e.g., street, POI).
The model is trained using the training splits of all events from [IDRISI-R dataset](https://github.com/rsuwaileh/IDRISI) under the `Type-based` LMR mode and using the `Time-based` version of the data. You can download this data in `BILOU` format from [here](https://github.com/rsuwaileh/IDRISI/tree/main/data/LMR/EN/gold-random-bilou/). More details about the models are available [here](https://github.com/rsuwaileh/IDRISI/tree/main/models).
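A minimal inference sketch, assuming the checkpoint loads as a standard `transformers` token-classification model (the example sentence is illustrative):
```python
from transformers import pipeline

tagger = pipeline("token-classification",
                  model="rsuwaileh/IDRISI-LMR-EN-timebased-typebased",
                  aggregation_strategy="simple")  # merge subword pieces into spans
print(tagger("Road closures across Miami Beach after the hurricane."))  # spans with location types
```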
* Different variants of the model are available through HuggingFace:
- [rsuwaileh/IDRISI-LMR-EN-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typeless/)
- [rsuwaileh/IDRISI-LMR-EN-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-random-typebased/)
- [rsuwaileh/IDRISI-LMR-EN-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-EN-timebased-typeless/)
* Arabic models are also available:
- [rsuwaileh/IDRISI-LMR-AR-random-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-random-typeless/)
- [rsuwaileh/IDRISI-LMR-AR-random-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-random-typebased/)
- [rsuwaileh/IDRISI-LMR-AR-timebased-typeless](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typeless/)
- [rsuwaileh/IDRISI-LMR-AR-timebased-typebased](https://huggingface.co/rsuwaileh/IDRISI-LMR-AR-timebased-typebased/)
To cite the models:
```
@article{suwaileh2022tlLMR4disaster,
title={When a Disaster Happens, We Are Ready: Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad and Sajjad, Hassan},
journal={International Journal of Disaster Risk Reduction},
year={2022}
}
@inproceedings{suwaileh2020tlLMR4disaster,
title={Are We Ready for this Disaster? Towards Location Mention Recognition from Crisis Tweets},
author={Suwaileh, Reem and Imran, Muhammad and Elsayed, Tamer and Sajjad, Hassan},
booktitle={Proceedings of the 28th International Conference on Computational Linguistics},
pages={6252--6263},
year={2020}
}
```
To cite the IDRISI-R dataset:
```
@article{rsuwaileh2022Idrisi-r,
title={IDRISI-R: Large-scale English and Arabic Location Mention Recognition Datasets for Disaster Response over Twitter},
author={Suwaileh, Reem and Elsayed, Tamer and Imran, Muhammad},
journal={...},
volume={...},
pages={...},
year={2022},
publisher={...}
}
```
|
Aktsvigun/bart-base_scisummnet_8653685
|
de0e236be70c2f12a85dd2c28e1f65c480a2c4fc
|
2022-07-18T09:31:31.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_8653685
| 2 | null |
transformers
| 27,554 |
Entry not found
|
Aktsvigun/bart-base_debate_9467153
|
6e3cdae193f6a8e70237d3d8c288194b6910d184
|
2022-07-18T09:31:43.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_9467153
| 2 | null |
transformers
| 27,555 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_5893459
|
875745255ad237ffcc79e513000bf3dc00577ff5
|
2022-07-18T09:37:30.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_5893459
| 2 | null |
transformers
| 27,556 |
Entry not found
|
Aktsvigun/bart-base_debate_9478495
|
2dd4d5a1d43533afb109c0564e36fad9674e5dee
|
2022-07-18T09:39:50.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_9478495
| 2 | null |
transformers
| 27,557 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_6585777
|
5212853197efef0816c8031d713bf538b9a4fa04
|
2022-07-18T09:39:31.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_6585777
| 2 | null |
transformers
| 27,558 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_6880281
|
db995f7a93fc9afc90682168d1cf644f7eef18bb
|
2022-07-18T09:46:57.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_6880281
| 2 | null |
transformers
| 27,559 |
Entry not found
|
Aktsvigun/bart-base_debate_8653685
|
08f8774d3f032157cd65cc8353a2d7fa84411635
|
2022-07-18T09:45:19.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_8653685
| 2 | null |
transformers
| 27,560 |
Entry not found
|
Aktsvigun/bart-base_debate_2930982
|
51d2a145de4742e6ffd791ef466ea6e4827677c0
|
2022-07-18T09:47:26.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_debate_2930982
| 2 | null |
transformers
| 27,561 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_9478495
|
fc2a89907332fd83fe901e630efbdf509558bec4
|
2022-07-18T10:00:26.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_9478495
| 2 | null |
transformers
| 27,562 |
Entry not found
|
Aktsvigun/bart-base_scisummnet_2930982
|
df4e6f1327c791f643cbd1a4ed12ac2d96a62e61
|
2022-07-18T10:11:51.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_scisummnet_2930982
| 2 | null |
transformers
| 27,563 |
Entry not found
|
Aktsvigun/bart-base_abssum_scisummnet_4837
|
3440afe56dde11599ddb2b262626b7bad75c47ef
|
2022-07-18T10:29:47.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_4837
| 2 | null |
transformers
| 27,564 |
Entry not found
|
Aktsvigun/bart-base_aeslc_9463133
|
612c9c77a8207daa2eb77a0dd1234c1446525372
|
2022-07-18T10:57:36.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_aeslc_9463133
| 2 | null |
transformers
| 27,565 |
Entry not found
|
Aktsvigun/bart-base_abssum_scisummnet_42
|
fd4dd028fdd8de68f7f875559004aaabc99b52f2
|
2022-07-18T11:45:02.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_42
| 2 | null |
transformers
| 27,566 |
Entry not found
|
Aktsvigun/bart-base_abssum_scisummnet_919213
|
419f38e77b46123c30af9de2dbb7eeb73a01a513
|
2022-07-18T12:39:53.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_919213
| 2 | null |
transformers
| 27,567 |
Entry not found
|
Aktsvigun/bart-base_aeslc_3982742
|
66681c67be0504afa2a4dadb891350bd2f0dfdfb
|
2022-07-18T12:47:55.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_aeslc_3982742
| 2 | null |
transformers
| 27,568 |
Entry not found
|
yixi/bert-finetuned-ner-accelerate
|
5a74e69346d1b3269f4a30be1f4c471ded4f5213
|
2022-07-18T13:18:11.000Z
|
[
"pytorch",
"bert",
"token-classification",
"transformers",
"autotrain_compatible"
] |
token-classification
| false |
yixi
| null |
yixi/bert-finetuned-ner-accelerate
| 2 | null |
transformers
| 27,569 |
Entry not found
|
Aktsvigun/bart-base_abssum_wikihow_all_4837
|
3d3d1f510baf6bd13ef39dc0bcac870bf173cdfb
|
2022-07-18T13:56:16.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_wikihow_all_4837
| 2 | null |
transformers
| 27,570 |
Entry not found
|
andreaschandra/pegasus-samsum
|
480c0371c4f9b91d42e0aad7cea5db066acce40f
|
2022-07-18T14:47:25.000Z
|
[
"pytorch",
"pegasus",
"text2text-generation",
"dataset:samsum",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] |
text2text-generation
| false |
andreaschandra
| null |
andreaschandra/pegasus-samsum
| 2 | null |
transformers
| 27,571 |
---
tags:
- generated_from_trainer
datasets:
- samsum
model-index:
- name: pegasus-samsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-samsum
This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
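A minimal usage sketch, assuming standard `transformers` loading (the dialogue below is illustrative, in the style of samsum):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="andreaschandra/pegasus-samsum")
dialogue = "Anna: Are we still on for lunch?\nBen: Yes, 12:30 at the usual place."
print(summarizer(dialogue)[0]["summary_text"])
```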
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.19.4
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Aktsvigun/bart-base_abssum_scisummnet_9467153
|
07f6572126c0b602ebf7644073868c574fbcf494
|
2022-07-18T14:09:54.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_9467153
| 2 | null |
transformers
| 27,572 |
Entry not found
|
Aktsvigun/bart-base_aeslc_2470973
|
667c3ba39c0e7dfbee38420d2eaa87d17128819a
|
2022-07-18T14:14:32.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_aeslc_2470973
| 2 | null |
transformers
| 27,573 |
Entry not found
|
Aktsvigun/bart-base_abssum_scisummnet_6585777
|
47051197b79e8eebfea9931d8d7a78c6001ad734
|
2022-07-18T14:57:12.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_6585777
| 2 | null |
transformers
| 27,574 |
Entry not found
|
Aktsvigun/bart-base_abssum_scisummnet_3878022
|
af7d29e6d83ecb4f6ee2c955f42c4104a3d6b817
|
2022-07-18T15:43:41.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_3878022
| 2 | null |
transformers
| 27,575 |
Entry not found
|
Aktsvigun/bart-base_aeslc_6864530
|
8376270386114a25b1865229e50d22bd8ecca14b
|
2022-07-18T15:57:57.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_aeslc_6864530
| 2 | null |
transformers
| 27,576 |
Entry not found
|
Aktsvigun/bart-base_abssum_scisummnet_5537116
|
24d440dce083d5153d269c03e6db76734e568329
|
2022-07-18T16:47:44.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_5537116
| 2 | null |
transformers
| 27,577 |
Entry not found
|
Aktsvigun/bart-base_abssum_scisummnet_5893459
|
8c60d7d8b944a6dedbf3fabd3b267e00566b4804
|
2022-07-18T17:34:12.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_5893459
| 2 | null |
transformers
| 27,578 |
Entry not found
|
Aktsvigun/bart-base_abssum_scisummnet_8653685
|
2f2fc12eb29f84f1da94c603d0e79eadfa7c1d63
|
2022-07-18T18:31:30.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_8653685
| 2 | null |
transformers
| 27,579 |
Entry not found
|
Aktsvigun/bart-base_abssum_scisummnet_6880281
|
3c59b906d4be99878459ff497acf7d144a4138eb
|
2022-07-18T19:26:41.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_6880281
| 2 | null |
transformers
| 27,580 |
Entry not found
|
Aktsvigun/bart-base_abssum_scisummnet_9478495
|
0f4a5f6984d5550e7ba049414bcb653256dc74ce
|
2022-07-18T20:32:57.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_9478495
| 2 | null |
transformers
| 27,581 |
Entry not found
|
Aktsvigun/bart-base_abssum_scisummnet_2930982
|
8b6ef99abeca4bb6f12c2ad44f27cd859c30a6c5
|
2022-07-18T21:40:47.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_2930982
| 2 | null |
transformers
| 27,582 |
Entry not found
|
Aktsvigun/bart-base_abssum_wikihow_all_919213
|
bdc92a33564d96e46c5497768bb8107763f3ad17
|
2022-07-18T22:02:40.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_wikihow_all_919213
| 2 | null |
transformers
| 27,583 |
Entry not found
|
Aktsvigun/bart-base_abssum_scisummnet_7629317
|
055cf58812c4dbf2d24be08c781db9683c952e1b
|
2022-07-18T22:53:17.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_7629317
| 2 | null |
transformers
| 27,584 |
Entry not found
|
Aktsvigun/bart-base_abssum_scisummnet_4065329
|
07e2ec993aec1e8f5d7e8d3742ea65306e92d948
|
2022-07-19T00:16:54.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_4065329
| 2 | null |
transformers
| 27,585 |
Entry not found
|
Aktsvigun/bart-base_abssum_scisummnet_3449378
|
747e5da6579ad58820a0b560b75a79610d67a786
|
2022-07-19T01:22:14.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_3449378
| 2 | null |
transformers
| 27,586 |
Entry not found
|
duchung17/wav2vec2-base-cmv
|
ecb004d909aa2ae10174ee1f3777dad21463f16b
|
2022-07-19T04:20:43.000Z
|
[
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"dataset:common_voice_9_0",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] |
automatic-speech-recognition
| false |
duchung17
| null |
duchung17/wav2vec2-base-cmv
| 2 | null |
transformers
| 27,587 |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice_9_0
model-index:
- name: wav2vec2-base-cmv
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-cmv
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_9_0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5441
- Wer: 0.6622
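A minimal usage sketch, assuming the checkpoint loads as a standard CTC model (the audio path is illustrative; 16 kHz input assumed):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="duchung17/wav2vec2-base-cmv")
print(asr("sample.wav")["text"])  # hypothetical audio file
```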
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 11.8566 | 4.84 | 300 | 3.6325 | 1.0 |
| 3.5331 | 9.68 | 600 | 3.4828 | 1.0 |
| 1.9836 | 14.52 | 900 | 1.5971 | 0.7705 |
| 0.3777 | 19.35 | 1200 | 1.4990 | 0.6944 |
| 0.2192 | 24.19 | 1500 | 1.4303 | 0.6571 |
| 0.1552 | 29.03 | 1800 | 1.5441 | 0.6622 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.10.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Aktsvigun/bart-base_abssum_scisummnet_4521825
|
15b468c2b3bb3fb2ffd7e8c77011486abe026cb8
|
2022-07-19T02:29:01.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_4521825
| 2 | null |
transformers
| 27,588 |
Entry not found
|
Aktsvigun/bart-base_abssum_scisummnet_9463133
|
2eb66ac1bbf862124cbdd16bc2da82cc2c0d4fec
|
2022-07-19T03:20:30.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_9463133
| 2 | null |
transformers
| 27,589 |
Entry not found
|
fqw/t5-pegasus-finetuned_test
|
4d931d974b5bcdd192fc915bcf7fb9af61f77e8d
|
2022-07-19T06:14:16.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"generated_from_trainer",
"model-index",
"autotrain_compatible"
] |
text2text-generation
| false |
fqw
| null |
fqw/t5-pegasus-finetuned_test
| 2 | null |
transformers
| 27,590 |
---
tags:
- generated_from_trainer
metrics:
- sacrebleu
model-index:
- name: t5-pegasus-finetuned_test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-pegasus-finetuned_test
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 6.0045
- Sacrebleu: 0.8737
- Rouge 1: 0.0237
- Rouge 2: 0.0
- Rouge L: 0.0232
- Bleu 1: 0.1444
- Bleu 2: 0.0447
- Bleu 3: 0.0175
- Bleu 4: 0.0083
- Meteor: 0.0609
- Gen Len: 15.098
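As with the related `fqw/t5-pegasus-finetuned` card above, a minimal way to try the checkpoint, assuming standard `transformers` loading (input format and language are undocumented), is:
```python
from transformers import pipeline

generator = pipeline("text2text-generation", model="fqw/t5-pegasus-finetuned_test")
print(generator("your source text here", max_length=32))  # illustrative input
```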
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
- mixed_precision_training: Native AMP
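A hedged sketch of the same settings as `transformers.TrainingArguments` (a seq2seq run would more likely use the `Seq2SeqTrainingArguments` subclass, and `output_dir` is a placeholder):

```python
from transformers import TrainingArguments

# Sketch of the reported run configuration.
training_args = TrainingArguments(
    output_dir="t5-pegasus-finetuned_test",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=128,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=2,   # 128 x 2 = total train batch size 256
    lr_scheduler_type="linear",
    num_train_epochs=100,
    fp16=True,                       # Native AMP
)
```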
### Training results
| Training Loss | Epoch | Step | Validation Loss | Sacrebleu | Rouge 1 | Rouge 2 | Rouge L | Bleu 1 | Bleu 2 | Bleu 3 | Bleu 4 | Meteor | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|:-------:|:-------:|:------:|:------:|:------:|:------:|:------:|:-------:|
| No log | 52.5 | 210 | 5.9818 | 0.9114 | 0.0229 | 0.0 | 0.0225 | 0.1424 | 0.0436 | 0.0183 | 0.0091 | 0.06 | 15.126 |
| No log | 70.0 | 280 | 6.0072 | 0.876 | 0.0233 | 0.0 | 0.0228 | 0.1437 | 0.0452 | 0.0177 | 0.0083 | 0.0607 | 15.088 |
| No log | 87.5 | 350 | 6.0017 | 0.8695 | 0.0229 | 0.0 | 0.0225 | 0.1445 | 0.0443 | 0.0175 | 0.0082 | 0.0609 | 15.12 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.9.1+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
Aktsvigun/bart-base_abssum_scisummnet_3198548
|
c6fcff92a386cc9c9519ca03afdbe022f6b81c3f
|
2022-07-19T04:22:57.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_3198548
| 2 | null |
transformers
| 27,591 |
Entry not found
|
Aktsvigun/bart-base_abssum_scisummnet_4006598
|
039a896b86602a054066dbabaaa8f19bc4b6253f
|
2022-07-19T05:03:18.000Z
|
[
"pytorch",
"bart",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
Aktsvigun
| null |
Aktsvigun/bart-base_abssum_scisummnet_4006598
| 2 | null |
transformers
| 27,592 |
Entry not found
|
fqw/mt5-small-finetuned-test
|
6526fca18ceb413c80f5ff27e77e008dc0567701
|
2022-07-19T09:03:40.000Z
|
[
"pytorch",
"mt5",
"text2text-generation",
"transformers",
"autotrain_compatible"
] |
text2text-generation
| false |
fqw
| null |
fqw/mt5-small-finetuned-test
| 2 | null |
transformers
| 27,593 |
Entry not found
|
jordyvl/udpos28-en-CRF-first-POS
|
a5232068308eb830113515cbc8f7932949668af9
|
2022-07-19T10:42:26.000Z
|
[
"pytorch",
"tensorboard",
"bert",
"transformers"
] | null | false |
jordyvl
| null |
jordyvl/udpos28-en-CRF-first-POS
| 2 | null |
transformers
| 27,594 |
Entry not found
|
chinmaygharat/testDA
|
511333024be37ee2c29a51ba25724778fa16b758
|
2022-07-19T10:37:23.000Z
|
[
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
| false |
chinmaygharat
| null |
chinmaygharat/testDA
| 2 | null |
transformers
| 27,595 |
Entry not found
|
zenkingsama/hubert-large-ll60k-librispeech-clean-100h-demo-dist
|
b310220bccdb85d783fd85caf4cb5d4d5da2e00e
|
2022-07-19T14:41:15.000Z
|
[
"pytorch",
"tensorboard",
"hubert",
"automatic-speech-recognition",
"transformers",
"librispeech_asr",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] |
automatic-speech-recognition
| false |
zenkingsama
| null |
zenkingsama/hubert-large-ll60k-librispeech-clean-100h-demo-dist
| 2 | null |
transformers
| 27,596 |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- librispeech_asr
- generated_from_trainer
model-index:
- name: hubert-large-ll60k-librispeech-clean-100h-demo-dist
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-large-ll60k-librispeech-clean-100h-demo-dist
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the LIBRISPEECH_ASR - CLEAN dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1361
- Wer: 0.9769
## Model description
More information needed
## Intended uses & limitations
More information needed
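Absent author guidance, a hedged usage sketch follows; note that the reported WER of roughly 0.98 suggests this checkpoint is experimental, so treat transcriptions accordingly:

```python
from transformers import pipeline

# WER near 0.98 on the eval set: outputs are likely unreliable.
asr = pipeline(
    "automatic-speech-recognition",
    model="zenkingsama/hubert-large-ll60k-librispeech-clean-100h-demo-dist",
)
print(asr("sample.flac")["text"])
```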
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a code sketch follows the list):
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
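A minimal sketch of these settings as `transformers.TrainingArguments` (`output_dir` is a placeholder; no gradient accumulation is reported for this run):

```python
from transformers import TrainingArguments

# Sketch of the reported run configuration.
training_args = TrainingArguments(
    output_dir="hubert-large-ll60k-librispeech-clean-100h-demo-dist",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=3.0,
    fp16=True,  # Native AMP
)
```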
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.4699 | 0.15 | 100 | 2.0098 | 1.0 |
| 1.1 | 0.3 | 200 | 1.0910 | 0.9769 |
| 0.6468 | 0.45 | 300 | 1.5089 | 0.9769 |
| 0.6928 | 0.61 | 400 | 1.6171 | 1.0 |
| 0.698 | 0.76 | 500 | 1.2406 | 0.9769 |
| 1.0461 | 0.91 | 600 | 1.3760 | 1.0 |
| 0.6363 | 1.06 | 700 | 2.1654 | 1.0 |
| 0.6743 | 1.21 | 800 | 1.7481 | 0.9769 |
| 0.565 | 1.36 | 900 | 2.1965 | 1.0 |
| 0.5761 | 1.51 | 1000 | 1.8223 | 0.9769 |
| 0.6179 | 1.66 | 1100 | 1.9976 | 0.9769 |
| 0.5052 | 1.82 | 1200 | 1.5585 | 0.9769 |
| 0.5434 | 1.97 | 1300 | 2.0349 | 0.9769 |
| 0.4997 | 2.12 | 1400 | 2.4083 | 0.9769 |
| 0.489 | 2.27 | 1500 | 2.4164 | 0.9769 |
| 0.5104 | 2.42 | 1600 | 2.4970 | 0.9769 |
| 0.5324 | 2.57 | 1700 | 2.3352 | 0.9769 |
| 0.5207 | 2.72 | 1800 | 2.2009 | 0.9769 |
| 0.5224 | 2.87 | 1900 | 2.1035 | 0.9769 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
chinmaygharat/test1
|
5519629f71b389400927c90bbdf0e2418a108a4e
|
2022-07-19T16:17:17.000Z
|
[
"pytorch",
"distilbert",
"fill-mask",
"transformers",
"autotrain_compatible"
] |
fill-mask
| false |
chinmaygharat
| null |
chinmaygharat/test1
| 2 | null |
transformers
| 27,597 |
Entry not found
|
himal/swin-tiny-patch4-window7-224-finetuned-eurosat
|
5f011269b503b92f4a8988afec50181460059b3a
|
2022-07-19T17:44:49.000Z
|
[
"pytorch",
"tensorboard",
"swin",
"image-classification",
"dataset:imagefolder",
"transformers",
"generated_from_trainer",
"license:apache-2.0",
"model-index"
] |
image-classification
| false |
himal
| null |
himal/swin-tiny-patch4-window7-224-finetuned-eurosat
| 2 | null |
transformers
| 27,598 |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: swin-tiny-patch4-window7-224-finetuned-eurosat
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9755555555555555
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0738
- Accuracy: 0.9756
## Model description
More information needed
## Intended uses & limitations
More information needed
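A hedged inference sketch pending author details (assuming the checkpoint at `himal/swin-tiny-patch4-window7-224-finetuned-eurosat` loads with its bundled image processor; the file name is a placeholder):

```python
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="himal/swin-tiny-patch4-window7-224-finetuned-eurosat",
)

# Accepts a local path, URL, or PIL image; returns label/score dicts (top-5 by default).
print(classifier("satellite_tile.png"))
```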
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a code sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
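A minimal sketch of these settings as `transformers.TrainingArguments` (`output_dir` is a placeholder). Note that warmup is expressed here as a fraction of total steps rather than a fixed step count:

```python
from transformers import TrainingArguments

# Sketch of the reported run configuration; no mixed precision is reported.
training_args = TrainingArguments(
    output_dir="swin-tiny-patch4-window7-224-finetuned-eurosat",  # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 x 4 = total train batch size 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,               # warmup as a fraction of total steps
    num_train_epochs=3,
)
```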
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2469 | 1.0 | 190 | 0.1173 | 0.9622 |
| 0.1471 | 2.0 | 380 | 0.0806 | 0.9748 |
| 0.1588 | 3.0 | 570 | 0.0738 | 0.9756 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|
rapid3/gpt2-wikitext2
|
f8351d205242dd6749a8b851e682c2f9a700d3a3
|
2022-07-19T19:15:42.000Z
|
[
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"transformers",
"generated_from_trainer",
"license:mit",
"model-index"
] |
text-generation
| false |
rapid3
| null |
rapid3/gpt2-wikitext2
| 2 | null |
transformers
| 27,599 |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 6.1100
## Model description
More information needed
## Intended uses & limitations
More information needed
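Pending author details, a hedged sampling sketch. Note that the evaluation loss of 6.1100 corresponds to a perplexity of roughly exp(6.11) ≈ 450, so expect rough generations:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="rapid3/gpt2-wikitext2")

# Returns a list of dicts with a "generated_text" key; the prompt is a placeholder.
print(generator("The history of", max_new_tokens=30)[0]["generated_text"])
```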
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a code sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
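A minimal sketch of these settings as `transformers.TrainingArguments` (`output_dir` is a placeholder; no gradient accumulation, warmup, or mixed precision is reported):

```python
from transformers import TrainingArguments

# Sketch of the reported run configuration.
training_args = TrainingArguments(
    output_dir="gpt2-wikitext2",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3.0,
)
```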
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5578 | 1.0 | 2249 | 6.4697 |
| 6.1907 | 2.0 | 4498 | 6.1998 |
| 6.0152 | 3.0 | 6747 | 6.1100 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
|