modelId (string, 5-139 chars) | author (string, 2-42 chars) | last_modified (timestamp[us, tz=UTC], 2020-02-15 11:33:14 to 2025-06-23 18:27:52) | downloads (int64, 0 to 223M) | likes (int64, 0 to 11.7k) | library_name (string, 492 classes) | tags (sequence, 1 to 4.05k items) | pipeline_tag (string, 54 classes) | createdAt (timestamp[us, tz=UTC], 2022-03-02 23:29:04 to 2025-06-23 18:25:26) | card (string, 11 chars to 1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
sana-ngu/bart-base-finetuned-summarize-scientific-articles | sana-ngu | 2023-05-12T20:07:11Z | 101 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-12T19:30:00Z | # How to use
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="sana-ngu/bart-base-finetuned-summarize-scientific-articles")
article = "The novel Coronavirus disease (COVID-19), caused by the severe acute respiratory syndrome coronavirus—2 (SARS-CoV-2), in Africa is characterised by a more substantial proportion of asymptomatic (or mildly symptomatic) individuals thought to be playing a role in the spread of the infection. The exact proportion and degree of infectiousness of asymptomatic individuals remains unclear. Studies however indicate that their management is crucial for control of SARS-CoV-2 transmission.
We developed a simplified deterministic susceptible-exposed-infectious-removed (SEIR) mathematical model to assess the effect of active isolation of SARS-CoV-2 infected but asymptomatic individuals through blanket testing for control of the outbreak in Lusaka Province of Zambia. Here we modelled two scenarios; (1) assuming asymptomatic individuals comprised 70% of all COVID-19 cases and (2) asymptomatic individuals comprised only 50% of the cases. For contrast, the model was assessed first under the assumption that asymptomatic individuals are equally as infectious as symptomatic individuals and then secondly, and more likely, assuming asymptomatic individuals are only half as infectious as symptomatic individuals.
For the model assuming 70% asymptomatic cases, a minimum sustained daily blanket testing rate of ≥ 7911 tests/100000 population was sufficient to control the outbreak if asymptomatic individuals are only half as infectious while if equal infectiousness was assumed then a testing rate of ≥ 10028 tests/ 100000 population would be required. For 50% asymptomatic, minimum blanket testing rates of ≥ 4540 tests/ 100000 population was sufficient to control the outbreak at both assumed levels of infectiousness for asymptomatic individuals relative to symptomatic individuals.
Discussion and conclusion
Our model predicts that active isolation of COVID-19 cases, including asymptomatic individuals, through blanket testing can be used as a possible measure for the control of the SARS-Cov-2 transmission in Lusaka, Zambia, but it would come at a high cost."""
summarizer(article)
``` |
sana-ngu/t5-large-finetuned-summarize-scientific-articles | sana-ngu | 2023-05-12T20:05:19Z | 94 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-12T19:21:03Z | # How to use
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="sana-ngu/t5-large-finetuned-summarize-scientific-articles")
article = "The novel Coronavirus disease (COVID-19), caused by the severe acute respiratory syndrome coronavirus—2 (SARS-CoV-2), in Africa is characterised by a more substantial proportion of asymptomatic (or mildly symptomatic) individuals thought to be playing a role in the spread of the infection. The exact proportion and degree of infectiousness of asymptomatic individuals remains unclear. Studies however indicate that their management is crucial for control of SARS-CoV-2 transmission.
We developed a simplified deterministic susceptible-exposed-infectious-removed (SEIR) mathematical model to assess the effect of active isolation of SARS-CoV-2 infected but asymptomatic individuals through blanket testing for control of the outbreak in Lusaka Province of Zambia. Here we modelled two scenarios; (1) assuming asymptomatic individuals comprised 70% of all COVID-19 cases and (2) asymptomatic individuals comprised only 50% of the cases. For contrast, the model was assessed first under the assumption that asymptomatic individuals are equally as infectious as symptomatic individuals and then secondly, and more likely, assuming asymptomatic individuals are only half as infectious as symptomatic individuals.
For the model assuming 70% asymptomatic cases, a minimum sustained daily blanket testing rate of ≥ 7911 tests/100000 population was sufficient to control the outbreak if asymptomatic individuals are only half as infectious while if equal infectiousness was assumed then a testing rate of ≥ 10028 tests/ 100000 population would be required. For 50% asymptomatic, minimum blanket testing rates of ≥ 4540 tests/ 100000 population was sufficient to control the outbreak at both assumed levels of infectiousness for asymptomatic individuals relative to symptomatic individuals.
Discussion and conclusion
Our model predicts that active isolation of COVID-19 cases, including asymptomatic individuals, through blanket testing can be used as a possible measure for the control of the SARS-Cov-2 transmission in Lusaka, Zambia, but it would come at a high cost."""
summarizer(article)
``` |
sana-ngu/t5-small-finetuned-summarize-scientific-articles | sana-ngu | 2023-05-12T20:02:49Z | 10 | 2 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-12T19:14:02Z | # How to use
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="sana-ngu/t5-small-finetuned-summarize-scientific-articles")
article = "The novel Coronavirus disease (COVID-19), caused by the severe acute respiratory syndrome coronavirus—2 (SARS-CoV-2), in Africa is characterised by a more substantial proportion of asymptomatic (or mildly symptomatic) individuals thought to be playing a role in the spread of the infection. The exact proportion and degree of infectiousness of asymptomatic individuals remains unclear. Studies however indicate that their management is crucial for control of SARS-CoV-2 transmission.
We developed a simplified deterministic susceptible-exposed-infectious-removed (SEIR) mathematical model to assess the effect of active isolation of SARS-CoV-2 infected but asymptomatic individuals through blanket testing for control of the outbreak in Lusaka Province of Zambia. Here we modelled two scenarios; (1) assuming asymptomatic individuals comprised 70% of all COVID-19 cases and (2) asymptomatic individuals comprised only 50% of the cases. For contrast, the model was assessed first under the assumption that asymptomatic individuals are equally as infectious as symptomatic individuals and then secondly, and more likely, assuming asymptomatic individuals are only half as infectious as symptomatic individuals.
For the model assuming 70% asymptomatic cases, a minimum sustained daily blanket testing rate of ≥ 7911 tests/100000 population was sufficient to control the outbreak if asymptomatic individuals are only half as infectious while if equal infectiousness was assumed then a testing rate of ≥ 10028 tests/ 100000 population would be required. For 50% asymptomatic, minimum blanket testing rates of ≥ 4540 tests/ 100000 population was sufficient to control the outbreak at both assumed levels of infectiousness for asymptomatic individuals relative to symptomatic individuals.
Discussion and conclusion
Our model predicts that active isolation of COVID-19 cases, including asymptomatic individuals, through blanket testing can be used as a possible measure for the control of the SARS-Cov-2 transmission in Lusaka, Zambia, but it would come at a high cost."""
summarizer(article)
``` |
hyoni/kogpt2-base-v2-finetuned-klue-ner | hyoni | 2023-05-12T19:55:52Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"token-classification",
"generated_from_trainer",
"dataset:klue",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-05-12T19:42:52Z | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model-index:
- name: kogpt2-base-v2-finetuned-klue-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: klue
type: klue
config: ner
split: validation
args: ner
metrics:
- name: F1
type: f1
value: 0.37298165525403665
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kogpt2-base-v2-finetuned-klue-ner
This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4076
- F1: 0.3730
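The card does not include a usage snippet; a minimal inference sketch with the `transformers` token-classification pipeline (the Korean example sentence is illustrative and not from the KLUE dataset) could look like this:
```python
from transformers import pipeline

# Assumes the tokenizer and label mapping were pushed to the Hub with the model.
ner = pipeline("token-classification", model="hyoni/kogpt2-base-v2-finetuned-klue-ner")
print(ner("이순신은 조선 중기의 무신이다."))
```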
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6084 | 1.0 | 876 | 0.5353 | 0.2118 |
| 0.3911 | 2.0 | 1752 | 0.4691 | 0.3041 |
| 0.2855 | 3.0 | 2628 | 0.4076 | 0.3730 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Multi-Domain-Expert-Learning/merge-arxiv-freelaw-pubmed | Multi-Domain-Expert-Learning | 2023-05-12T19:13:01Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"MDEL",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-12T19:10:40Z |
---
tags:
- MDEL
---
# Model Name
Multi-Domain-Expert-Layers/merge-arxiv-freelaw-pubmed
# Model Description
This model was generated by averaging the weights of the following models
- [Multi-Domain-Expert-Layers/expert-pubmed_central](https://huggingface.co/Multi-Domain-Expert-Layers/expert-pubmed_central)
- [Multi-Domain-Expert-Layers/expert-freelaw](https://huggingface.co/Multi-Domain-Expert-Layers/expert-freelaw)
- [Multi-Domain-Expert-Layers/expert-arxiv](https://huggingface.co/Multi-Domain-Expert-Layers/expert-arxiv)
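A rough sketch of what uniform weight averaging of this kind looks like with `transformers` (an illustration, not the exact merging script used to produce this checkpoint):
```python
import torch
from transformers import AutoModelForCausalLM

expert_names = [
    "Multi-Domain-Expert-Layers/expert-pubmed_central",
    "Multi-Domain-Expert-Layers/expert-freelaw",
    "Multi-Domain-Expert-Layers/expert-arxiv",
]
state_dicts = [AutoModelForCausalLM.from_pretrained(n).state_dict() for n in expert_names]

# Uniform average of every parameter tensor across the expert checkpoints.
merged_state = {
    key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    for key in state_dicts[0]
}

merged = AutoModelForCausalLM.from_pretrained(expert_names[0])
merged.load_state_dict(merged_state)
merged.save_pretrained("merge-arxiv-freelaw-pubmed")
```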
|
guillaumekln/faster-whisper-large-v1 | guillaumekln | 2023-05-12T18:58:39Z | 327 | 3 | ctranslate2 | [
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] | automatic-speech-recognition | 2023-03-28T12:33:41Z | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper large-v1 model for CTranslate2
This repository contains the conversion of [openai/whisper-large](https://huggingface.co/openai/whisper-large) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("large-v1")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-large --output_dir faster-whisper-large-v1 \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
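For example, to load the same converted weights with 8-bit computation instead (an illustrative option; the supported types depend on your hardware):
```python
from faster_whisper import WhisperModel

# Same weights, but computation runs in INT8 instead of FP16.
model = WhisperModel("large-v1", compute_type="int8")
```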
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-large).**
|
guillaumekln/faster-whisper-large-v2 | guillaumekln | 2023-05-12T18:58:25Z | 161,400 | 193 | ctranslate2 | [
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] | automatic-speech-recognition | 2023-03-23T10:36:06Z | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper large-v2 model for CTranslate2
This repository contains the conversion of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("large-v2")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-large-v2 --output_dir faster-whisper-large-v2 \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-large-v2).**
|
guillaumekln/faster-whisper-medium | guillaumekln | 2023-05-12T18:58:10Z | 15,872 | 32 | ctranslate2 | [
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] | automatic-speech-recognition | 2023-03-23T10:28:55Z | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper medium model for CTranslate2
This repository contains the conversion of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("medium")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-medium --output_dir faster-whisper-medium \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-medium).**
|
guillaumekln/faster-whisper-small.en | guillaumekln | 2023-05-12T18:57:44Z | 110 | 1 | ctranslate2 | [
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"license:mit",
"region:us"
] | automatic-speech-recognition | 2023-03-23T10:20:17Z | ---
language:
- en
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper small.en model for CTranslate2
This repository contains the conversion of [openai/whisper-small.en](https://huggingface.co/openai/whisper-small.en) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("small.en")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-small.en --output_dir faster-whisper-small.en \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-small.en).**
|
seviladiguzel/roberta-base-finetuned-squad_roberta | seviladiguzel | 2023-05-12T18:57:33Z | 18 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-12T17:52:56Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: roberta-base-finetuned-squad_roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-squad_roberta
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8537
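The card has no usage snippet; a minimal sketch with the `transformers` question-answering pipeline (the question and context below are made-up examples) could look like this:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="seviladiguzel/roberta-base-finetuned-squad_roberta")
result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The roberta-base model was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```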
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8917 | 1.0 | 5536 | 0.8669 |
| 0.6739 | 2.0 | 11072 | 0.8537 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
guillaumekln/faster-whisper-base | guillaumekln | 2023-05-12T18:57:32Z | 110,649 | 9 | ctranslate2 | [
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] | automatic-speech-recognition | 2023-03-23T10:19:37Z | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper base model for CTranslate2
This repository contains the conversion of [openai/whisper-base](https://huggingface.co/openai/whisper-base) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("base")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-base --output_dir faster-whisper-base \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-base).**
|
guillaumekln/faster-whisper-tiny | guillaumekln | 2023-05-12T18:57:08Z | 2,644 | 5 | ctranslate2 | [
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"de",
"es",
"ru",
"ko",
"fr",
"ja",
"pt",
"tr",
"pl",
"ca",
"nl",
"ar",
"sv",
"it",
"id",
"hi",
"fi",
"vi",
"he",
"uk",
"el",
"ms",
"cs",
"ro",
"da",
"hu",
"ta",
"no",
"th",
"ur",
"hr",
"bg",
"lt",
"la",
"mi",
"ml",
"cy",
"sk",
"te",
"fa",
"lv",
"bn",
"sr",
"az",
"sl",
"kn",
"et",
"mk",
"br",
"eu",
"is",
"hy",
"ne",
"mn",
"bs",
"kk",
"sq",
"sw",
"gl",
"mr",
"pa",
"si",
"km",
"sn",
"yo",
"so",
"af",
"oc",
"ka",
"be",
"tg",
"sd",
"gu",
"am",
"yi",
"lo",
"uz",
"fo",
"ht",
"ps",
"tk",
"nn",
"mt",
"sa",
"lb",
"my",
"bo",
"tl",
"mg",
"as",
"tt",
"haw",
"ln",
"ha",
"ba",
"jw",
"su",
"license:mit",
"region:us"
] | automatic-speech-recognition | 2023-03-23T10:14:28Z | ---
language:
- en
- zh
- de
- es
- ru
- ko
- fr
- ja
- pt
- tr
- pl
- ca
- nl
- ar
- sv
- it
- id
- hi
- fi
- vi
- he
- uk
- el
- ms
- cs
- ro
- da
- hu
- ta
- 'no'
- th
- ur
- hr
- bg
- lt
- la
- mi
- ml
- cy
- sk
- te
- fa
- lv
- bn
- sr
- az
- sl
- kn
- et
- mk
- br
- eu
- is
- hy
- ne
- mn
- bs
- kk
- sq
- sw
- gl
- mr
- pa
- si
- km
- sn
- yo
- so
- af
- oc
- ka
- be
- tg
- sd
- gu
- am
- yi
- lo
- uz
- fo
- ht
- ps
- tk
- nn
- mt
- sa
- lb
- my
- bo
- tl
- mg
- as
- tt
- haw
- ln
- ha
- ba
- jw
- su
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper tiny model for CTranslate2
This repository contains the conversion of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("tiny")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-tiny --output_dir faster-whisper-tiny \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-tiny).**
|
guillaumekln/faster-whisper-tiny.en | guillaumekln | 2023-05-12T18:56:53Z | 378 | 2 | ctranslate2 | [
"ctranslate2",
"audio",
"automatic-speech-recognition",
"en",
"license:mit",
"region:us"
] | automatic-speech-recognition | 2023-03-23T10:17:41Z | ---
language:
- en
tags:
- audio
- automatic-speech-recognition
license: mit
library_name: ctranslate2
---
# Whisper tiny.en model for CTranslate2
This repository contains the conversion of [openai/whisper-tiny.en](https://huggingface.co/openai/whisper-tiny.en) to the [CTranslate2](https://github.com/OpenNMT/CTranslate2) model format.
This model can be used in CTranslate2 or projects based on CTranslate2 such as [faster-whisper](https://github.com/guillaumekln/faster-whisper).
## Example
```python
from faster_whisper import WhisperModel
model = WhisperModel("tiny.en")
segments, info = model.transcribe("audio.mp3")
for segment in segments:
print("[%.2fs -> %.2fs] %s" % (segment.start, segment.end, segment.text))
```
## Conversion details
The original model was converted with the following command:
```
ct2-transformers-converter --model openai/whisper-tiny.en --output_dir faster-whisper-tiny.en \
--copy_files tokenizer.json --quantization float16
```
Note that the model weights are saved in FP16. This type can be changed when the model is loaded using the [`compute_type` option in CTranslate2](https://opennmt.net/CTranslate2/quantization.html).
## More information
**For more information about the original model, see its [model card](https://huggingface.co/openai/whisper-tiny.en).**
|
Ktang2k/ppo-Huggy-1 | Ktang2k | 2023-05-12T18:54:20Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-05-12T18:54:13Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Find your model_id: Ktang2k/ppo-Huggy-1
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
moisesmota/donut-base-sroie | moisesmota | 2023-05-12T18:52:23Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-04-28T13:38:18Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
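A minimal inference sketch (not part of the original card); the image path and the task start token are assumptions that depend on how the model was trained:
```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("moisesmota/donut-base-sroie")
model = VisionEncoderDecoderModel.from_pretrained("moisesmota/donut-base-sroie")

image = Image.open("receipt.jpg").convert("RGB")  # placeholder path
pixel_values = processor(image, return_tensors="pt").pixel_values

# The task start token is an assumption; SROIE fine-tunes often define a custom prompt token.
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```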
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.2
|
jainr3/sd-diffusiondb-pixelart-model-lora | jainr3 | 2023-05-12T18:51:50Z | 3 | 2 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:stabilityai/stable-diffusion-2-1",
"base_model:adapter:stabilityai/stable-diffusion-2-1",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-05-12T18:39:45Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-1
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA text2image fine-tuning - jainr3/sd-diffusiondb-pixelart-model-lora
These are LoRA adaptation weights for stabilityai/stable-diffusion-2-1. The weights were fine-tuned on the jainr3/diffusiondb-pixelart dataset. You can find some example images below.




|
Abdeldjalil21/djalil-base-sentiment-model-10k-samples | Abdeldjalil21 | 2023-05-12T18:29:32Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-09T23:59:57Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: djalil-base-sentiment-model-10k-samples
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# djalil-base-sentiment-model-10k-samples
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4204
- Accuracy: 0.827
- F1: 0.8171
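A minimal usage sketch (the example sentence is illustrative; the label names depend on the fine-tuning setup):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Abdeldjalil21/djalil-base-sentiment-model-10k-samples")
print(classifier("I really enjoyed this product, it works great."))
```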
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
stillerman/MDEL-github-arxiv | stillerman | 2023-05-12T18:27:15Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"MDEL",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-12T18:24:42Z |
---
tags:
- MDEL
---
# Model Name
stillerman/MDEL-github-arxiv
# Model Description
This model was generated by averaging the weights of the following models
- [Multi-Domain-Expert-Layers/expert-github](https://huggingface.co/Multi-Domain-Expert-Layers/expert-github)
- [Multi-Domain-Expert-Layers/expert-arxiv](https://huggingface.co/Multi-Domain-Expert-Layers/expert-arxiv)
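A minimal sketch for loading the merged checkpoint with `transformers` (the prompt and generation settings are illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stillerman/MDEL-github-arxiv")
model = AutoModelForCausalLM.from_pretrained("stillerman/MDEL-github-arxiv")

inputs = tokenizer("def quicksort(arr):", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```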
|
Guilherme34/Jennifer-lora-7bv3 | Guilherme34 | 2023-05-12T18:25:36Z | 0 | 1 | null | [
"tensorboard",
"pt",
"region:us"
] | null | 2023-05-12T17:32:20Z | ---
language:
- pt
---
This is the third version of a fine-tuned artificial intelligence that speaks Brazilian Portuguese. It was trained on top of decapoda's LLaMA 7B with zetavg's LLaMA-LoRA Tuner, using the Cabrita LoRA dataset and Alpaca Cleaned.
Have fun! |
stillerman/MDEL-pubmed-feelaw-github-arxiv | stillerman | 2023-05-12T18:18:39Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"MDEL",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-12T18:11:40Z |
---
tags:
- MDEL
---
# Model Name
stillerman/MDEL-pubmed-feelaw-github-arxiv
# Model Description
This model was generated by averaging the weights of the following models
- [Multi-Domain-Expert-Layers/expert-pubmed_central](https://huggingface.co/Multi-Domain-Expert-Layers/expert-pubmed_central)
- [Multi-Domain-Expert-Layers/expert-freelaw](https://huggingface.co/Multi-Domain-Expert-Layers/expert-freelaw)
- [Multi-Domain-Expert-Layers/expert-github](https://huggingface.co/Multi-Domain-Expert-Layers/expert-github)
- [Multi-Domain-Expert-Layers/expert-arxiv](https://huggingface.co/Multi-Domain-Expert-Layers/expert-arxiv)
|
aloutzidis/pegasus-large-cnn-quantized | aloutzidis | 2023-05-12T18:08:53Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"en",
"dataset:cnn_dailymail",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-11T20:55:16Z | ---
tags:
- generated_from_trainer
model-index:
- name: pegasus-large-cnn-quantized
results: []
license: apache-2.0
datasets:
- cnn_dailymail
metrics:
- bleu
- rouge
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-large-cnn-quantized
This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on CNN/DailyMail dataset.

## Model description
Online news reading has become one of the most popular ways to consume the latest news. News aggregation websites such as Google News and Yahoo News have made it easy for users to find the latest news and provide thousands of news stories from hundreds of news publishers. As people have limited time, reading all news articles is not feasible. The success of zero-shot and few-shot prompting with models like GPT-3 has led to a paradigm shift in NLP. So, in the era of ChatGPT, we conducted experiments using various large language models (LLMs) such as BART and PEGASUS to improve the quality and coherence of the generated news summaries.
This is a quantized model finetuned on the CNN/DailyMail dataset.
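A minimal usage sketch, assuming the quantized checkpoint still loads through the standard `transformers` API (the input text is a placeholder):
```python
from transformers import pipeline

# Assumes the quantized weights load like a regular seq2seq checkpoint.
summarizer = pipeline("summarization", model="aloutzidis/pegasus-large-cnn-quantized")
text = "Your news article goes here ..."  # placeholder
print(summarizer(text, max_length=128, min_length=32, do_sample=False)[0]["summary_text"])
```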
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 123
- optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Framework versions
- Transformers 4.29.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3 |
madhav-devrev/flan-t5-small-work-filters | madhav-devrev | 2023-05-12T18:03:58Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-10T17:53:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: flan-t5-small-work-filters
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-small-work-filters
This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0346
- eval_rouge1: 46.2714
- eval_rouge2: 36.488
- eval_rougeL: 45.9709
- eval_rougeLsum: 45.9674
- eval_gen_len: 18.9753
- eval_runtime: 44.1148
- eval_samples_per_second: 23.87
- eval_steps_per_second: 2.992
- epoch: 5.0
- step: 660
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
millionhz/segformer-b0-finetuned-flame | millionhz | 2023-05-12T17:55:49Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"segformer",
"vision",
"image-segmentation",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-segmentation | 2023-05-12T14:59:06Z | ---
license: mit
tags:
- vision
- image-segmentation
---
# SegFormer (b0-sized) model fine-tuned on FLAME
The model was trained for a deep learning project titled [Forest Fire Detection](https://github.com/millionhz/forest-fire-detection).
## Model Description
The model is intended to be used for fire detection through image segmentation.
The provided pretrained model was finetuned on the [FLAME](https://dx.doi.org/10.21227/qad6-r683) dataset for 3 epochs with a learning rate of 1e-3 and achieved an IoU of **0.745** on the test examples.
# How to use
Here is how to use this model to segment an image:
```python
from transformers import SegformerImageProcessor, SegformerForSemanticSegmentation
from PIL import Image
import requests
processor = SegformerImageProcessor.from_pretrained("millionhz/segformer-b0-finetuned-flame")
model = SegformerForSemanticSegmentation.from_pretrained("millionhz/segformer-b0-finetuned-flame")
url = <add url here>
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
```
## License
The license for this model can be found [here](https://github.com/NVlabs/SegFormer/blob/master/LICENSE).
|
eason0203/swin-tiny-patch4-window7-224-arty-bg-classifier | eason0203 | 2023-05-12T17:30:14Z | 187 | 0 | transformers | [
"transformers",
"pytorch",
"swin",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-05-12T07:42:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: swin-tiny-patch4-window7-224-arty-bg-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-arty-bg-classifier
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0086
- eval_accuracy: 0.9975
- eval_runtime: 102.2105
- eval_samples_per_second: 127.639
- eval_steps_per_second: 1.331
- epoch: 1.84
- step: 250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 96
- eval_batch_size: 96
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 384
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Halcyonindo/an1kulora | Halcyonindo | 2023-05-12T17:27:24Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-12T17:26:20Z | ---
license: creativeml-openrail-m
---
|
hemagamal/mdeberta_Quran_qa | hemagamal | 2023-05-12T17:24:14Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-12T17:07:25Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: mdeberta_Quran_qa
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta_Quran_qa
This model is a fine-tuned version of [timpal0l/mdeberta-v3-base-squad2](https://huggingface.co/timpal0l/mdeberta-v3-base-squad2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.6248
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 89 | 2.2395 |
| No log | 2.0 | 178 | 2.3282 |
| No log | 3.0 | 267 | 2.4226 |
| No log | 4.0 | 356 | 2.6551 |
| No log | 5.0 | 445 | 2.9332 |
| 1.0317 | 6.0 | 534 | 3.2124 |
| 1.0317 | 7.0 | 623 | 3.2915 |
| 1.0317 | 8.0 | 712 | 3.5401 |
| 1.0317 | 9.0 | 801 | 3.6132 |
| 1.0317 | 10.0 | 890 | 3.6248 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AustinCarthy/Base_10Kphish_benignFall_IL_10Krealphish | AustinCarthy | 2023-05-12T17:22:27Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-12T16:10:35Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Base_10Kphish_benignFall_IL_10Krealphish_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Base_10Kphish_benignFall_IL_10Krealphish_0.75
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0551
- Accuracy: 0.9938
- F1: 0.9303
- Precision: 0.9982
- Recall: 0.871
- Roc Auc Score: 0.9355
- Tpr At Fpr 0.01: 0.8794
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0079 | 1.0 | 6563 | 0.0209 | 0.9956 | 0.9525 | 0.9878 | 0.9196 | 0.9595 | 0.862 |
| 0.003 | 2.0 | 13126 | 0.0338 | 0.9949 | 0.9438 | 0.9940 | 0.8984 | 0.9491 | 0.8796 |
| 0.0024 | 3.0 | 19689 | 0.0410 | 0.9948 | 0.9427 | 0.9949 | 0.8958 | 0.9478 | 0.8648 |
| 0.0014 | 4.0 | 26252 | 0.0493 | 0.9941 | 0.9342 | 0.9982 | 0.878 | 0.9390 | 0.881 |
| 0.0003 | 5.0 | 32815 | 0.0551 | 0.9938 | 0.9303 | 0.9982 | 0.871 | 0.9355 | 0.8794 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
|
berluk/resnet50-fish-rec | berluk | 2023-05-12T17:18:31Z | 5 | 0 | keras | [
"keras",
"tf-keras",
"image-classification",
"region:us"
] | image-classification | 2023-05-12T13:15:58Z | ---
library_name: keras
pipeline_tag: image-classification
---
## Model description
More information needed
## Intended uses & limitations
More information needed
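In the absence of further documentation, a minimal sketch for reloading the checkpoint from the Hub (assuming the repository was pushed with the Keras mixin; the input preprocessing expected by the network is not documented here):
```python
from huggingface_hub import from_pretrained_keras

# Assumes the repository was pushed with push_to_hub_keras / the Keras mixin.
model = from_pretrained_keras("berluk/resnet50-fish-rec")
model.summary()
```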
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 0.001 |
| decay | 0.0 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 | |
bohanhou14/abortion_news_model | bohanhou14 | 2023-05-12T17:09:43Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-12T07:20:21Z | ---
license: apache-2.0
language:
- en
--- |
asenella/mmnist_MoPoEconfig2_seed_1_ratio_0_c | asenella | 2023-05-12T17:01:55Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-12T17:01:47Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
aprilzoo/distilbert-base-uncased-finetuned-imdb | aprilzoo | 2023-05-12T16:53:13Z | 61 | 0 | transformers | [
"transformers",
"tf",
"distilbert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-05-06T18:51:57Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: aprilzoo/distilbert-base-uncased-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# aprilzoo/distilbert-base-uncased-finetuned-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 4.2701
- Validation Loss: 2.6432
- Epoch: 0
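A minimal usage sketch (the repository holds TensorFlow weights, so the pipeline is pinned to the TF framework here; the example sentence is illustrative):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="aprilzoo/distilbert-base-uncased-finetuned-imdb", framework="tf")
print(fill_mask("This movie was absolutely [MASK]."))
```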
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -997, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 4.2701 | 2.6432 | 0 |
### Framework versions
- Transformers 4.29.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
spectraldecomp/ppo-LunarLander-v2 | spectraldecomp | 2023-05-12T16:44:30Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-12T16:44:08Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 260.29 +/- 12.94
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
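A sketch of what the loading code usually looks like for Stable-Baselines3 checkpoints on the Hub (the checkpoint filename inside the repository is an assumption):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The checkpoint filename is an assumption; check the repository's file list.
checkpoint = load_from_hub("spectraldecomp/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```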
|
dian34323/melatijkt48 | dian34323 | 2023-05-12T16:32:24Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-12T16:31:15Z | ---
license: creativeml-openrail-m
---
|
muhammadravi251001/fine-tuned-DatasetQAS-TYDI-QA-ID-with-xlm-roberta-large-with-ITTL-without-freeze-LR-1e-05 | muhammadravi251001 | 2023-05-12T16:25:57Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-05T04:56:01Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: fine-tuned-DatasetQAS-TYDI-QA-ID-with-xlm-roberta-large-with-ITTL-without-freeze-LR-1e-05
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-DatasetQAS-TYDI-QA-ID-with-xlm-roberta-large-with-ITTL-without-freeze-LR-1e-05
This model is a fine-tuned version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9402
- Exact Match: 69.3662
- F1: 82.0036
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Exact Match | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:-------:|
| 6.2837 | 0.5 | 19 | 3.6986 | 8.4507 | 17.7536 |
| 6.2837 | 0.99 | 38 | 2.5899 | 18.4859 | 29.7766 |
| 3.6833 | 1.5 | 57 | 1.7044 | 42.6056 | 56.8157 |
| 3.6833 | 1.99 | 76 | 1.2711 | 53.3451 | 70.2979 |
| 3.6833 | 2.5 | 95 | 1.1063 | 62.3239 | 75.7765 |
| 1.5024 | 2.99 | 114 | 1.0275 | 64.2606 | 78.0460 |
| 1.5024 | 3.5 | 133 | 0.9941 | 65.8451 | 79.1313 |
| 1.0028 | 3.99 | 152 | 0.9642 | 67.4296 | 80.6196 |
| 1.0028 | 4.5 | 171 | 0.9682 | 69.0141 | 82.4975 |
| 1.0028 | 4.99 | 190 | 0.9455 | 67.9577 | 81.0386 |
| 0.7765 | 5.5 | 209 | 0.9802 | 67.7817 | 81.0844 |
| 0.7765 | 5.99 | 228 | 0.9402 | 69.3662 | 82.0036 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.2.0
- Tokenizers 0.13.2
|
Amalq/roberta-large-schizophrenia-v12 | Amalq | 2023-05-12T16:18:11Z | 163 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-05-12T15:54:41Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-large-schizophrenia-v12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-schizophrenia-v12
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5260
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
grenmon/pegasus-x-large-finetuned-summarization | grenmon | 2023-05-12T16:06:30Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"pegasus_x",
"text2text-generation",
"summarization",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-05-12T15:25:46Z | ---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: pegasus-x-large-finetuned-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# pegasus-x-large-finetuned-summarization
This model is a fine-tuned version of [google/pegasus-x-large](https://huggingface.co/google/pegasus-x-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9503
- Rouge1: 54.656
- Rouge2: 33.2773
- Rougel: 44.7797
- Rougelsum: 51.2888
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.1821 | 1.0 | 308 | 0.9389 | 49.6848 | 29.0753 | 40.9828 | 47.1619 |
| 0.8932 | 2.0 | 616 | 0.8955 | 49.6176 | 28.8588 | 41.7149 | 47.3719 |
| 0.7433 | 3.0 | 924 | 0.9202 | 54.0016 | 31.8254 | 43.4441 | 50.9312 |
| 0.6495 | 4.0 | 1232 | 0.9321 | 52.6912 | 31.6843 | 43.8896 | 49.8726 |
| 0.587 | 5.0 | 1540 | 0.9503 | 54.656 | 33.2773 | 44.7797 | 51.2888 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Mikepool117/q-FrozenLake-v1-4x4-noSlippery | Mikepool117 | 2023-05-12T16:00:55Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-12T16:00:23Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
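## Usage
A hedged usage sketch (`load_from_hub` refers to the helper defined in the course notebook, and the pickle filename is an assumption):
```python
model = load_from_hub(repo_id="Mikepool117/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```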
|
henripett/ppo-Huggy | henripett | 2023-05-12T15:59:31Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-05-12T15:12:33Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Find your model_id: henripett/ppo-Huggy
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
saikatkumardey/my_awesome_qa_model | saikatkumardey | 2023-05-12T15:46:32Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-12T15:12:29Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 250 | 2.5592 |
| 2.909 | 2.0 | 500 | 2.0550 |
| 2.909 | 3.0 | 750 | 1.9972 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
casellimarco/q-FrozenLake-v1-4x4-noSlippery-gamma_1 | casellimarco | 2023-05-12T15:33:36Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-12T15:31:22Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery-gamma_1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 0.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
With `gamma=1` the Q table is
```
array([[1., 1., 1., 1.],
[1., 0., 1., 1.],
[1., 1., 1., 1.],
[1., 0., 1., 1.],
[1., 1., 0., 1.],
[0., 0., 0., 0.],
[0., 1., 0., 1.],
[0., 0., 0., 0.],
[1., 0., 1., 1.],
[1., 1., 1., 0.],
[1., 1., 0., 1.],
[0., 0., 0., 0.],
[0., 0., 0., 0.],
[0., 1., 1., 1.],
[1., 1., 1., 1.],
[0., 0., 0., 0.]])
```
## Usage
```python
model = load_from_hub(repo_id="casellimarco/q-FrozenLake-v1-4x4-noSlippery-gamma_1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
itsmeboris/jobbert-base-cased-ner | itsmeboris | 2023-05-12T15:21:11Z | 99 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-05-12T14:43:12Z | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: jobbert-base-cased-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jobbert-base-cased-ner
This model is a fine-tuned version of [jjzha/jobbert-base-cased](https://huggingface.co/jjzha/jobbert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3789
- Job Title precision: 0.7916
- Job Title recall: 0.8721
- Job Title f1: 0.8299
- Loc precision: 0.8572
- Loc recall: 0.9506
- Loc f1: 0.9015
- Org precision: 0.6727
- Org recall: 0.7458
- Org f1: 0.7074
- Misc precision: 0.6893
- Misc recall: 0.6587
- Misc f1: 0.6736
- Precision: 0.7772
- Recall: 0.8551
- F1: 0.8143
- Accuracy: 0.8680
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Job Title precision | Job Title recall | Job Title f1 | Loc precision | Loc recall | Loc f1 | Org precision | Org recall | Org f1 | Misc precision | Misc recall | Misc f1 | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:-------------:|:----------:|:------:|:-------------:|:----------:|:------:|:--------------:|:-----------:|:-------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 308 | 0.3896 | 0.7827 | 0.8394 | 0.8101 | 0.8813 | 0.8856 | 0.8835 | 0.6865 | 0.7151 | 0.7005 | 0.6730 | 0.6041 | 0.6367 | 0.7800 | 0.8148 | 0.7970 | 0.8577 |
| 0.4672 | 2.0 | 616 | 0.3789 | 0.7916 | 0.8721 | 0.8299 | 0.8572 | 0.9506 | 0.9015 | 0.6727 | 0.7458 | 0.7074 | 0.6893 | 0.6587 | 0.6736 | 0.7772 | 0.8551 | 0.8143 | 0.8680 |
| 0.4672 | 3.0 | 924 | 0.4067 | 0.7800 | 0.8876 | 0.8304 | 0.8560 | 0.9443 | 0.8980 | 0.6928 | 0.7026 | 0.6977 | 0.6006 | 0.7440 | 0.6646 | 0.7730 | 0.8549 | 0.8119 | 0.8651 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.7.1+cu110
- Datasets 2.12.0
- Tokenizers 0.13.2
|
seviladiguzel/distilbert-base-uncased-finetuned-squad_v2 | seviladiguzel | 2023-05-12T15:21:02Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-12T13:20:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad_v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad_v2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1316
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2238 | 1.0 | 5533 | 1.1654 |
| 0.9823 | 2.0 | 11066 | 1.1316 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
rickysk/videomae-base-finetuned-ucf101-subset | rickysk | 2023-05-12T15:19:14Z | 59 | 0 | transformers | [
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2023-05-12T05:24:04Z | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3230
- Accuracy: 0.8968
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 1200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1382 | 0.06 | 75 | 2.1056 | 0.2571 |
| 0.8185 | 1.06 | 150 | 0.6270 | 0.8 |
| 0.5221 | 2.06 | 225 | 0.4341 | 0.8 |
| 0.1069 | 3.06 | 300 | 0.4390 | 0.8714 |
| 0.0195 | 4.06 | 375 | 0.2938 | 0.8571 |
| 0.0097 | 5.06 | 450 | 0.2114 | 0.9 |
| 0.0076 | 6.06 | 525 | 0.1509 | 0.9429 |
| 0.1686 | 7.06 | 600 | 0.2527 | 0.9571 |
| 0.0679 | 8.06 | 675 | 0.0615 | 0.9714 |
| 0.0024 | 9.06 | 750 | 0.1589 | 0.9429 |
| 0.1946 | 10.06 | 825 | 0.4014 | 0.9 |
| 0.154 | 11.06 | 900 | 0.1862 | 0.9429 |
| 0.0021 | 12.06 | 975 | 0.0683 | 0.9857 |
| 0.0019 | 13.06 | 1050 | 0.0541 | 0.9857 |
| 0.002 | 14.06 | 1125 | 0.0473 | 0.9857 |
| 0.0018 | 15.06 | 1200 | 0.0475 | 0.9857 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
WALIDALI/bekiorangemixs | WALIDALI | 2023-05-12T15:16:09Z | 0 | 0 | null | [
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-05-12T15:03:42Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### bekiOrangeMixs Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:
|
rishabhjain16/whisper_medium_en_to_myst_pf_ot100 | rishabhjain16 | 2023-05-12T15:05:57Z | 10 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-05-10T09:45:26Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: openai/whisper-medium.en
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: rishabhjain16/infer_myst
type: rishabhjain16/infer_myst
config: en
split: test
metrics:
- type: wer
value: 12.3
name: WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: rishabhjain16/infer_pfs
type: rishabhjain16/infer_pfs
config: en
split: test
metrics:
- type: wer
value: 3.28
name: WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: rishabhjain16/infer_cmu
type: rishabhjain16/infer_cmu
config: en
split: test
metrics:
- type: wer
value: 9.53
name: WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: rishabhjain16/libritts_dev_clean
type: rishabhjain16/libritts_dev_clean
config: en
split: test
metrics:
- type: wer
value: 5.01
name: WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: rishabhjain16/infer_pf_swedish
type: rishabhjain16/infer_pf_swedish
config: en
split: test
metrics:
- type: wer
value: 8.94
name: WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: rishabhjain16/infer_pf_german
type: rishabhjain16/infer_pf_german
config: en
split: test
metrics:
- type: wer
value: 34.78
name: WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: rishabhjain16/infer_pf_italian
type: rishabhjain16/infer_pf_italian
config: en
split: test
metrics:
- type: wer
value: 4.42
name: WER
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: rishabhjain16/infer_so_chinese
type: rishabhjain16/infer_so_chinese
config: en
split: test
metrics:
- type: wer
value: 14.87
name: WER
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-medium.en
This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4158
- Wer: 10.8712
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.6148 | 0.12 | 500 | 0.3107 | 12.7838 |
| 0.1877 | 1.09 | 1000 | 0.2892 | 11.2910 |
| 0.0697 | 2.05 | 1500 | 0.3146 | 10.7857 |
| 0.0748 | 3.02 | 2000 | 0.3162 | 11.5254 |
| 0.0308 | 3.14 | 2500 | 0.3450 | 11.1111 |
| 0.0192 | 4.11 | 3000 | 0.3720 | 10.9101 |
| 0.0046 | 5.07 | 3500 | 0.4155 | 11.2344 |
| 0.0096 | 6.03 | 4000 | 0.4158 | 10.8712 |
### Framework versions
- Transformers 4.29.0
- Pytorch 1.14.0a0+44dac51
- Datasets 2.12.0
- Tokenizers 0.13.3
|
tp-runport/bert-base-uncased-brands | tp-runport | 2023-05-12T14:59:27Z | 104 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-04-19T21:14:49Z | ---
{}
---
This is a BERT-based NER model trained to detect PERSON and BRAND entities in text.
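A minimal usage sketch (assuming the standard `transformers` token-classification pipeline; the exact label strings come from the model's configuration and may differ):
```python
from transformers import pipeline
ner = pipeline("token-classification", model="tp-runport/bert-base-uncased-brands", aggregation_strategy="simple")
print(ner("Jane Doe bought a pair of Nike running shoes yesterday."))
# Each entry has an entity_group (e.g. PERSON or BRAND), a confidence score, and character offsets.
```
|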
alistvt/single-docalog | alistvt | 2023-05-12T14:55:33Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:doc2dial",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-11T09:17:03Z | ---
tags:
- generated_from_trainer
datasets:
- doc2dial
model-index:
- name: single-docalog
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# single-docalog
This model is a fine-tuned version of [alistvt/single-docalog](https://huggingface.co/alistvt/single-docalog) on the doc2dial dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 30
- total_train_batch_size: 240
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
|
grenmon/bart-large-finetuned-summarization | grenmon | 2023-05-12T14:49:29Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | 2023-05-12T14:28:31Z | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-finetuned-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-finetuned-summarization
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1841
- Rouge1: 32.6763
- Rouge2: 23.1598
- Rougel: 31.2322
- Rougelsum: 32.278
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.7048 | 1.0 | 308 | 1.1916 | 32.0296 | 21.6931 | 30.2623 | 31.1959 |
| 1.1153 | 2.0 | 616 | 1.2054 | 30.7076 | 21.7771 | 29.3115 | 29.9377 |
| 0.78 | 3.0 | 924 | 1.1096 | 32.4164 | 22.494 | 31.0367 | 31.8135 |
| 0.5335 | 4.0 | 1232 | 1.1547 | 33.2561 | 23.6119 | 32.1371 | 32.591 |
| 0.361 | 5.0 | 1540 | 1.1841 | 32.6763 | 23.1598 | 31.2322 | 32.278 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
arb9p4/rl_course_vizdoom_health_gathering_supreme | arb9p4 | 2023-05-12T14:43:29Z | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-12T13:42:41Z | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 13.07 +/- 5.50
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r arb9p4/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
# assumes the stock VizDoom enjoy script that ships with Sample-Factory 2.x
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
# assumes the stock VizDoom training script that ships with Sample-Factory 2.x
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
|
itsmeboris/bert-base-cased-ner | itsmeboris | 2023-05-12T14:30:54Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2023-05-12T14:02:52Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-cased-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3793
- Job Title precision: 0.8079
- Job Title recall: 0.8248
- Job Title f1: 0.8163
- Loc precision: 0.8911
- Loc recall: 0.9081
- Loc f1: 0.8995
- Org precision: 0.6484
- Org recall: 0.7620
- Org f1: 0.7006
- Misc precision: 0.6134
- Misc recall: 0.7201
- Misc f1: 0.6625
- Precision: 0.7800
- Recall: 0.8265
- F1: 0.8025
- Accuracy: 0.8606
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Job Title precision | Job Title recall | Job Title f1 | Loc precision | Loc recall | Loc f1 | Org precision | Org recall | Org f1 | Misc precision | Misc recall | Misc f1 | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:-------------:|:----------:|:------:|:-------------:|:----------:|:------:|:--------------:|:-----------:|:-------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 308 | 0.3793 | 0.8079 | 0.8248 | 0.8163 | 0.8911 | 0.9081 | 0.8995 | 0.6484 | 0.7620 | 0.7006 | 0.6134 | 0.7201 | 0.6625 | 0.7800 | 0.8265 | 0.8025 | 0.8606 |
| 0.4249 | 2.0 | 616 | 0.3866 | 0.7911 | 0.8728 | 0.8299 | 0.8676 | 0.9541 | 0.9088 | 0.6551 | 0.7886 | 0.7157 | 0.6623 | 0.6962 | 0.6789 | 0.7719 | 0.8669 | 0.8167 | 0.8685 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.7.1+cu110
- Datasets 2.12.0
- Tokenizers 0.13.2
|
zeeshan-sardar/Taxi-v3 | zeeshan-sardar | 2023-05-12T14:25:30Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-12T14:25:28Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.69
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="zeeshan-sardar/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
ku-nlp/deberta-v2-base-japanese | ku-nlp | 2023-05-12T14:13:03Z | 56,613 | 28 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"fill-mask",
"deberta",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"dataset:oscar",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-01-05T08:04:14Z | ---
language: ja
license: cc-by-sa-4.0
library_name: transformers
tags:
- deberta
- deberta-v2
- fill-mask
datasets:
- wikipedia
- cc100
- oscar
metrics:
- accuracy
mask_token: "[MASK]"
widget:
- text: "京都 大学 で 自然 言語 処理 を [MASK] する 。"
---
# Model Card for Japanese DeBERTa V2 base
## Model description
This is a Japanese DeBERTa V2 base model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the Japanese portion of OSCAR.
## How to use
You can use this model for masked language modeling as follows:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('ku-nlp/deberta-v2-base-japanese')
model = AutoModelForMaskedLM.from_pretrained('ku-nlp/deberta-v2-base-japanese')
sentence = '京都 大学 で 自然 言語 処理 を [MASK] する 。' # input should be segmented into words by Juman++ in advance
encoding = tokenizer(sentence, return_tensors='pt')
...
```
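To turn the snippet above into an actual prediction, a short generic masked-LM decoding continuation (not part of the original card) could look like this:
```python
import torch
with torch.no_grad():
    output = model(**encoding)
mask_positions = (encoding['input_ids'] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_ids = output.logits[0, mask_positions].argmax(dim=-1)
print(tokenizer.decode(predicted_ids))  # top-1 guess for each [MASK]
```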
You can also fine-tune this model on downstream tasks.
## Tokenization
The input text should be segmented into words by [Juman++](https://github.com/ku-nlp/jumanpp) in advance. [Juman++ 2.0.0-rc3](https://github.com/ku-nlp/jumanpp/releases/tag/v2.0.0-rc3) was used for pre-training. Each word is tokenized into subwords by [sentencepiece](https://github.com/google/sentencepiece).
## Training data
We used the following corpora for pre-training:
- Japanese Wikipedia (as of 20221020, 3.2GB, 27M sentences, 1.3M documents)
- Japanese portion of CC-100 (85GB, 619M sentences, 66M documents)
- Japanese portion of OSCAR (54GB, 326M sentences, 25M documents)
Note that we filtered out documents annotated with "header", "footer", or "noisy" tags in OSCAR.
Also note that Japanese Wikipedia was duplicated 10 times to make the total size of the corpus comparable to that of CC-100 and OSCAR. As a result, the total size of the training data is 171GB.
## Training procedure
We first segmented texts in the corpora into words using [Juman++](https://github.com/ku-nlp/jumanpp).
Then, we built a sentencepiece model with 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC)) and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece).
We tokenized the segmented corpora into subwords using the sentencepiece model and trained the Japanese DeBERTa model using [transformers](https://github.com/huggingface/transformers) library.
The training took three weeks using 8 NVIDIA A100-SXM4-40GB GPUs.
The following hyperparameters were used during pre-training:
- learning_rate: 2e-4
- per_device_train_batch_size: 44
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 6
- total_train_batch_size: 2,112
- max_seq_length: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear schedule with warmup
- training_steps: 500,000
- warmup_steps: 10,000
The accuracy of the trained model on the masked language modeling task was 0.779.
The evaluation set consists of 5,000 randomly sampled documents from each of the training corpora.
## Fine-tuning on NLU tasks
We fine-tuned the following models and evaluated them on the dev set of JGLUE.
We tuned learning rate and training epochs for each model and task following [the JGLUE paper](https://www.jstage.jst.go.jp/article/jnlp/30/1/30_63/_pdf/-char/ja).
| Model | MARC-ja/acc | JSTS/pearson | JSTS/spearman | JNLI/acc | JSQuAD/EM | JSQuAD/F1 | JComQA/acc |
|-------------------------------|-------------|--------------|---------------|----------|-----------|-----------|------------|
| Waseda RoBERTa base | 0.965 | 0.913 | 0.876 | 0.905 | 0.853 | 0.916 | 0.853 |
| Waseda RoBERTa large (seq512) | 0.969 | 0.925 | 0.890 | 0.928 | 0.910 | 0.955 | 0.900 |
| LUKE Japanese base* | 0.965 | 0.916 | 0.877 | 0.912 | - | - | 0.842 |
| LUKE Japanese large* | 0.965 | 0.932 | 0.902 | 0.927 | - | - | 0.893 |
| DeBERTaV2 base | 0.970 | 0.922 | 0.886 | 0.922 | 0.899 | 0.951 | 0.873 |
| DeBERTaV2 large | 0.968 | 0.925 | 0.892 | 0.924 | 0.912 | 0.959 | 0.890 |
*The scores of LUKE are from [the official repository](https://github.com/studio-ousia/luke).
## Acknowledgments
This work was supported by Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (JHPCN) through General Collaboration Project no. jh221004, "Developing a Platform for Constructing and Sharing of Large-Scale Japanese Language Models".
For training models, we used the mdx: a platform for the data-driven future.
|
ku-nlp/deberta-v2-large-japanese | ku-nlp | 2023-05-12T14:10:35Z | 36,402 | 9 | transformers | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"fill-mask",
"deberta",
"ja",
"dataset:wikipedia",
"dataset:cc100",
"dataset:oscar",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-01-07T07:45:25Z | ---
language: ja
license: cc-by-sa-4.0
library_name: transformers
tags:
- deberta
- deberta-v2
- fill-mask
datasets:
- wikipedia
- cc100
- oscar
metrics:
- accuracy
mask_token: "[MASK]"
widget:
- text: "京都 大学 で 自然 言語 処理 を [MASK] する 。"
---
# Model Card for Japanese DeBERTa V2 large
## Model description
This is a Japanese DeBERTa V2 large model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the
Japanese portion of OSCAR.
## How to use
You can use this model for masked language modeling as follows:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('ku-nlp/deberta-v2-large-japanese')
model = AutoModelForMaskedLM.from_pretrained('ku-nlp/deberta-v2-large-japanese')
sentence = '京都 大学 で 自然 言語 処理 を [MASK] する 。' # input should be segmented into words by Juman++ in advance
encoding = tokenizer(sentence, return_tensors='pt')
...
```
You can also fine-tune this model on downstream tasks.
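As a rough, hedged illustration (not from the original card; the toy dataset, label count, and hyperparameters are placeholders, and real inputs must still be pre-segmented with Juman++):
```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
tokenizer = AutoTokenizer.from_pretrained('ku-nlp/deberta-v2-large-japanese')
model = AutoModelForSequenceClassification.from_pretrained(
    'ku-nlp/deberta-v2-large-japanese', num_labels=2)  # num_labels depends on your task
# Toy data; real texts must be pre-segmented into words by Juman++ (see Tokenization below).
data = Dataset.from_dict({'text': ['この 映画 は 面白い 。', 'この 映画 は つまらない 。'], 'label': [1, 0]})
data = data.map(lambda b: tokenizer(b['text'], truncation=True), batched=True)
trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir='out', per_device_train_batch_size=2, num_train_epochs=1),
    train_dataset=data,
    tokenizer=tokenizer,  # enables dynamic padding during collation
)
trainer.train()
```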
## Tokenization
The input text should be segmented into words by [Juman++](https://github.com/ku-nlp/jumanpp) in
advance. [Juman++ 2.0.0-rc3](https://github.com/ku-nlp/jumanpp/releases/tag/v2.0.0-rc3) was used for pre-training. Each
word is tokenized into subwords by [sentencepiece](https://github.com/google/sentencepiece).
## Training data
We used the following corpora for pre-training:
- Japanese Wikipedia (as of 20221020, 3.2GB, 27M sentences, 1.3M documents)
- Japanese portion of CC-100 (85GB, 619M sentences, 66M documents)
- Japanese portion of OSCAR (54GB, 326M sentences, 25M documents)
Note that we filtered out documents annotated with "header", "footer", or "noisy" tags in OSCAR.
Also note that Japanese Wikipedia was duplicated 10 times to make the total size of the corpus comparable to that of
CC-100 and OSCAR. As a result, the total size of the training data is 171GB.
## Training procedure
We first segmented texts in the corpora into words using [Juman++](https://github.com/ku-nlp/jumanpp).
Then, we built a sentencepiece model with 32000 tokens including words ([JumanDIC](https://github.com/ku-nlp/JumanDIC))
and subwords induced by the unigram language model of [sentencepiece](https://github.com/google/sentencepiece).
We tokenized the segmented corpora into subwords using the sentencepiece model and trained the Japanese DeBERTa model
using [transformers](https://github.com/huggingface/transformers) library.
The training took 36 days using 8 NVIDIA A100-SXM4-40GB GPUs.
The following hyperparameters were used during pre-training:
- learning_rate: 1e-4
- per_device_train_batch_size: 18
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 16
- total_train_batch_size: 2,304
- max_seq_length: 512
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-06
- lr_scheduler_type: linear schedule with warmup
- training_steps: 300,000
- warmup_steps: 10,000
The accuracy of the trained model on the masked language modeling task was 0.799.
The evaluation set consists of 5,000 randomly sampled documents from each of the training corpora.
## Fine-tuning on NLU tasks
We fine-tuned the following models and evaluated them on the dev set of JGLUE.
We tuned learning rate and training epochs for each model and task
following [the JGLUE paper](https://www.jstage.jst.go.jp/article/jnlp/30/1/30_63/_pdf/-char/ja).
| Model | MARC-ja/acc | JSTS/pearson | JSTS/spearman | JNLI/acc | JSQuAD/EM | JSQuAD/F1 | JComQA/acc |
|-------------------------------|-------------|--------------|---------------|----------|-----------|-----------|------------|
| Waseda RoBERTa base | 0.965 | 0.913 | 0.876 | 0.905 | 0.853 | 0.916 | 0.853 |
| Waseda RoBERTa large (seq512) | 0.969 | 0.925 | 0.890 | 0.928 | 0.910 | 0.955 | 0.900 |
| LUKE Japanese base* | 0.965 | 0.916 | 0.877 | 0.912 | - | - | 0.842 |
| LUKE Japanese large* | 0.965 | 0.932 | 0.902 | 0.927 | - | - | 0.893 |
| DeBERTaV2 base | 0.970 | 0.922 | 0.886 | 0.922 | 0.899 | 0.951 | 0.873 |
| DeBERTaV2 large | 0.968 | 0.925 | 0.892 | 0.924 | 0.912 | 0.959 | 0.890 |
*The scores of LUKE are from [the official repository](https://github.com/studio-ousia/luke).
## Acknowledgments
This work was supported by Joint Usage/Research Center for Interdisciplinary Large-scale Information Infrastructures (
JHPCN) through General Collaboration Project no. jh221004, "Developing a Platform for Constructing and Sharing of
Large-Scale Japanese Language Models".
For training models, we used the mdx: a platform for the data-driven future.
|
dian34323/fionyjkt48 | dian34323 | 2023-05-12T14:10:31Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-12T14:09:05Z | ---
license: creativeml-openrail-m
---
|
seantyh/mpt-1b-rp200b-dolly | seantyh | 2023-05-12T14:10:13Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"mosaic_gpt",
"text-generation",
"custom_code",
"dataset:togethercomputer/RedPajama-Data-1T",
"arxiv:2302.13971",
"arxiv:2205.14135",
"arxiv:2108.12409",
"license:cc-by-sa-3.0",
"autotrain_compatible",
"region:us"
] | text-generation | 2023-05-12T12:49:07Z | ---
license: cc-by-sa-3.0
datasets:
- togethercomputer/RedPajama-Data-1T
---
[This is an experimental fork of mosaicml/mpt-1b-redpajama-200b-dolly]
# MPT-1b-RedPajama-200b-dolly
MPT-1b-RedPajama-200b-dolly is a 1.3 billion parameter decoder-only transformer pre-trained on the [RedPajama dataset](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) and subsequently fine-tuned on the [Databricks Dolly](https://github.com/databrickslabs/dolly/tree/master/data) instruction dataset.
The model was pre-trained for 200B tokens by sampling from the subsets of the RedPajama dataset in the same proportions as were used by the [Llama series of models](https://arxiv.org/abs/2302.13971).
This model was trained by [MosaicML](https://www.mosaicml.com) and follows a modified decoder-only transformer architecture.
This model is an instruction fine-tuned version of [mpt-1b-redpajama-200b](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b). In other words, the pre-trained version of this model is [mpt-1b-redpajama-200b](https://huggingface.co/mosaicml/mpt-1b-redpajama-200b).
## Model Date
April 20, 2023
## How to Use
Note: This model requires that `trust_remote_code=True` be passed to the `from_pretrained` method.
This is because we use a custom model architecture `MosaicGPT` that is not yet part of the `transformers` package.
`MosaicGPT` includes options for many training efficiency features such as [FlashAttention (Dao et al. 2022)](https://arxiv.org/pdf/2205.14135.pdf), [ALIBI](https://arxiv.org/abs/2108.12409), QK LayerNorm, and more.
```python
import transformers
model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-1b-redpajama-200b-dolly', trust_remote_code=True)
```
To use the optimized triton implementation of FlashAttention, you can load with `attn_impl='triton'` and move the model to `bfloat16` like so:
```python
import torch
model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-1b-redpajama-200b-dolly', trust_remote_code=True, attn_impl='triton')
model.to(device='cuda:0', dtype=torch.bfloat16)
```
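For text generation, a minimal sketch (assuming the custom `MosaicGPT` class supports the standard `generate()` API; the tokenizer choice follows the training-data note below, and the prompt is illustrative):
```python
import torch
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained('EleutherAI/gpt-neox-20b')
model = transformers.AutoModelForCausalLM.from_pretrained('mosaicml/mpt-1b-redpajama-200b-dolly', trust_remote_code=True)
inputs = tokenizer('Explain what instruction fine-tuning is.\n', return_tensors='pt')
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```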
## Model Description
This model uses the MosaicML LLM codebase, which can be found in the [MosaicML Examples Repository](https://github.com/mosaicml/examples/tree/v0.0.4/examples/llm).
The architecture is a modification of a standard decoder-only transformer.
The transformer has 24 layers, 16 attention heads, and width 2048.
The model has been modified from a standard transformer in the following ways:
* It uses ALiBi and does not use positional embeddings.
* It uses QK LayerNorm.
* It does not use biases.
## Training Data
### Pre-Training
The model was pre-trained for 200B tokens (batch size 2200, sequence length 2048). It was trained on the following data mix:
* 67% RedPajama Common Crawl
* 15% [C4](https://huggingface.co/datasets/c4)
* 4.5% RedPajama GitHub
* 4.5% RedPajama Wikipedia
* 4.5% RedPajama Books
* 2.5% RedPajama Arxiv
* 2% RedPajama StackExchange
This is the same mix of data as was used in the [Llama series of models](https://arxiv.org/abs/2302.13971).
Each sample was chosen from one of the datasets, with the dataset selected with the probability specified above.
The examples were shuffled within each dataset.
Each example was constructed from as many sequences from that dataset as were necessary to fill the 2048 sequence length.
The data was tokenized using the [EleutherAI/gpt-neox-20b](https://huggingface.co/EleutherAI/gpt-neox-20b) tokenizer.
### Fine-Tuning
We fine tuned this model on the [databricks-dolly-15k dataset](https://github.com/databrickslabs/dolly/tree/master/data) released by Databricks, following the same hyperparameters found in their [train_dolly.py](https://github.com/databrickslabs/dolly/blob/master/train_dolly.py) script.
## Training Configuration
This model was pre-trained on 440 A100-40GBs for about half a day using the [MosaicML Platform](https://www.mosaicml.com/platform). The model was pre-trained with sharded data parallelism using FSDP.
## Acknowledgements
This model builds on the work of [Together](https://www.together.xyz), which created the RedPajama dataset with the goal of mimicking the training data used to create the Llama series of models.
We gratefully acknowledge the hard work of the team that put together this dataset, and we hope this model serves as a useful companion to that work.
This model also builds on the work of [Databricks](https://www.databricks.com/), which created the Dolly instruction fine-tuning dataset.
We also gratefully acknowledge the work of the researchers who created the Llama series of models, which was the impetus for our efforts and those who worked on the RedPajama project.
|
asenella/reproducing_mopoe_seed_4 | asenella | 2023-05-12T14:00:49Z | 0 | 0 | null | [
"multivae",
"en",
"license:apache-2.0",
"region:us"
] | null | 2023-05-12T14:00:42Z | ---
language: en
tags:
- multivae
license: apache-2.0
---
### Downloading this model from the Hub
This model was trained with multivae. It can be downloaded or reloaded using the method `load_from_hf_hub`
```python
>>> from multivae.models import AutoModel
>>> model = AutoModel.load_from_hf_hub(hf_hub_path="your_hf_username/repo_name")
```
|
madmaxxed/gpt-work-filter-auto-complete | madmaxxed | 2023-05-12T13:59:23Z | 9 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-12T09:15:35Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt-work-filter-auto-complete
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-work-filter-auto-complete
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0939
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 22 | 0.6395 |
| No log | 2.0 | 44 | 0.6235 |
| No log | 3.0 | 66 | 0.6035 |
| No log | 4.0 | 88 | 0.5845 |
| No log | 5.0 | 110 | 0.5645 |
| No log | 6.0 | 132 | 0.5442 |
| No log | 7.0 | 154 | 0.5346 |
| No log | 8.0 | 176 | 0.5167 |
| No log | 9.0 | 198 | 0.5009 |
| No log | 10.0 | 220 | 0.4893 |
| No log | 11.0 | 242 | 0.4676 |
| No log | 12.0 | 264 | 0.4498 |
| No log | 13.0 | 286 | 0.4382 |
| No log | 14.0 | 308 | 0.4276 |
| No log | 15.0 | 330 | 0.4132 |
| No log | 16.0 | 352 | 0.4075 |
| No log | 17.0 | 374 | 0.3952 |
| No log | 18.0 | 396 | 0.3822 |
| No log | 19.0 | 418 | 0.3677 |
| No log | 20.0 | 440 | 0.3563 |
| No log | 21.0 | 462 | 0.3495 |
| No log | 22.0 | 484 | 0.3455 |
| 0.6366 | 23.0 | 506 | 0.3316 |
| 0.6366 | 24.0 | 528 | 0.3126 |
| 0.6366 | 25.0 | 550 | 0.3118 |
| 0.6366 | 26.0 | 572 | 0.3021 |
| 0.6366 | 27.0 | 594 | 0.2944 |
| 0.6366 | 28.0 | 616 | 0.2878 |
| 0.6366 | 29.0 | 638 | 0.2772 |
| 0.6366 | 30.0 | 660 | 0.2701 |
| 0.6366 | 31.0 | 682 | 0.2643 |
| 0.6366 | 32.0 | 704 | 0.2576 |
| 0.6366 | 33.0 | 726 | 0.2514 |
| 0.6366 | 34.0 | 748 | 0.2467 |
| 0.6366 | 35.0 | 770 | 0.2359 |
| 0.6366 | 36.0 | 792 | 0.2326 |
| 0.6366 | 37.0 | 814 | 0.2205 |
| 0.6366 | 38.0 | 836 | 0.2182 |
| 0.6366 | 39.0 | 858 | 0.2137 |
| 0.6366 | 40.0 | 880 | 0.2086 |
| 0.6366 | 41.0 | 902 | 0.2058 |
| 0.6366 | 42.0 | 924 | 0.1979 |
| 0.6366 | 43.0 | 946 | 0.1930 |
| 0.6366 | 44.0 | 968 | 0.1922 |
| 0.6366 | 45.0 | 990 | 0.1853 |
| 0.4122 | 46.0 | 1012 | 0.1800 |
| 0.4122 | 47.0 | 1034 | 0.1787 |
| 0.4122 | 48.0 | 1056 | 0.1738 |
| 0.4122 | 49.0 | 1078 | 0.1689 |
| 0.4122 | 50.0 | 1100 | 0.1670 |
| 0.4122 | 51.0 | 1122 | 0.1583 |
| 0.4122 | 52.0 | 1144 | 0.1560 |
| 0.4122 | 53.0 | 1166 | 0.1540 |
| 0.4122 | 54.0 | 1188 | 0.1507 |
| 0.4122 | 55.0 | 1210 | 0.1475 |
| 0.4122 | 56.0 | 1232 | 0.1452 |
| 0.4122 | 57.0 | 1254 | 0.1458 |
| 0.4122 | 58.0 | 1276 | 0.1425 |
| 0.4122 | 59.0 | 1298 | 0.1377 |
| 0.4122 | 60.0 | 1320 | 0.1338 |
| 0.4122 | 61.0 | 1342 | 0.1365 |
| 0.4122 | 62.0 | 1364 | 0.1278 |
| 0.4122 | 63.0 | 1386 | 0.1272 |
| 0.4122 | 64.0 | 1408 | 0.1253 |
| 0.4122 | 65.0 | 1430 | 0.1251 |
| 0.4122 | 66.0 | 1452 | 0.1217 |
| 0.4122 | 67.0 | 1474 | 0.1219 |
| 0.4122 | 68.0 | 1496 | 0.1177 |
| 0.3005 | 69.0 | 1518 | 0.1174 |
| 0.3005 | 70.0 | 1540 | 0.1155 |
| 0.3005 | 71.0 | 1562 | 0.1144 |
| 0.3005 | 72.0 | 1584 | 0.1127 |
| 0.3005 | 73.0 | 1606 | 0.1106 |
| 0.3005 | 74.0 | 1628 | 0.1098 |
| 0.3005 | 75.0 | 1650 | 0.1092 |
| 0.3005 | 76.0 | 1672 | 0.1067 |
| 0.3005 | 77.0 | 1694 | 0.1086 |
| 0.3005 | 78.0 | 1716 | 0.1042 |
| 0.3005 | 79.0 | 1738 | 0.1051 |
| 0.3005 | 80.0 | 1760 | 0.1038 |
| 0.3005 | 81.0 | 1782 | 0.1022 |
| 0.3005 | 82.0 | 1804 | 0.1015 |
| 0.3005 | 83.0 | 1826 | 0.1004 |
| 0.3005 | 84.0 | 1848 | 0.1003 |
| 0.3005 | 85.0 | 1870 | 0.0978 |
| 0.3005 | 86.0 | 1892 | 0.0987 |
| 0.3005 | 87.0 | 1914 | 0.0974 |
| 0.3005 | 88.0 | 1936 | 0.0975 |
| 0.3005 | 89.0 | 1958 | 0.0965 |
| 0.3005 | 90.0 | 1980 | 0.0960 |
| 0.2455 | 91.0 | 2002 | 0.0958 |
| 0.2455 | 92.0 | 2024 | 0.0952 |
| 0.2455 | 93.0 | 2046 | 0.0952 |
| 0.2455 | 94.0 | 2068 | 0.0944 |
| 0.2455 | 95.0 | 2090 | 0.0943 |
| 0.2455 | 96.0 | 2112 | 0.0940 |
| 0.2455 | 97.0 | 2134 | 0.0942 |
| 0.2455 | 98.0 | 2156 | 0.0940 |
| 0.2455 | 99.0 | 2178 | 0.0939 |
| 0.2455 | 100.0 | 2200 | 0.0939 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AustinCarthy/Base_100Kphish_benignFall_IL_10K_OnlyPhish_from_benign_top_p_0.75 | AustinCarthy | 2023-05-12T13:54:37Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-12T12:53:56Z | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Base_100Kphish_benignFall_IL_10K_OnlyPhish_from_benign_top_p_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Base_100Kphish_benignFall_IL_10K_OnlyPhish_from_benign_top_p_0.75
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0237
- Accuracy: 0.9975
- F1: 0.9731
- Precision: 0.9983
- Recall: 0.9492
- Roc Auc Score: 0.9746
- Tpr At Fpr 0.01: 0.9508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0042 | 1.0 | 6563 | 0.0276 | 0.9966 | 0.9628 | 0.9983 | 0.9298 | 0.9649 | 0.9308 |
| 0.0024 | 2.0 | 13126 | 0.0242 | 0.9972 | 0.9698 | 0.9973 | 0.9438 | 0.9718 | 0.927 |
| 0.0026 | 3.0 | 19689 | 0.0244 | 0.9970 | 0.9679 | 0.9987 | 0.939 | 0.9695 | 0.9514 |
| 0.0003 | 4.0 | 26252 | 0.0293 | 0.9968 | 0.9657 | 0.9989 | 0.9346 | 0.9673 | 0.9472 |
| 0.0007 | 5.0 | 32815 | 0.0237 | 0.9975 | 0.9731 | 0.9983 | 0.9492 | 0.9746 | 0.9508 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
|
Amalq/roberta-large-schizophrenia-v2 | Amalq | 2023-05-12T13:45:51Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-05-12T12:54:45Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-large-schizophrenia-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-schizophrenia-v2
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5239
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 248 | 1.5340 |
| No log | 2.0 | 496 | 1.5273 |
| 1.6401 | 3.0 | 744 | 1.5209 |
| 1.6401 | 4.0 | 992 | 1.5218 |
| 1.5704 | 5.0 | 1240 | 1.5167 |
| 1.5704 | 6.0 | 1488 | 1.5245 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
luigisaetta/whisper-atcosim2 | luigisaetta | 2023-05-12T13:26:03Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-05-12T11:26:48Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-atcosim2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-atcosim2
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0524
- Wer: 0.0304
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 300
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5702 | 0.2 | 50 | 0.2557 | 0.1007 |
| 0.1181 | 0.39 | 100 | 0.1144 | 0.0775 |
| 0.1084 | 0.59 | 150 | 0.0747 | 0.0482 |
| 0.0737 | 0.79 | 200 | 0.0616 | 0.0369 |
| 0.064 | 0.98 | 250 | 0.0556 | 0.0440 |
| 0.0313 | 1.18 | 300 | 0.0524 | 0.0304 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.11.0
|
LarryAIDraw/asuna1-000004 | LarryAIDraw | 2023-05-12T13:26:00Z | 0 | 1 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-12T13:18:28Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/63637/ichinose-asuna-blue-archive-or-character-lora-1630 |
LarryAIDraw/towerofgod_androssi_zahard | LarryAIDraw | 2023-05-12T13:25:16Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-12T13:17:26Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/63523/androssi-zahard-endorsi-or-tower-of-god |
LarryAIDraw/nanami_mami_v1 | LarryAIDraw | 2023-05-12T13:23:44Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-12T13:14:51Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/64071/nanami-mami-rent-a-girlfriend |
LarryAIDraw/yami-v1 | LarryAIDraw | 2023-05-12T13:23:29Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-12T13:14:31Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/63759/yami-3-in-one-to-love-ru-darkness-tolove |
LarryAIDraw/Amano_Hina | LarryAIDraw | 2023-05-12T13:23:17Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-12T13:14:06Z | ---
license: creativeml-openrail-m
---
https://civitai.com/models/63559/amano-hina-tenki-no-ko-weathering-with-you |
piperunner/ppo-LunarLander-v2 | piperunner | 2023-05-12T13:15:46Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-12T13:15:28Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 265.13 +/- 21.48
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's files for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
# The filename is assumed; verify it against the files listed in this repo.
checkpoint = load_from_hub(repo_id="piperunner/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
zetavg/zh-tw-pythia-1b-a12k-f84566-embeddings-gcp-a100-trans-t3-d2ad | zetavg | 2023-05-12T13:07:34Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"dataset:zetavg/zh-tw-pythia-te01-zh-tw-pythia-ta12k-f84566-embeddings-tr-ec3f26-c2048",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-05-12T13:04:21Z | ---
datasets:
- zetavg/zh-tw-pythia-te01-zh-tw-pythia-ta12k-f84566-embeddings-tr-ec3f26-c2048
---
# zh-tw-pythia-1b-a12k-f84566-embeddings-gcp-a100-trans-t3-d2ad
This model is a part of the `zh-tw-pythia` project.
* Base model: `EleutherAI/pythia-1b`
* Tokenizer: `zh-tw-pythia-tokenizer-a12k-f84566`
* Vocab size: `62232`
* Train pass: `embeddings`
* Dataset used: `zh-tw-pythia-te01-zh-tw-pythia-ta12k-f84566-embeddings-tr-ec3f26-c2048`
* Full config:
```json
{"project_name": "zh-tw-pythia", "group_name": "te01", "base_tokenizer_name": "EleutherAI/pythia-70m", "base_model_name": "EleutherAI/pythia-1b", "tokenizer_name": "zh-tw-pythia-tokenizer-a12k-f84566", "hf_user_or_org_name": "zetavg", "tokenizer": {"build_with": "word_frequency_list", "tokens_to_add": 12000, "word_frequency_list_settings": {"word_frequency_list_name": "zetavg/tw-sinica-corpus-word-frequency", "include_words": ["。", ",", "、", "?", "!", ";", ":", "……", "~", "「", "」", "『", "』", "【", "】", "〖", "〗", "(", ")", "〔", "〕", "[", "]", "{", "}", "《", "》", "〈", "〉", "——", "──", "-", "−", "_", "・", ".", "·", "/", "\", "|", "<", ">"], "replace_rules": [{"match": {"regex": "�"}, "replace": null}, {"match": {"pos": ["Nb", "FW", null]}, "replace": null, "except": ["奧運", "中共", "國民黨", "民進黨", "新黨", "共產黨", "媽祖", "耶穌"]}, {"match": {"regex": ["^[A-Za-z0-9﹒• ]+$", "^[零一二兩三四五六七八九十廿卅百千萬億兆壹貳參肆伍陸柒捌玖拾佰仟0-9﹒•]{2,}$", "^([零一二兩三四五六七八九十廿卅百千萬億兆壹貳參肆伍陸柒捌玖拾佰仟0-9﹒•]+)$", "^[第數][零一二兩三四五六七八九十百千萬億兆0-9﹒•]+$", "^[零一二兩三四五六七八九十廿卅百千萬億兆0-9﹒•]+分之[零一二兩三四五六七八九十廿卅百千萬億兆0-9﹒•]+$", "^[零一二兩三四五六七八九十廿卅百千萬億兆0-9﹒•]+[多餘來幾成次年月日天時分點世代歲起段樓%]$", "^[零一二三四五六七八九十廿卅0-9]+(月份|年代?|世紀|學?年度|年級)$", "^(星期|週|周)[一二三四五六日]$"]}, "replace": null, "except": ["十分", "一起", "一點", "一時", "千萬", "兩三", "百分之百"]}, {"match": {"pos": "VHC", "regex": "^(.{2,})化$"}, "sub": "\\1"}, {"match": "高爾夫球場", "replace": "高爾夫"}, {"match": {"regex": "^(.+球)場$"}, "sub": "\\1"}, {"match": {"pos": "Nc", "regex": "^(.{2,})園區$"}, "sub": "\\1"}, {"match": {"pos": "Nc", "regex": "^(.{2,})[鄉鎮縣市區]$"}, "sub": "\\1"}, {"match": {"pos": "Nc", "regex": "^(.{2,})[界院部會署局館系所]$"}, "sub": "\\1", "except": ["委員會", "研究所", "中研院", "國科會", "資策會", "經建會", "工研院", "電信總局", "鎮公所", "事務所", "交易所", "農委會", "鄉公所", "地檢署", "警分局", "派出所", "托兒所", "消基會", "文建會", "兩廳院", "陸委會", "市議會"]}, {"match": {"pos": "Na", "regex": "^(.{2,})人$"}, "sub": "\\1", "except": ["年輕人", "負責人", "投資人", "候選人", "一家人", "當地人", "製作人"]}, {"match": {"pos": "Na", "regex": "^(.{2,3})學?家$"}, "sub": "\\1", "except": ["女人家", "婦人家", "新儒家", "窮人家", "縱橫家", "老人家", "老東家", "闊人家", "大戶人家", "婦道人家", "小戶人家", "水上人家", "諸子百家"]}, {"match": {"pos": "Na", "regex": "^副?總?([^副總]{2,})師$"}, "sub": "\\1", "except": ["中醫師", "囝仔師", "正機師", "準教師", "獸醫師", "班導師", "練馬師", "總舖師", "老像師", "新三十師", "至聖先師", "音樂大師"]}, {"match": {"pos": "Na", "regex": "^[原前]?(?:代|代理)?副?總?([^前代副總議警里首院部署局廳司處科組課股]{2,})[院部署局廳司處科組課股]?次?長$"}, "sub": "\\1", "except": ["董事長", "理事長", "秘書長", "執行長", "分局長", "縣市長", "一技之長", "省市長", "負成長", "高成長", "大家長", "小組長", "區組長", "低成長", "偵一組長", "停管隊長", "考選部長", "年增長", "正成長", "支店長", "公賣局長", "中宣部長", "小市長"]}, {"match": {"pos": "Na", "regex": "^副?總?正?([^副總正議委人隊]{2,})[委人隊]?員$"}, "sub": "\\1", "except": ["主跑員", "乘務員", "佐理員", "共黨員", "外務員", "從業員", "特派員", "義服員", "銜道員", "啦啦隊員", "指服團員"]}, {"match": {"pos": "Na", "regex": "^副(.{2,})$"}, "sub": "\\1", "except": ["副作用"]}, {"match": "一剎那", "replace": "剎那"}, {"match": "不能夠", "replace": "能夠"}, {"match": "光碟機", "replace": "光碟"}, {"match": "共和國", "replace": "共和"}, {"match": "原住民", "replace": "住民"}, {"match": "吸引力", "replace": "吸引"}, {"match": "國際性", "replace": "國際"}, {"match": "垃圾場", "replace": "垃圾"}, {"match": "大規模", "replace": "規模"}, {"match": "廢棄物", "replace": "廢棄"}, {"match": "愛滋病", "replace": "愛滋"}, {"match": "成交量", "replace": "成交"}, {"match": "接觸到", "replace": "接觸"}, {"match": "掩埋場", "replace": "掩埋"}, {"match": "正確率", "replace": "正確"}, {"match": "清華園", "replace": "清華"}, {"match": "聯誼會", "replace": "聯誼"}, {"match": "調查站", "replace": "調查"}, {"match": "轉換成", "replace": "轉換"}, {"match": "開放式", "replace": "開放"}, {"match": "開玩笑", "replace": "玩笑"}, {"match": 
"陽明山", "replace": "陽明"}, {"match": "雜貨店", "replace": "雜貨"}, {"match": "電視機", "replace": "電視"}, {"match": "高品質", "replace": "品質"}, {"match": "鬆弛法", "replace": "鬆弛"}, {"match": "共產主義", "replace": "共產"}, {"match": "資本主義", "replace": "資本"}, {"match": "微處理器", "replace": "處理器"}, {"match": "有線電視", "replace": "電視"}, {"match": "隨選視訊", "replace": "視訊"}, {"match": "電信總局", "replace": "總局"}, {"match": "進一步", "replace": ["一步", "進一步"]}, {"match": "差不多", "replace": ["不多", "差不多"]}, {"match": "忍不住", "replace": ["不住", "忍不住"]}, {"match": "不見得", "replace": ["見得", "不見得"]}, {"match": "有助於", "replace": ["助於", "有助於"]}, {"match": "舊金山", "replace": ["金山", "舊金山"]}, {"match": "大躍進", "replace": ["躍進", "大躍進"]}, {"match": "半導體", "replace": ["導體", "半導體"]}, {"match": "總幹事", "replace": ["幹事", "總幹事"]}, {"match": "兩廳院", "replace": ["廳院", "兩廳院"]}]}}, "training": {"embeddings": {"run_suffix": "gcp-a100-trans-t3", "max_training_text_length": 2048, "dataset": {"build_with": "translations", "preview_length": 64, "translations_settings": {"source_dataset": "zetavg/coct-en-zh-tw-translations-twp-300k", "lang_1_key": "en", "lang_2_key": "ch", "templates": ["English: {lang_1}\\nChinese: {lang_2}", "Chinese: {lang_2}\\nEnglish: {lang_1}"]}}, "only_train_parameters_matching": ["embed"], "training_arguments": {"num_train_epochs": 1, "per_device_train_batch_size": 16, "gradient_accumulation_steps": 1, "optim": "adamw_torch", "learning_rate": 5e-05, "lr_scheduler_type": "constant", "warmup_steps": 100, "logging_steps": 10, "save_steps": 1000, "save_total_limit": 8}}}, "push_outputs_to_hf": true, "report_to_wandb": true, "wandb_project": "zh-tw-llm"}
``` |
Amalq/roberta-large-schizophrenia-v3 | Amalq | 2023-05-12T12:59:02Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-05-12T11:19:09Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: roberta-large-schizophrenia-v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-schizophrenia-v3
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 248 | 1.5515 |
| No log | 2.0 | 496 | 1.5410 |
| 1.6456 | 3.0 | 744 | 1.5340 |
| 1.6456 | 4.0 | 992 | 1.5326 |
| 1.5589 | 5.0 | 1240 | 1.5248 |
| 1.5589 | 6.0 | 1488 | 1.5308 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jccj/q-Taxi-v3 | jccj | 2023-05-12T12:56:24Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-05T14:51:41Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="jccj/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
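Continuing from the loaded `model` dictionary, a minimal greedy evaluation loop might look like the following sketch; the `"qtable"` key and the Gymnasium step API are assumptions about how the pickle was saved.
```python
import numpy as np
import gymnasium as gym

env = gym.make(model["env_id"])
state, _ = env.reset()
done, episode_return = False, 0
while not done:
    # Greedy action from the learned Q-table; the "qtable" key is an assumption.
    action = int(np.argmax(model["qtable"][state]))
    state, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
    done = terminated or truncated
print(f"episode return: {episode_return}")
```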
|
PhDmath/distilbert-base-uncased-finetuned-emotion | PhDmath | 2023-05-12T12:52:33Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-12T11:33:07Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9295
- name: F1
type: f1
value: 0.9293576247301535
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2169
- Accuracy: 0.9295
- F1: 0.9294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8632 | 1.0 | 250 | 0.3270 | 0.904 | 0.9008 |
| 0.253 | 2.0 | 500 | 0.2169 | 0.9295 | 0.9294 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
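A possible usage sketch, assuming the checkpoint loads through the standard text-classification pipeline; the exact label strings depend on the emotion dataset's label mapping.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="PhDmath/distilbert-base-uncased-finetuned-emotion",
)
# Returns a list like [{"label": "...", "score": ...}]; label names come from the emotion dataset.
print(classifier("I can't wait to see you again!"))
```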
|
choohan/finetuned-opt-squad-covidqa-dataset | choohan | 2023-05-12T12:45:25Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"opt",
"question-answering",
"generated_from_trainer",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-12T12:41:54Z | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: finetuned-opt-squad-covidqa-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-opt-squad-covidqa-dataset
This model is a fine-tuned version of [choohan/finetuned-opt-squad-dataset-3](https://huggingface.co/choohan/finetuned-opt-squad-dataset-3) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
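A hedged usage sketch, assuming the checkpoint exposes an extractive question-answering head that the standard pipeline can load.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="choohan/finetuned-opt-squad-covidqa-dataset")
result = qa(
    question="What virus causes COVID-19?",
    context="COVID-19 is caused by the SARS-CoV-2 coronavirus, first identified in 2019.",
)
print(result["answer"])
```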
|
choohan/finetuned-opt-covidqa-dataset | choohan | 2023-05-12T12:41:16Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"opt",
"question-answering",
"generated_from_trainer",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-12T12:37:42Z | ---
license: other
tags:
- generated_from_trainer
model-index:
- name: finetuned-opt-covidqa-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-opt-covidqa-dataset
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
NabeelUppel/ppo-LunaLander-v2 | NabeelUppel | 2023-05-12T12:36:49Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-12T12:36:30Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 275.91 +/- 21.43
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
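A hedged sketch of what that usage code might look like; the checkpoint filename below is an assumption and should be checked against the repo's file list.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub; the filename is an assumption.
checkpoint = load_from_hub(repo_id="NabeelUppel/ppo-LunaLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes.
eval_env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```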
|
apcl/jam | apcl | 2023-05-12T12:24:47Z | 0 | 2 | null | [
"dataset:apcl/jm52m",
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-04-29T21:16:07Z | ---
license: bigscience-openrail-m
datasets:
- apcl/jm52m
---
# Jam
Jam is a GPT2-like model for research on fine-grained Java analysis. It is intended for analysis of Java source code at the level of methods, statements, and variables, as a foundation for downstream tasks like code completion, comment generation, and automated bug repair.
---
## Jam Training Details
- We trained the jam model using the training procedures from Daniel Grittner's [NanoGPT-LoRA](https://github.com/danielgrittner/nanoGPT-LoRA)
- The model is trained on our own [jm52m dataset](https://huggingface.co/datasets/apcl/jm52m), which consists of the processed source code of 52 million Java methods.
- We train the model on [training set](https://huggingface.co/datasets/apcl/jm52m/blob/main/train.bin) for 1 epoch, roughly 300,000 training iterations.
- Our [GitHub repo](https://github.com/apcl-research/jam/blob/main) contains the code for re-training using the [raw data](https://huggingface.co/datasets/apcl/jm52m/blob/main/fundats-j1.pkl)
| Hyperparameter | Description | Value |
| ----------- | ----------- |------------|
|e | embedding dimensions | 1024 |
|L | number of layers | 24 |
|h | attention heads | 16 |
|c | block size / context length | 256 |
|b | batch size | 4 |
|a | accumulation steps | 32 |
|d | dropout | 0.20 |
|r | learning rate | 3e-5 |
|y | weight decay | 1e-1 |
We train our models using a single NVidia A5000 GPU.
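Since training follows nanoGPT-style code rather than the `transformers` Trainer, a minimal sketch for inspecting the released checkpoint could look like the following; the `ckpt.pt` filename and the `model_args` key are assumptions and should be verified against the repo's file list.
```python
import torch
from huggingface_hub import hf_hub_download

# "ckpt.pt" is an assumed filename -- verify it against the files in apcl/jam.
ckpt_path = hf_hub_download(repo_id="apcl/jam", filename="ckpt.pt")
checkpoint = torch.load(ckpt_path, map_location="cpu")

# nanoGPT-style checkpoints typically bundle the weights with the training config.
print(checkpoint.keys())
print(checkpoint.get("model_args"))
```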
---
## Jam Projects
Current projects using the JAM pre-trained model can be found at our Github repository:
https://github.com/apcl-research/jam
|
eachadea/ggml-gpt4-x-vicuna-13b | eachadea | 2023-05-12T12:20:07Z | 0 | 13 | null | [
"conversational",
"region:us"
] | text-generation | 2023-05-12T11:07:21Z | ---
pipeline_tag: conversational
---
ggml version (post-PR #1405)
The base model used is https://huggingface.co/eachadea/vicuna-13b-1.1
Finetuned on Teknium's GPTeacher dataset, the unreleased Roleplay v2 dataset, the GPT-4-LLM dataset, and the Nous Research Instruct Dataset.
Approximately 180k instructions, all from GPT-4 and all cleaned of OpenAI censorship ("As an AI Language Model", etc.).
The base model still has OpenAI censorship. A new version will be released soon with a cleaned Vicuna base from https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltere
Trained on 8 A100-80GB GPUs for 5 epochs following Alpaca deepspeed training code.
Nous Research Instruct Dataset will be released soon.
GPTeacher, Roleplay v2 by https://huggingface.co/teknium
Wizard LM by https://github.com/nlpxucan
Nous Research Instruct Dataset by https://huggingface.co/karan4d and https://huggingface.co/huemin
Compute provided by our project sponsor https://redmond.ai/ |
trustvare/TrustVare-OST-Duplicate-Remover | trustvare | 2023-05-12T12:15:47Z | 0 | 0 | null | [
"region:us"
] | null | 2023-05-12T11:51:57Z | TrustVare OST Duplicate Remover is a software application that can be used to find and remove duplicate items from an offline Outlook data file (.ost). OST files are created by Microsoft Outlook when it is configured to work in offline mode. They store email messages, contacts, calendars, and other data that are not currently synchronized with the Microsoft Exchange server. Removing duplicate items from an OST file can improve the performance of Microsoft Outlook by reducing the amount of data that the application has to process. The software is easy to use and has a number of features that make it a valuable tool for Microsoft Outlook users.
Read more: https://www.trustvare.com/duplicate-remover/ost/ |
OliAiart/Stable-diffusion | OliAiart | 2023-05-12T12:11:14Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-12T11:20:47Z | ---
license: creativeml-openrail-m
---
|
choohan/finetuned-opt-squad-dataset-4 | choohan | 2023-05-12T12:05:01Z | 95 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"opt",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-12T11:40:39Z | ---
license: other
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: finetuned-opt-squad-dataset-4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-opt-squad-dataset-4
This model is a fine-tuned version of [choohan/finetuned-opt-squad-dataset-3](https://huggingface.co/choohan/finetuned-opt-squad-dataset-3) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
farmerinatechstack/PPO-LunarLander-v2 | farmerinatechstack | 2023-05-12T12:01:22Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-12T12:01:04Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.58 +/- 15.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
henripett/ppo_lunar_lander_v2 | henripett | 2023-05-12T11:53:01Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-12T11:34:47Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 126.66 +/- 105.78
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
|
dian34323/cynthiajkt48 | dian34323 | 2023-05-12T11:52:20Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-12T11:50:47Z | ---
license: creativeml-openrail-m
---
|
OliAiart/models | OliAiart | 2023-05-12T11:22:07Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-12T01:04:25Z | ---
license: creativeml-openrail-m
---
|
choohan/finetuned-opt-squad-dataset-2 | choohan | 2023-05-12T11:08:16Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"opt",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-12T10:42:34Z | ---
license: other
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: finetuned-opt-squad-dataset-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-opt-squad-dataset-2
This model is a fine-tuned version of [choohan/finetuned-opt-squad-dataset](https://huggingface.co/choohan/finetuned-opt-squad-dataset) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Bennet1996/donut-base-sroie8 | Bennet1996 | 2023-05-12T11:03:16Z | 46 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"dataset:imagefolder",
"endpoints_compatible",
"region:us"
] | image-text-to-text | 2023-05-11T16:41:30Z | ---
tags:
- generated_from_trainer
datasets:
- imagefolder
model-index:
- name: donut-base-sroie8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie8
This model is a fine-tuned version of [Bennet1996/donut-base-sroie6](https://huggingface.co/Bennet1996/donut-base-sroie6) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 11
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Gayathri142214002/t5-paraphrase | Gayathri142214002 | 2023-05-12T10:42:57Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-05-12T06:01:40Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-paraphrase
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-paraphrase
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.3873
- eval_runtime: 785.1467
- eval_samples_per_second: 19.012
- eval_steps_per_second: 19.012
- epoch: 0.0
- step: 20
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
AXX1995/anastasianadav1 | AXX1995 | 2023-05-12T10:40:59Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-05-12T10:38:10Z | ---
license: creativeml-openrail-m
---
|
iamjoy/ppo-Pyramids | iamjoy | 2023-05-12T10:40:16Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-05-12T10:40:11Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: iamjoy/ppo-Pyramids
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
seanghay/xlm-roberta-base-imdb | seanghay | 2023-05-12T10:38:06Z | 451 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-12T09:56:06Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93936
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-imdb
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2223
- Accuracy: 0.9394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2345 | 1.0 | 1563 | 0.1808 | 0.9306 |
| 0.1612 | 2.0 | 3126 | 0.2223 | 0.9394 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
choohan/finetuned-opt-squad-dataset | choohan | 2023-05-12T10:34:15Z | 12 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"opt",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-12T10:08:35Z | ---
license: other
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: finetuned-opt-squad-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-opt-squad-dataset
This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
shivansh-ka/Multilingual-Toxic-Comment-Roberta-best | shivansh-ka | 2023-05-12T10:15:00Z | 0 | 0 | keras | [
"keras",
"tf-keras",
"region:us"
] | null | 2023-05-12T10:13:13Z | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | 1e-06 |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 9.999999747378752e-06 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
|
choohan/opt-finetuned-squad-dataset | choohan | 2023-05-12T10:03:18Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"opt",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:other",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-05-12T06:52:21Z | ---
license: other
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: opt-finetuned-squad-dataset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# opt-finetuned-squad-dataset
This model is a fine-tuned version of [choohan/opt-finetuned-squad-dataset](https://huggingface.co/choohan/opt-finetuned-squad-dataset) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
casellimarco/Taxi | casellimarco | 2023-05-12T09:32:12Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-12T09:32:11Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.73
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="casellimarco/Taxi", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
xkirinx/bart-large-mnli-rotten-tomatoes | xkirinx | 2023-05-12T09:30:26Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text-classification",
"generated_from_trainer",
"dataset:rotten_tomatoes",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-12T08:18:32Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- rotten_tomatoes
model-index:
- name: bart-large-mnli-rotten-tomatoes
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-mnli-rotten-tomatoes
This model is a fine-tuned version of [facebook/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) on the rotten_tomatoes dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.1965 | 1.0 | 1067 | 0.6041 |
| 0.1604 | 2.0 | 2134 | 0.6165 |
| 0.0796 | 3.0 | 3201 | 0.6032 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
LinaSaba/BertDisasterTweets | LinaSaba | 2023-05-12T09:28:32Z | 0 | 0 | sklearn | [
"sklearn",
"clim",
"text-classification",
"license:afl-3.0",
"region:us"
] | text-classification | 2023-05-12T08:39:25Z | ---
license: afl-3.0
metrics:
- accuracy
- code_eval
library_name: sklearn
pipeline_tag: text-classification
tags:
- clim
---
# bert-model-disaster-tweets-classification
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the Natural-Language-Processing-with-Disaster-Tweets dataset.
It achieves the following results on the evaluation set:
- Accuracy: 0.82
- F1 Score: 0.82
## Model description
The model loads BertForSequenceClassification, the pretrained BERT model with a single linear classification layer on top, and fine-tunes it with an optimizer that incorporates weight decay, a regularization technique that helps prevent overfitting during training.
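A minimal sketch of that setup, assuming standard `transformers` classes; the weight-decay value is an assumption, while the learning rate and epsilon follow the hyperparameters listed below.
```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# AdamW = Adam with decoupled weight decay; the weight_decay value here is an assumption.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, eps=1e-8, weight_decay=0.01)

batch = tokenizer(["Forest fire near La Ronge Sask. Canada"], return_tensors="pt", padding=True, truncation=True)
logits = model(**batch).logits
print(logits.argmax(dim=-1))  # 0 = not a disaster, 1 = disaster (label order is an assumption)
```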
## Intended uses & limitations
Use this model to classify whether a tweet describes a real disaster or not.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with epsilon = 1e-8.
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Epoch | Average training loss | Training time | Accuracy | F1 |
|:-----:|:---------------------:|:---------------:|:--------:|:----:|
| 1.0 | 0.47 | 0:00:49 | 0.82 | 0.82 |
| 2.0 | 0.36 | 0:00:36 | 0.82 | 0.82 |
| 3.0 | 0.29 | 0:00:51 | 0.82 | 0.82 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|
jakubgajski/PyramydsRND | jakubgajski | 2023-05-12T09:15:44Z | 23 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-05-12T06:45:28Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Step 1: Find your model_id: jakubgajski/PyramydsRND
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
CatherineGeng/vit-base-patch16-224-in21k-euroSat | CatherineGeng | 2023-05-12T09:01:57Z | 63 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"vit",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-05-12T01:29:25Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: CatherineGeng/vit-base-patch16-224-in21k-euroSat
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# CatherineGeng/vit-base-patch16-224-in21k-euroSat
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4854
- Train Accuracy: 0.9415
- Train Top-3-accuracy: 0.9856
- Validation Loss: 0.1574
- Validation Accuracy: 0.9817
- Validation Top-3-accuracy: 0.9988
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 3590, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Train Accuracy | Train Top-3-accuracy | Validation Loss | Validation Accuracy | Validation Top-3-accuracy | Epoch |
|:----------:|:--------------:|:--------------------:|:---------------:|:-------------------:|:-------------------------:|:-----:|
| 0.4854 | 0.9415 | 0.9856 | 0.1574 | 0.9817 | 0.9988 | 0 |
### Framework versions
- Transformers 4.29.1
- TensorFlow 2.4.0
- Datasets 2.12.0
- Tokenizers 0.13.3
|
debin/Taxi-v3 | debin | 2023-05-12T09:01:37Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-12T09:01:35Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.48 +/- 2.75
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
model = load_from_hub(repo_id="debin/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
jesse0506/test | jesse0506 | 2023-05-12T09:00:43Z | 0 | 0 | null | [
"region:us"
] | null | 2023-05-12T08:58:59Z | ```python
import gradio as gr

def process(image):
    # Keep only the red channel (image[:, :, 0]) by zeroing the green and blue channels.
    image[:, :, 1] = 0
    image[:, :, 2] = 0
    return image

gr.Interface(process, "image", "image").launch(share=True)
```
|
debin/q-FrozenLake-v1-4x4-noSlippery | debin | 2023-05-12T08:58:15Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-05-12T08:58:12Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="debin/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
xqchq/TextClassificationTHUCNews | xqchq | 2023-05-12T08:52:32Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:thuc_news",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-05-12T07:24:31Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- thuc_news
model-index:
- name: TextClassificationTHUCNews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TextClassificationTHUCNews
This model is a fine-tuned version of [hfl/minirbt-h256](https://huggingface.co/hfl/minirbt-h256) on the thuc_news dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
|