modelId (string) | author (string) | last_modified (timestamp, UTC) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp, UTC) | card (string)
---|---|---|---|---|---|---|---|---|---
zhow/sd-class-butterflies-64 | zhow | 2022-12-16T09:32:19Z | 0 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-16T09:31:47Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('zhow/sd-class-butterflies-64')
image = pipeline().images[0]
image
```
|
snehalyelmati/mt5-hindi-to-english | snehalyelmati | 2022-12-16T09:31:09Z | 85 | 6 | transformers | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"google/mt5-small",
"machine_translation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-12-16T08:32:48Z | ---
language: en
tags:
- google/mt5-small
- machine_translation
license: apache-2.0
---
# Hindi-English Translation Model
This model is based on the "google/mt5-small" pre-trained model and fine-tuned on a Hindi-to-English translation dataset.
### Parameters
- number of epochs = 8
- batch size = 16
- learning rate = 5e-4
- number of batches = `int(np.ceil(len(dataset) / batch_size))`
- n_warmup_steps = `int(number_of_epochs * number_of_batches * 0.01)`
### Training Loss

### Examples



|
lyua1225/clip-huge-zh-75k-steps-bs4096 | lyua1225 | 2022-12-16T09:29:21Z | 12 | 16 | transformers | [
"transformers",
"pytorch",
"clip",
"zero-shot-image-classification",
"zh",
"Chinese",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | 2022-12-16T06:36:19Z |
---
language: zh
license: creativeml-openrail-m
tags:
- clip
- zh
- Chinese
---
# clip-huge-zh-75k-steps-bs4096
## Brief Introduction
The purpose of training this model is to let Chinese text guide Stable Diffusion 2 generation. The image encoder of [open_clip's CLIP-ViT-H](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K) is frozen and only the text encoder is trained, so that the Chinese latent space is aligned with the original English one. All training samples come from the Chinese subset of [LAION-5B](https://laion.ai/blog/laion-5b/).
Note: because the data volume, batch size, and step count are far smaller than those of the original CLIP-H, this model is still far from convergence and far from the performance a huge-size model should reach. It is only an intermediate result serving as a text encoder for Stable Diffusion 2; you are very welcome to build on it to strengthen its CLIP performance.
## Stable Diffusion 2 Guiding Example
赛博朋克风格的城市街道 (a cyberpunk-style city street)

一只可爱的柴犬 (a cute Shiba Inu)

## Training Details
### Text Encoder
The text encoder uses the same architecture as Stable Diffusion 2: [open_clip's CLIP-ViT-H](https://huggingface.co/laion/CLIP-ViT-H-14-laion2B-s32B-b79K). The goal is to keep the Chinese encoding as close as possible to the semantic space of the original English encoder. The training procedure was:
1. Brute-force, in-place substitution of the vocab and tokenizer of the original English CLIP-huge text encoder with the vocab and tokenizer of Chinese RoBERTa.
2. Copy all weights from the original English text encoder of CLIP-ViT-H.
3. Freeze the entire image encoder as well as the text encoder's transformer layers and text projection, training only the token embeddings. This aligns the Chinese word embeddings with the original English word embeddings while keeping the projected latent space from drifting far away.
4. After a number of steps, unfreeze the entire text encoder so that the whole text model fits the latent space of the CLIP-huge image encoder and converges better.
Note: training uses the CLIP loss on the Chinese subset of [LAION-5B](https://laion.ai/blog/laion-5b/) (roughly 85M text-image pairs after removing dead URLs). The model was trained for 75k steps at a batch size of 4096, so it has not fully converged.
## Usage
### Zero-Shot Classification
```py
import torch
import numpy as np
import requests
from PIL import Image
from transformers import CLIPModel, CLIPFeatureExtractor, AutoTokenizer
model_id = "lyua1225/clip-huge-zh-75k-steps-bs4096"
model = CLIPModel.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
processor = CLIPFeatureExtractor.from_pretrained(model_id)
# online example from OFA-Sys
url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
texts = ["杰尼龟", "妙蛙种子", "皮卡丘", "小火龙"]  # Squirtle, Bulbasaur, Pikachu, Charmander
# compute image feature
inputs = torch.from_numpy(processor(image).pixel_values[0]).unsqueeze(0)
image_features = model.get_image_features(pixel_values=inputs)
image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True)
# compute text features
inputs = tokenizer(text=texts, padding="max_length", max_length=77, return_tensors="pt")
input_ids, attention_mask = inputs.input_ids, inputs.attention_mask
input_dict = dict(input_ids=input_ids, attention_mask=attention_mask)
text_features = model.get_text_features(**input_dict)
text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True) # normalize
# compute probs for each class
logit_scale = model.logit_scale.exp()
logits_per_image = logit_scale * image_features @ text_features.t()
logits_per_text = logits_per_image.t()
probs = logits_per_image.softmax(dim=-1).detach().numpy()
print(np.around(probs, 3))
```
### Guiding Stable Diffusion V2.1
This Chinese text encoder can be used to guide Stable Diffusion 2 generation (FP16 inference is recommended on GPUs with the Turing architecture, the V100, or newer).
```py
import torch
from diffusers import StableDiffusionPipeline
from transformers import AutoTokenizer, CLIPTextModel
clip_id = "lyua1225/clip-huge-zh-75k-steps-bs4096"
sd2_id = "stabilityai/stable-diffusion-2-1"
text_encoder = CLIPTextModel.from_pretrained(clip_id).half()
tokenizer = AutoTokenizer.from_pretrained(clip_id, trust_remote_code=True)
pipe = StableDiffusionPipeline.from_pretrained(sd2_id, torch_dtype=torch.float16, revision="fp16",
tokenizer=tokenizer, text_encoder=text_encoder)
pipe.to("cuda")
image = pipe("赛博朋克风格的城市街道", num_inference_steps=20).images[0]
image.save("cyberpunk.jpeg")
```
|
Narsil/layoutlmv2-finetuned-funsd | Narsil | 2022-12-16T09:17:12Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"object-detection",
"dataset:funsd",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | object-detection | 2022-12-16T09:13:33Z | ---
tags:
- generated_from_trainer
datasets:
- funsd
pipeline_tag: object-detection
widget:
- src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/invoice.png"
example_title: invoice
- src: "https://huggingface.co/spaces/impira/docquery/resolve/2359223c1837a7587402bda0f2643382a6eefeab/contract.jpeg"
example_title: contract
model_index:
- name: layoutlmv2-finetuned-funsd
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: funsd
type: funsd
args: funsd
duplicated_from: nielsr/layoutlmv2-finetuned-funsd
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-funsd
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the funsd dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.9.0.dev0
- Pytorch 1.8.0+cu101
- Datasets 1.9.0
- Tokenizers 0.10.3
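The card omits a usage example; below is a minimal token-classification sketch under the standard LayoutLMv2 API. It requires `detectron2` and `pytesseract`, and the input image path is a placeholder.
```python
from PIL import Image
from transformers import LayoutLMv2Processor, LayoutLMv2ForTokenClassification

# The base processor supplies the OCR and image preprocessing the model expects
processor = LayoutLMv2Processor.from_pretrained("microsoft/layoutlmv2-base-uncased")
model = LayoutLMv2ForTokenClassification.from_pretrained("Narsil/layoutlmv2-finetuned-funsd")

image = Image.open("form.png").convert("RGB")  # placeholder document image
encoding = processor(image, return_tensors="pt")
outputs = model(**encoding)
predicted_labels = outputs.logits.argmax(-1).squeeze().tolist()
print(predicted_labels)
```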
|
JabrilJacobs/q-Taxi-v3 | JabrilJacobs | 2022-12-16T08:31:43Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-16T08:31:30Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper from the Hugging Face Deep RL course materials
model = load_from_hub(repo_id="JabrilJacobs/q-Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
bheshaj/bart-large-cnn-small-billsum-5epochs | bheshaj | 2022-12-16T08:06:31Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-12-16T07:39:08Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: bart-large-cnn-small-billsum-5epochs
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: train[:1%]
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.5406
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-small-billsum-5epochs
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7206
- Rouge1: 0.5406
- Rouge2: 0.312
- Rougel: 0.3945
- Rougelsum: 0.4566
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.373e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|
| 2.3723 | 1.33 | 16 | 1.8534 | 0.5204 | 0.299 | 0.3893 | 0.4441 |
| 1.6579 | 2.67 | 32 | 1.7208 | 0.5427 | 0.3143 | 0.3915 | 0.459 |
| 1.2397 | 4.0 | 48 | 1.7206 | 0.5406 | 0.312 | 0.3945 | 0.4566 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
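For reference, a minimal summarization sketch via the `transformers` pipeline; the bill text below is an invented placeholder.
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="bheshaj/bart-large-cnn-small-billsum-5epochs")

bill_text = (
    "The bill establishes a grant program to help local governments modernize "
    "their water infrastructure and requires annual reports to Congress on the "
    "program's progress."
)
print(summarizer(bill_text, max_length=60, min_length=10)[0]["summary_text"])
```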
|
CreativeEvolution/q-FrozenLake-v1-4x4-noSlippery | CreativeEvolution | 2022-12-16T07:51:22Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-16T07:51:15Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper from the Hugging Face Deep RL course materials
model = load_from_hub(repo_id="CreativeEvolution/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
Shunian/mbti-classification-roberta-base | Shunian | 2022-12-16T07:37:25Z | 13 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-15T21:25:18Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: mbti-classification-roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbti-classification-roberta-base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1673
- Accuracy: 0.3031
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:------:|:---------------:|:--------:|
| 2.1161 | 1.0 | 20490 | 2.0814 | 0.2993 |
| 2.0021 | 2.0 | 40980 | 2.0563 | 0.3073 |
| 1.8974 | 3.0 | 61470 | 2.0769 | 0.3074 |
| 1.8346 | 4.0 | 81960 | 2.1221 | 0.3073 |
| 1.7826 | 5.0 | 102450 | 2.1673 | 0.3031 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.12.1+cu102
- Datasets 2.7.1
- Tokenizers 0.13.2
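A minimal classification sketch via the `transformers` pipeline; the input sentence is illustrative, and the label names come from the model's configuration.
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="Shunian/mbti-classification-roberta-base")
print(classifier("I love spending quiet evenings reading and planning the week ahead.", top_k=3))
```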
|
marma/whisper-tiny-sv | marma | 2022-12-16T07:35:52Z | 7 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:dataset/riksdagen",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-16T07:20:47Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- dataset/riksdagen
metrics:
- wer
model-index:
- name: whisper-tiny-sv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: dataset/riksdagen audiofolder
type: dataset/riksdagen
config: audiofolder
split: train
args: audiofolder
metrics:
- name: Wer
type: wer
value: 0.3700987201570632
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-tiny-sv
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the dataset/riksdagen audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6435
- Wer: 0.3701
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.0032 | 0.08 | 250 | 1.0075 | 0.5063 |
| 0.8983 | 0.17 | 500 | 0.8945 | 0.4649 |
| 0.8227 | 0.25 | 750 | 0.8336 | 0.4491 |
| 0.777 | 0.33 | 1000 | 0.7931 | 0.4314 |
| 0.7728 | 0.42 | 1250 | 0.7640 | 0.4217 |
| 0.7141 | 0.5 | 1500 | 0.7407 | 0.4134 |
| 0.7208 | 0.58 | 1750 | 0.7225 | 0.4023 |
| 0.6911 | 0.66 | 2000 | 0.7083 | 0.3942 |
| 0.6924 | 0.75 | 2250 | 0.6948 | 0.3911 |
| 0.6702 | 0.83 | 2500 | 0.6849 | 0.3884 |
| 0.663 | 0.91 | 2750 | 0.6766 | 0.3769 |
| 0.6548 | 1.0 | 3000 | 0.6686 | 0.3759 |
| 0.638 | 1.08 | 3250 | 0.6627 | 0.3728 |
| 0.6222 | 1.16 | 3500 | 0.6574 | 0.3733 |
| 0.6323 | 1.25 | 3750 | 0.6528 | 0.3691 |
| 0.6192 | 1.33 | 4000 | 0.6498 | 0.3688 |
| 0.633 | 1.41 | 4250 | 0.6469 | 0.3677 |
| 0.6229 | 1.5 | 4500 | 0.6451 | 0.3681 |
| 0.6246 | 1.58 | 4750 | 0.6439 | 0.3706 |
| 0.6214 | 1.66 | 5000 | 0.6435 | 0.3701 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.0a0+8a1a93a
- Datasets 2.7.1
- Tokenizers 0.13.2
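A minimal transcription sketch via the `transformers` ASR pipeline; the audio path is a placeholder for a 16 kHz Swedish recording.
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="marma/whisper-tiny-sv")
print(asr("sample.wav")["text"])  # placeholder path to a Swedish audio file
```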
|
BlueRaccoon/whisper-medium-da | BlueRaccoon | 2022-12-16T07:30:36Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"da",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-14T16:41:09Z | ---
language:
- da
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Medium Danish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: da
split: test
args: da
metrics:
- name: Wer
type: wer
value: 15.36559705418201
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium Danish
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the mozilla-foundation/common_voice_11_0 da dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5759
- Wer: 15.3656
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.016 | 7.58 | 1000 | 0.4492 | 15.7391 |
| 0.0014 | 15.15 | 2000 | 0.5306 | 15.4550 |
| 0.0004 | 22.73 | 3000 | 0.5759 | 15.3656 |
| 0.0003 | 30.3 | 4000 | 0.5981 | 15.4655 |
| 0.0002 | 37.88 | 5000 | 0.6072 | 15.5076 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
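A minimal transcription sketch using the lower-level Whisper API, forcing Danish transcription. The silent placeholder waveform is an assumption; substitute real 16 kHz speech.
```python
import numpy as np
from transformers import WhisperProcessor, WhisperForConditionalGeneration

model_id = "BlueRaccoon/whisper-medium-da"
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

audio = np.zeros(16000, dtype=np.float32)  # placeholder: 1 s of silence at 16 kHz
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
forced_ids = processor.get_decoder_prompt_ids(language="danish", task="transcribe")
predicted_ids = model.generate(input_features, forced_decoder_ids=forced_ids)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```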
|
SiddharthaM/xlm-roberta-targin-final | SiddharthaM | 2022-12-16T07:30:13Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-16T06:44:43Z | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: xlm-roberta-targin-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-targin-final
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8172
- Accuracy: 0.6873
- Precision: 0.6494
- Recall: 0.6422
- F1: 0.6450
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 296 | 0.6065 | 0.6873 | 0.6537 | 0.5833 | 0.5748 |
| 0.597 | 2.0 | 592 | 0.5822 | 0.7015 | 0.6652 | 0.6279 | 0.6332 |
| 0.597 | 3.0 | 888 | 0.5704 | 0.7015 | 0.6654 | 0.6551 | 0.6589 |
| 0.5156 | 4.0 | 1184 | 0.6393 | 0.7044 | 0.6684 | 0.6552 | 0.6597 |
| 0.5156 | 5.0 | 1480 | 0.5924 | 0.7082 | 0.6752 | 0.6720 | 0.6735 |
| 0.4479 | 6.0 | 1776 | 0.7029 | 0.7006 | 0.6629 | 0.6351 | 0.6408 |
| 0.3783 | 7.0 | 2072 | 0.6963 | 0.7072 | 0.6715 | 0.6554 | 0.6606 |
| 0.3783 | 8.0 | 2368 | 0.7636 | 0.6987 | 0.6627 | 0.6549 | 0.6579 |
| 0.3253 | 9.0 | 2664 | 0.7804 | 0.6901 | 0.6549 | 0.6523 | 0.6535 |
| 0.3253 | 10.0 | 2960 | 0.8172 | 0.6873 | 0.6494 | 0.6422 | 0.6450 |
### Framework versions
- Transformers 4.24.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.6.1
- Tokenizers 0.13.1
|
duongkstn/q-FrozenLake-v1-8x8-90000-steps | duongkstn | 2022-12-16T07:05:56Z | 0 | 0 | null | [
"FrozenLake-v1-8x8",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-16T07:05:44Z | ---
tags:
- FrozenLake-v1-8x8
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-8x8-90000-steps
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-8x8
type: FrozenLake-v1-8x8
metrics:
- type: mean_reward
value: 0.18 +/- 0.38
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper from the Hugging Face Deep RL course materials
model = load_from_hub(repo_id="duongkstn/q-FrozenLake-v1-8x8-90000-steps", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
huiziy/my_awesome_qa_model | huiziy | 2022-12-16T06:23:54Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-12-16T05:50:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: my_awesome_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_qa_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on a High School Health Science dataset.
It achieves the following results on the evaluation set:
- Loss: 5.2683
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 3 | 5.6569 |
| No log | 2.0 | 6 | 5.3967 |
| No log | 3.0 | 9 | 5.2683 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
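A minimal extractive-QA sketch via the `transformers` pipeline; the question and context are invented placeholders in the spirit of the health-science dataset mentioned above.
```python
from transformers import pipeline

qa = pipeline("question-answering", model="huiziy/my_awesome_qa_model")
result = qa(
    question="What does a balanced diet include?",
    context="A balanced diet includes fruits, vegetables, whole grains, and lean proteins.",
)
print(result["answer"], result["score"])
```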
|
cleanrl/BreakoutNoFrameskip-v4-dqn_atari_jax-seed1 | cleanrl | 2022-12-16T05:37:31Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"BreakoutNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-16T05:37:27Z | ---
tags:
- BreakoutNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BreakoutNoFrameskip-v4
type: BreakoutNoFrameskip-v4
metrics:
- type: mean_reward
value: 291.10 +/- 116.43
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **BreakoutNoFrameskip-v4**
This is a trained model of a DQN agent playing BreakoutNoFrameskip-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn_atari_jax.py).
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/BreakoutNoFrameskip-v4-dqn_atari_jax-seed1/raw/main/dqn.py
curl -OL https://huggingface.co/cleanrl/BreakoutNoFrameskip-v4-dqn_atari_jax-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/BreakoutNoFrameskip-v4-dqn_atari_jax-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqn_atari_jax.py --track --capture-video --save-model --upload-model --hf-entity cleanrl --env-id BreakoutNoFrameskip-v4 --seed 1
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': True,
'end_e': 0.01,
'env_id': 'BreakoutNoFrameskip-v4',
'exp_name': 'dqn_atari_jax',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.0001,
'learning_starts': 80000,
'save_model': True,
'seed': 1,
'start_e': 1,
'target_network_frequency': 1000,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
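For context, `start_e`, `end_e`, and `exploration_fraction` above parameterize a linearly annealed epsilon-greedy schedule. Below is a sketch of how such a schedule is typically computed; CleanRL's DQN scripts use an equivalent helper.
```python
def linear_schedule(start_e: float, end_e: float, duration: float, t: int) -> float:
    """Linearly anneal epsilon from start_e to end_e over `duration` steps, then hold."""
    slope = (end_e - start_e) / duration
    return max(slope * t + start_e, end_e)

# duration = exploration_fraction * total_timesteps
epsilon = linear_schedule(1.0, 0.01, 0.1 * 10_000_000, t=500_000)
print(epsilon)  # 0.505 halfway through the exploration phase
```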
|
doctorderp/planet_of_the_apes | doctorderp | 2022-12-16T05:23:28Z | 0 | 2 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-12-15T06:36:19Z | ---
license: creativeml-openrail-m
---
Preview Images
https://imgur.com/a/vwO6f5A
IMPORTANT INSTRUCTIONS!!
This model was trained on the SD 1.5 base version, but it also works for 1.4, as they both share the same CLIP encoder.
Install instructions.
Simply place the chimp.pt file inside the \stable-diffusion-webui\models\hypernetworks folder, then load the model inside the Automatic1111 interface under Settings > Hypernetwork.
Use instructions.
Use a hypernetwork strength between 0.55 and 1.0: higher strength gives a more realistic chimp look, while 0.55 gives a more human-shaped chimp look. I find 0.7 works well enough.
Use the DPM++ SDE Karras sampler with 15 steps and a CFG of 6.0.
Make sure to always include the word chimp somewhere in the prompt. For people, always preface the subject with chimp, for example "chimp man walking", "chimp girl playing in the backyard", etc.
VERY IMPORTANT! Always describe the background in some detail or you WILL get a very generic, boring background. So, for example, DON'T just say "an old chimp man". DO say "an old chimp man inside a rustic hut".
Some fun info. People have been sleeping on hypernetworks and I plan to change that; hopefully the flexibility of this hypernetwork will show everyone their true potential. Because this model is a hypernetwork, it can be used in conjunction with ANY model based on the 1.4 CLIP architecture. That means it will work on any custom 1.4 or 1.5 model, like the Modern Disney model, Classic Disney, etc. For example, say you want to use Classic Disney as the base: simply load the Classic Disney model and preface every prompt with "classic disney", as per that model's instructions, then follow up with my "chimp" tag as instructed once you have loaded the hypernetwork. The prompt should look something like "classic disney. chimp girl playing in the backyard." Adjust the hypernetwork strength to 0.5 for a more cartoon look or 0.7 for a realistic chimp look. Have fun folks!
|
taskmasterpeace/autotrain-Consequenv05-WEW6KM47ET-2492376867 | taskmasterpeace | 2022-12-16T03:39:39Z | 0 | 0 | diffusers | [
"diffusers",
"autotrain",
"stable-diffusion",
"text-to-image",
"dataset:taskmasterpeace/autotrain-data-Consequenv05-WEW6KM47ET",
"co2_eq_emissions",
"region:us"
] | text-to-image | 2022-12-16T03:18:52Z | ---
tags:
- autotrain
- stable-diffusion
- text-to-image
datasets:
- taskmasterpeace/autotrain-data-Consequenv05-WEW6KM47ET
co2_eq_emissions:
emissions: 39.499488037662175
---
# Model Trained Using AutoTrain
- Problem type: Dreambooth
- Model ID: 2492376867
- CO2 Emissions (in grams): 39.4995 |
NOISK8/laywaxys | NOISK8 | 2022-12-16T03:01:16Z | 1 | 0 | diffusers | [
"diffusers",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-12-16T02:56:42Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### laywaxys Dreambooth model trained by NOISK8 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
Sample pictures of this concept:
|
JunHwi/kmhas_binary | JunHwi | 2022-12-16T02:53:53Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"electra",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-16T02:12:57Z | Pretrained K-mHas with binary-label model with "koelectra-v3"
You can use tokenizer of this model with "monologg/koelectra-v3-base-discriminator"
dataset : https://huggingface.co/datasets/jeanlee/kmhas_korean_hate_speech
pretrained_model : https://huggingface.co/monologg/koelectra-base-v3-discriminator
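A minimal loading sketch based on the notes above. The sequence-classification head is inferred from the model's text-classification tag, and the Korean input sentence is an invented placeholder.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("monologg/koelectra-base-v3-discriminator")
model = AutoModelForSequenceClassification.from_pretrained("JunHwi/kmhas_binary")

inputs = tokenizer("이것은 예시 문장입니다.", return_tensors="pt")  # "This is an example sentence."
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # class indices follow the label map below
```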
The label map is:
> {0: "not_hate_speech", 1: "hate_speech"} |
RazyDave/deberta-v3-base-finetuned-mrpc | RazyDave | 2022-12-16T02:49:39Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-16T02:21:36Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: deberta-v3-base-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: train
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8921568627450981
- name: F1
type: f1
value: 0.9241379310344827
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-finetuned-mrpc
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3297
- Accuracy: 0.8922
- F1: 0.9241
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 230 | 0.3411 | 0.8725 | 0.9081 |
| No log | 2.0 | 460 | 0.3297 | 0.8922 | 0.9241 |
| 0.3727 | 3.0 | 690 | 0.4133 | 0.8922 | 0.9236 |
| 0.3727 | 4.0 | 920 | 0.5315 | 0.8848 | 0.9174 |
| 0.1068 | 5.0 | 1150 | 0.5898 | 0.8848 | 0.9171 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
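A minimal paraphrase-detection sketch: MRPC is a sentence-pair task, so both sentences are passed to the tokenizer together. The example sentences and the stated label order are assumptions following the usual GLUE convention.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "RazyDave/deberta-v3-base-finetuned-mrpc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(
    "The company said quarterly profits rose sharply.",
    "Quarterly profits increased significantly, the company said.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # assumed order: [not equivalent, equivalent]
```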
|
Valdimarb13/whisper-small-icelandic | Valdimarb13 | 2022-12-16T02:44:03Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"generated_from_trainer",
"is",
"dataset:language-and-voice-lab/samromur_asr",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-15T15:04:39Z | ---
language:
- is
license: apache-2.0
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- language-and-voice-lab/samromur_asr
metrics:
- wer
model-index:
- name: Whisper Small Icelandic
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: samromur
type: language-and-voice-lab/samromur_asr
config: samromur_asr
split: test
args: 'split: test'
metrics:
- name: Wer
type: wer
value: 23.040907733651835
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Icelandic
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the samromur dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2613
- Wer: 23.0409
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3551 | 0.18 | 1000 | 0.4322 | 35.0421 |
| 0.2541 | 0.36 | 2000 | 0.3249 | 27.4721 |
| 0.231 | 0.53 | 3000 | 0.2781 | 24.2234 |
| 0.2277 | 0.71 | 4000 | 0.2613 | 23.0409 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
RazyDave/deberta-v3-base-finetuned-rte | RazyDave | 2022-12-16T02:12:07Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-16T01:40:02Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: deberta-v3-base-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: rte
split: train
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.8194945848375451
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-base-finetuned-rte
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8234
- Accuracy: 0.8195
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 156 | 0.5610 | 0.7545 |
| No log | 2.0 | 312 | 0.6270 | 0.7617 |
| No log | 3.0 | 468 | 0.6565 | 0.7906 |
| 0.3919 | 4.0 | 624 | 0.8234 | 0.8195 |
| 0.3919 | 5.0 | 780 | 0.9628 | 0.7978 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
ancillaire/ppo-LunarLander-v2 | ancillaire | 2022-12-16T01:35:06Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-16T01:34:32Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: -128.62 +/- 54.34
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
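A minimal loading sketch with `huggingface_sb3`. The checkpoint filename is an assumption (check the repository's file list), and the classic Gym API used by stable-baselines3 at the time is assumed.
```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Assumed filename; verify against the files in the model repository
checkpoint = load_from_hub(repo_id="ancillaire/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs = env.reset()
for _ in range(500):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```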
|
Tushybhutt/GlassBiff | Tushybhutt | 2022-12-16T01:19:28Z | 0 | 0 | null | [
"license:cc-by-sa-4.0",
"region:us"
] | null | 2022-12-14T15:09:31Z | ---
license: cc-by-sa-4.0
---
A stained-glass-themed embedding created with 8 vectors.
Textual Inversion embedding for SD 2.x, trained for 500 steps on twenty 768x768 images from various sources.
Install by downloading the embedding and putting it in the \embeddings folder.
Use keyword: GlassBiff



|
suyuanliu/wav2vec2-base-finetuned-stop-classification | suyuanliu | 2022-12-16T01:17:27Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:audiofolder",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | audio-classification | 2022-12-16T00:57:11Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- audiofolder
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-stop-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-stop-classification
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the audiofolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1647
- Accuracy: 0.9470
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.671 | 0.98 | 26 | 0.5553 | 0.8347 |
| 0.3525 | 1.98 | 52 | 0.2647 | 0.9163 |
| 0.291 | 2.98 | 78 | 0.2474 | 0.9070 |
| 0.2733 | 3.98 | 104 | 0.1729 | 0.9439 |
| 0.2467 | 4.98 | 130 | 0.1647 | 0.9470 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
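A minimal inference sketch via the `transformers` audio-classification pipeline; the audio path is a placeholder, and the label names come from the training data.
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="suyuanliu/wav2vec2-base-finetuned-stop-classification")
print(classifier("clip.wav"))  # placeholder path to a 16 kHz audio clip
```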
|
cleanrl/BeamRiderNoFrameskip-v4-dqn_atari_jax-seed1 | cleanrl | 2022-12-16T00:47:36Z | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"BeamRiderNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-16T00:47:28Z | ---
tags:
- BeamRiderNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: BeamRiderNoFrameskip-v4
type: BeamRiderNoFrameskip-v4
metrics:
- type: mean_reward
value: 5091.00 +/- 1923.97
name: mean_reward
verified: false
---
# (CleanRL) **DQN** Agent Playing **BeamRiderNoFrameskip-v4**
This is a trained model of a DQN agent playing BeamRiderNoFrameskip-v4.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/dqn_atari_jax.py).
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/BeamRiderNoFrameskip-v4-dqn_atari_jax-seed1/raw/main/dqn.py
curl -OL https://huggingface.co/cleanrl/BeamRiderNoFrameskip-v4-dqn_atari_jax-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/BeamRiderNoFrameskip-v4-dqn_atari_jax-seed1/raw/main/poetry.lock
poetry install --all-extras
python dqn_atari_jax.py --track --capture-video --save-model --upload-model --hf-entity cleanrl --env-id BeamRiderNoFrameskip-v4 --seed 1
```
# Hyperparameters
```python
{'batch_size': 32,
'buffer_size': 1000000,
'capture_video': True,
'end_e': 0.01,
'env_id': 'BeamRiderNoFrameskip-v4',
'exp_name': 'dqn_atari_jax',
'exploration_fraction': 0.1,
'gamma': 0.99,
'hf_entity': 'cleanrl',
'learning_rate': 0.0001,
'learning_starts': 80000,
'save_model': True,
'seed': 1,
'start_e': 1,
'target_network_frequency': 1000,
'total_timesteps': 10000000,
'track': True,
'train_frequency': 4,
'upload_model': True,
'wandb_entity': None,
'wandb_project_name': 'cleanRL'}
```
|
bitcloud2/q-Taxi-v3-hf-class | bitcloud2 | 2022-12-16T00:39:55Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T23:39:37Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-hf-class
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper from the Hugging Face Deep RL course materials
model = load_from_hub(repo_id="bitcloud2/q-Taxi-v3-hf-class", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
faisalabidi/rare-puppers | faisalabidi | 2022-12-16T00:35:12Z | 23 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-12-16T00:34:54Z | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: rare-puppers
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.1702127605676651
---
# rare-puppers
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### hyatt drinks

#### hyatt fitness

#### hyatt food

#### hyatt guestroom

#### hyatt pool

#### hyatt restaurant

#### hyatt suite living room
 |
ScrappyCoco666/q-Taxi-v3 | ScrappyCoco666 | 2022-12-16T00:09:00Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-16T00:08:51Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-6
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper from the Hugging Face Deep RL course materials
model = load_from_hub(repo_id="ScrappyCoco666/q-Taxi-v3-6", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
sasha/autotrain-butterfly-similarity-2490576840 | sasha | 2022-12-16T00:06:07Z | 25 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:sasha/autotrain-data-butterfly-similarity",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-12-15T23:48:47Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- sasha/autotrain-data-butterfly-similarity
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 21.263808199884835
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2490576840
- CO2 Emissions (in grams): 21.2638
## Validation Metrics
- Loss: 1.818
- Accuracy: 0.609
- Macro F1: 0.409
- Micro F1: 0.609
- Weighted F1: 0.559
- Macro Precision: 0.404
- Micro Precision: 0.609
- Weighted Precision: 0.542
- Macro Recall: 0.446
- Micro Recall: 0.609
- Weighted Recall: 0.609 |
haining/Taxi-v3-500x6 | haining | 2022-12-15T23:56:36Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T23:56:22Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3-500x6
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# `load_from_hub` is the pickle-loading helper from the Hugging Face Deep RL course materials
model = load_from_hub(repo_id="haining/Taxi-v3-500x6", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
gagan3012/swin_arocr_tiny | gagan3012 | 2022-12-15T23:50:37Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"swinv2",
"image-feature-extraction",
"masked-image-modeling",
"generated_from_trainer",
"dataset:hindawi",
"endpoints_compatible",
"region:us"
] | image-feature-extraction | 2022-12-15T23:45:22Z | ---
tags:
- masked-image-modeling
- generated_from_trainer
datasets:
- hindawi
model-index:
- name: swinv2_arocr_tiny_encoder
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swinv2_arocr_tiny_encoder
This model is a fine-tuned version of [/lustre07/scratch/gagan30/arocr/models/swinv2_arocr_tiny/config.json](https://huggingface.co//lustre07/scratch/gagan30/arocr/models/swinv2_arocr_tiny/config.json) on the /lustre07/scratch/gagan30/arocr/Hindawi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0519
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1337
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0891 | 1.0 | 8078 | 0.0628 |
| 0.0465 | 2.0 | 16156 | 0.0595 |
| 0.0639 | 3.0 | 24234 | 0.0570 |
| 0.0608 | 4.0 | 32312 | 0.0548 |
| 0.0487 | 5.0 | 40390 | 0.0554 |
| 0.059 | 6.0 | 48468 | 0.0533 |
| 0.0677 | 7.0 | 56546 | 0.0525 |
| 0.0555 | 8.0 | 64624 | 0.0521 |
| 0.0502 | 9.0 | 72702 | 0.0520 |
| 0.0496 | 10.0 | 80780 | 0.0519 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.12.0
- Datasets 2.7.1
- Tokenizers 0.11.6
|
DrishtiSharma/whisper-large-v2-lithuanian-400-steps | DrishtiSharma | 2022-12-15T23:25:47Z | 4 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"lt",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-15T21:34:01Z | ---
language:
- lt
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Large V2 Lithuanian- Drishti Sharma
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: lt
split: test
args: lt
metrics:
- name: Wer
type: wer
value: 26.152380196132924
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large V2 Lithuanian- Drishti Sharma
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2921
- Wer: 26.1524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 400
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2538 | 0.36 | 400 | 0.2921 | 26.1524 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
Seif/ppo-Huggy | Seif | 2022-12-15T23:03:45Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-15T23:03:33Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: Seif/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
ericntay/sd-class-butterflies-32 | ericntay | 2022-12-15T22:47:05Z | 0 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-15T22:18:00Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute 🦋.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('ericntay/sd-class-butterflies-32')
image = pipeline().images[0]
image
```
|
kejian/deliberate-awr | kejian | 2022-12-15T22:28:35Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:kejian/codeparrot-train-more-filter-3.3b-cleaned",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-12-15T09:23:40Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: deliberate-awr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deliberate-awr
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 12589
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True,
'skip_tokens': 1649934336},
'generation': {'batch_size': 128,
'every_n_steps': 512,
'force_call_on': [12589],
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_hits_threshold': 0,
'num_samples': 2048},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_hits_threshold': 0,
'num_samples': 2048,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'every_n_steps': 512,
'force_call_on': [12589],
'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': '9b71edc6c769705c1ef1955b6f5cfdd5a7d1b802',
'value_head_config': {'is_detached': False}},
'path_or_name': 'kejian/spectacular-awr'},
'objective': {'alpha': 0.05, 'beta': 1, 'name': 'AWR'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 128,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'deliberate-awr',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 12589,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649934336,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/2qh5z2cm |
AigizK/bashkir-whisper-small | AigizK | 2022-12-15T21:55:26Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"hf-asr-leaderboard",
"ba",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-13T12:16:49Z | ---
language:
- ba
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
- hf-asr-leaderboard
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Bashkir
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 ba
type: mozilla-foundation/common_voice_11_0
config: ba
split: test
args: ba
metrics:
- name: Wer
type: wer
value: 15.072300680807968
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Bashkir
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 ba dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2589
- Wer: 15.0723
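A minimal transcription sketch with the 🤗 Transformers pipeline; the audio path is a placeholder for any Bashkir speech file readable by ffmpeg:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="AigizK/bashkir-whisper-small")
# "audio.wav" is a placeholder; the pipeline resamples the input to 16 kHz internally.
print(asr("audio.wav")["text"])
```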
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 30000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.1637 | 1.01 | 2000 | 0.2555 | 26.4682 |
| 0.1375 | 2.01 | 4000 | 0.2223 | 21.5394 |
| 0.0851 | 3.02 | 6000 | 0.2086 | 19.6725 |
| 0.0573 | 4.02 | 8000 | 0.2178 | 18.4280 |
| 0.036 | 5.03 | 10000 | 0.2312 | 17.8248 |
| 0.0238 | 6.04 | 12000 | 0.2621 | 17.4096 |
| 0.0733 | 7.04 | 14000 | 0.2120 | 16.5656 |
| 0.0111 | 8.05 | 16000 | 0.2682 | 16.2291 |
| 0.0155 | 9.05 | 18000 | 0.2677 | 15.9242 |
| 0.0041 | 10.06 | 20000 | 0.3178 | 15.9534 |
| 0.0023 | 12.01 | 22000 | 0.3218 | 16.0536 |
| 0.0621 | 13.01 | 24000 | 0.2313 | 15.6169 |
| 0.0022 | 14.02 | 26000 | 0.2887 | 15.1083 |
| 0.0199 | 15.02 | 28000 | 0.2553 | 15.1848 |
| 0.0083 | 16.03 | 30000 | 0.2589 | 15.0723 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
farsipal/whisper-md-el-intlv-xs | farsipal | 2022-12-15T21:54:46Z | 16 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"hf-asr-leaderboard",
"greek",
"el",
"dataset:mozilla-foundation/common_voice_11_0",
"dataset:google/fleurs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-14T15:26:42Z | ---
language:
- el
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
- hf-asr-leaderboard
- automatic-speech-recognition
- greek
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
metrics:
- wer
model-index:
- name: whisper-md-el-intlv-xs
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: el
split: test
metrics:
- name: Wer
type: wer
value: 11.3670
---
# whisper-md-el-intlv-xs
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on interleaved mozilla-foundation/common_voice_11_0 (el) and the google/fleurs (el_gr) datasets. It achieves the following results on the mozilla-foundation/common_voice_11_0 test evaluation set:
- Loss: 0.4168
- Wer: 11.3670
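A minimal transcription sketch using the processor and model classes directly; the file name is a placeholder, and the audio is assumed to be loaded as 16 kHz mono:
```python
import librosa
from transformers import WhisperForConditionalGeneration, WhisperProcessor

model_id = "farsipal/whisper-md-el-intlv-xs"
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

# "greek_sample.wav" is a placeholder; Whisper expects 16 kHz mono input features.
waveform, _ = librosa.load("greek_sample.wav", sr=16000)
inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```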
## Model description
This model is trained over the two interleaved datasets in the Greek language. Testing used only the common_voice_11_0 (el) test split.
## Intended uses & limitations
The model was trained for transcription in Greek.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0251 | 2.49 | 1000 | 0.2216 | 12.5836 |
| 0.0051 | 4.98 | 2000 | 0.2874 | 12.2957 |
| 0.0015 | 7.46 | 3000 | 0.3281 | 11.9056 |
| 0.0017 | 9.95 | 4000 | 0.3178 | 12.5929 |
| 0.0008 | 12.44 | 5000 | 0.3449 | 11.9799 |
| 0.0001 | 14.93 | 6000 | 0.3638 | 11.7106 |
| 0.0001 | 17.41 | 7000 | 0.3910 | 11.4970 |
| 0.0 | 19.9 | 8000 | 0.4042 | 11.3949 |
| 0.0 | 22.39 | 9000 | 0.4129 | 11.4134 |
| 0.0 | 24.88 | 10000 | 0.4168 | 11.3670 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
GeneralAwareness/Unddep | GeneralAwareness | 2022-12-15T21:51:19Z | 0 | 12 | null | [
"stable-diffusion",
"v2",
"text-to-image",
"image-to-image",
"Embedding",
"en",
"license:cc-by-nc-sa-4.0",
"region:us"
] | text-to-image | 2022-12-14T07:54:36Z | ---
license: cc-by-nc-sa-4.0
language:
- en
thumbnail: "https://huggingface.co/GeneralAwareness/Unddep/resolve/main/with-1.png"
tags:
- stable-diffusion
- v2
- text-to-image
- image-to-image
- Embedding
---
Textual Inversion Embedding by General Awareness for SD 2.x, trained on 768x768 images from various sources.
Install it by downloading the .pt embedding and putting it in the \embeddings folder.
An undersea/underworld-themed embedding that was created with 16 vectors.
Use keyword: unddep
Without this embedding and with this embedding.


Without this embedding and with this embedding.

 |
bakisanlan/q-FrozenLake-v1-4x4-noSlippery | bakisanlan | 2022-12-15T21:49:13Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T21:48:58Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="bakisanlan/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
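`load_from_hub` above is the small helper used in the Deep RL course notebooks rather than a packaged function; a minimal sketch of such a helper, assuming the checkpoint is a pickled dictionary containing the Q-table and an `env_id` key as used above:
```python
import pickle

from huggingface_hub import hf_hub_download


def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download a pickled Q-learning checkpoint from the Hub and unpickle it."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```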
|
nefasto/whisper-small-it | nefasto | 2022-12-15T21:22:26Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"it",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-14T17:04:58Z | ---
language:
- it
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Italian
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 it
type: mozilla-foundation/common_voice_11_0
config: it
split: test
args: it
metrics:
- name: Wer
type: wer
value: 12.303981501169467
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Italian
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 it dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2534
- Wer: 12.3040
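A minimal transcription sketch with the 🤗 Transformers pipeline; `chunk_length_s` enables long-form audio, and the file path is a placeholder:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="nefasto/whisper-small-it",
    chunk_length_s=30,  # chunked inference for recordings longer than 30 seconds
)
# "intervista.mp3" is a placeholder for any Italian audio file readable by ffmpeg.
print(asr("intervista.mp3")["text"])
```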
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-06
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 6000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2737 | 2.01 | 1000 | 0.2728 | 13.4097 |
| 0.1536 | 4.02 | 2000 | 0.2611 | 12.9897 |
| 0.0905 | 6.03 | 3000 | 0.2686 | 12.9273 |
| 0.1301 | 8.04 | 4000 | 0.2534 | 12.3040 |
| 0.096 | 10.05 | 5000 | 0.2727 | 12.6130 |
| 0.0604 | 12.06 | 6000 | 0.2698 | 12.5027 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
Conflictx/Kipaki-EgyptianSciFi | Conflictx | 2022-12-15T21:17:44Z | 0 | 65 | null | [
"text-to-image",
"v2.0",
"Embedding",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2022-12-01T11:46:45Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
- v2.0
- Embedding
---
Textual Inversion Embedding by ConflictX for SD 2.0, trained on 768x768 images from Midjourney.
Install it by downloading the step version you want and putting it in the \embeddings folder.
It is slightly overfit at 150 steps, so some concepts/keywords will be harder to prompt for (use negatives or weight Kipaki down), but it works amazingly well for cityscapes, people, gods, and other sci-fi genres.
Very stylized toward ancient Egypt, sci-fi, and an orange/blue color scheme, but other concepts are definitely possible. More images here: https://imgur.com/a/W2bmBaV
Use keyword: Kipaki-xxx
where xxx is the step number of the embedding.
There are multiple versions; the images below were created with the 150-step version.







Highres Images:




|
alexgeh196/test_model_seminar_alex_123 | alexgeh196 | 2022-12-15T21:08:43Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-14T15:08:15Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: test_model_seminar_alex_123
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_model_seminar_alex_123
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5472
- Accuracy: 0.7447
- F1: 0.7451
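A minimal inference sketch; the example sentence is a placeholder and the label names depend on the `id2label` mapping stored with the checkpoint:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "alexgeh196/test_model_seminar_alex_123"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The seminar talk was surprisingly engaging.", return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
print({model.config.id2label[i]: round(float(p), 3) for i, p in enumerate(probs)})
```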
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Tokenizers 0.13.2
|
miangoar/esm2_t12_35M_UR50D-finetuned-secondary-structure-classification | miangoar | 2022-12-15T21:00:11Z | 10 | 0 | transformers | [
"transformers",
"tf",
"esm",
"token-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-15T20:59:58Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: esm2_t12_35M_UR50D-finetuned-secondary-structure-classification
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# esm2_t12_35M_UR50D-finetuned-secondary-structure-classification
This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4076
- Train Masked Accuracy: 0.8342
- Validation Loss: 0.4714
- Validation Masked Accuracy: 0.8060
- Epoch: 2
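A minimal per-residue prediction sketch with the TensorFlow classes (the checkpoint was trained with Keras); the protein sequence is a toy example and the label names depend on the `id2label` mapping stored with the checkpoint:
```python
import numpy as np
from transformers import AutoTokenizer, TFAutoModelForTokenClassification

model_id = "miangoar/esm2_t12_35M_UR50D-finetuned-secondary-structure-classification"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForTokenClassification.from_pretrained(model_id)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy protein sequence
inputs = tokenizer(sequence, return_tensors="tf")
logits = model(**inputs).logits
pred_ids = np.argmax(logits, axis=-1)[0]
# The first and last positions are special tokens (<cls>/<eos>), not residues.
print([model.config.id2label[int(i)] for i in pred_ids[1:-1]])
```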
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.0}
- training_precision: float32
### Training results
| Train Loss | Train Masked Accuracy | Validation Loss | Validation Masked Accuracy | Epoch |
|:----------:|:---------------------:|:---------------:|:--------------------------:|:-----:|
| 0.5874 | 0.7454 | 0.4908 | 0.7962 | 0 |
| 0.4503 | 0.8156 | 0.4703 | 0.8043 | 1 |
| 0.4076 | 0.8342 | 0.4714 | 0.8060 | 2 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
fabraz/ppo-LunarLander-v2 | fabraz | 2022-12-15T20:57:52Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T20:57:28Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 286.59 +/- 20.22
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is an assumed filename; use the checkpoint name listed in the repo.
checkpoint = load_from_hub("fabraz/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
LuniLand/dqn-LunarLander-v2 | LuniLand | 2022-12-15T20:40:47Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T20:40:16Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 168.44 +/- 106.68
name: mean_reward
verified: false
---
# **DQN** Agent playing **LunarLander-v2**
This is a trained model of a **DQN** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env LunarLander-v2 -orga LuniLand -f logs/
python enjoy.py --algo dqn --env LunarLander-v2 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env LunarLander-v2 -orga LuniLand -f logs/
rl_zoo3 enjoy --algo dqn --env LunarLander-v2 -f logs/
```
## Training (with the RL Zoo)
```
python train.py --algo dqn --env LunarLander-v2 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env LunarLander-v2 -f logs/ -orga LuniLand
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 50000),
('exploration_final_eps', 0.1),
('exploration_fraction', 0.12),
('gamma', 0.99),
('gradient_steps', -1),
('learning_rate', 0.00063),
('learning_starts', 0),
('n_timesteps', 100000.0),
('policy', 'MlpPolicy'),
('policy_kwargs', 'dict(net_arch=[256, 256])'),
('target_update_interval', 250),
('train_freq', 4),
('normalize', False)])
```
|
miangoar/esm2_t12_35M_UR50D-finetuned-cytosol-membrane-classification | miangoar | 2022-12-15T20:36:03Z | 4 | 0 | transformers | [
"transformers",
"tf",
"esm",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-15T20:35:45Z | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: esm2_t12_35M_UR50D-finetuned-cytosol-membrane-classification
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# esm2_t12_35M_UR50D-finetuned-cytosol-membrane-classification
This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1009
- Train Accuracy: 0.9684
- Validation Loss: 0.2122
- Validation Accuracy: 0.9401
- Epoch: 2
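A minimal sequence-level prediction sketch with the TensorFlow classes; the protein sequence is a toy example and the class names depend on the `id2label` mapping stored with the checkpoint:
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "miangoar/esm2_t12_35M_UR50D-finetuned-cytosol-membrane-classification"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

sequence = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy protein sequence
inputs = tokenizer(sequence, return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1).numpy()[0]
print({model.config.id2label[i]: float(p) for i, p in enumerate(probs)})
```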
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.0}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2464 | 0.9228 | 0.1954 | 0.9417 | 0 |
| 0.1428 | 0.9565 | 0.1831 | 0.9345 | 1 |
| 0.1009 | 0.9684 | 0.2122 | 0.9401 | 2 |
### Framework versions
- Transformers 4.25.1
- TensorFlow 2.9.2
- Datasets 2.7.1
- Tokenizers 0.13.2
|
bayartsogt/whisper-medium-mn-10 | bayartsogt | 2022-12-15T20:21:04Z | 18 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"hf-asr-leaderboard",
"generated_from_multiple_datasets",
"mn",
"dataset:mozilla-foundation/common_voice_11_0",
"dataset:google/fleurs",
"dataset:bayartsogt/ulaanbal-v0",
"dataset:bayartsogt/youtube-mongolian-v1",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-13T22:01:42Z | ---
language: mn
license: apache-2.0
tags:
- whisper-event
- hf-asr-leaderboard
- generated_from_multiple_datasets
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
- bayartsogt/ulaanbal-v0
- bayartsogt/youtube-mongolian-v1
metrics:
- wer
- cer
model-index:
- name: whisper-medium-mn-10
results:
- task:
type: automatic-speech-recognition
name: Automatic Speech Recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: mn
split: test
metrics:
- type: wer
value: 21.258466244264802
name: Wer
- type: cer
value: 6.875610660018193
name: Cer
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-mn-10
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on a combination of the Mongolian speech datasets listed above (Common Voice 11.0, Google FLEURS, Ulaanbal v0, and YouTube Mongolian v1).
It achieves the following results on the evaluation set:
- Loss: 0.2103
- Wer: 21.2585
- Cer: 6.8756
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 40000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Cer | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:-------:|:---------------:|:-------:|
| 0.4197 | 0.09 | 1000 | 19.0947 | 0.4462 | 53.9600 |
| 0.3288 | 0.17 | 2000 | 14.8016 | 0.3468 | 44.2102 |
| 0.2737 | 0.26 | 3000 | 12.3471 | 0.3020 | 36.1700 |
| 0.2558 | 0.35 | 4000 | 11.7171 | 0.2824 | 34.1709 |
| 0.2406 | 0.43 | 5000 | 10.3551 | 0.2594 | 31.1230 |
| 0.218 | 0.52 | 6000 | 9.7815 | 0.2452 | 29.6865 |
| 0.2253 | 0.61 | 7000 | 9.6712 | 0.2344 | 29.2932 |
| 0.2071 | 0.69 | 8000 | 9.4261 | 0.2283 | 28.5067 |
| 0.2051 | 0.78 | 9000 | 9.0656 | 0.2224 | 27.4033 |
| 0.2064 | 0.87 | 10000 | 8.7851 | 0.2138 | 26.7206 |
| 0.193 | 0.95 | 11000 | 8.5021 | 0.2089 | 25.5790 |
| 0.1577 | 1.04 | 12000 | 8.2873 | 0.2072 | 25.6118 |
| 0.1397 | 1.13 | 13000 | 8.2368 | 0.2046 | 25.1147 |
| 0.1526 | 1.21 | 14000 | 8.7615 | 0.2065 | 26.4638 |
| 0.1497 | 1.3 | 15000 | 7.9588 | 0.2004 | 24.4866 |
| 0.1569 | 1.39 | 16000 | 7.9554 | 0.1990 | 24.2244 |
| 0.1416 | 1.47 | 17000 | 7.8754 | 0.2001 | 24.2298 |
| 0.1371 | 1.56 | 18000 | 7.8072 | 0.1932 | 23.6072 |
| 0.1379 | 1.65 | 19000 | 7.5452 | 0.1916 | 23.1320 |
| 0.1305 | 1.73 | 20000 | 7.4290 | 0.1880 | 23.1101 |
| 0.1395 | 1.82 | 21000 | 7.4635 | 0.1877 | 22.9845 |
| 0.1418 | 1.91 | 22000 | 7.5907 | 0.1862 | 22.9080 |
| 0.1432 | 1.99 | 23000 | 7.4290 | 0.1847 | 22.7114 |
| 0.0965 | 2.08 | 24000 | 7.0399 | 0.1931 | 21.7391 |
| 0.0723 | 2.17 | 25000 | 7.2698 | 0.1961 | 22.3236 |
| 0.0773 | 2.25 | 26000 | 7.0752 | 0.1977 | 22.0505 |
| 0.0862 | 2.34 | 27000 | 7.0820 | 0.1959 | 21.9522 |
| 0.0739 | 2.43 | 28000 | 7.1494 | 0.1982 | 21.7719 |
| 0.0843 | 2.51 | 29000 | 7.1241 | 0.1963 | 21.8921 |
| 0.0734 | 2.6 | 30000 | 7.1317 | 0.1980 | 21.7883 |
| 0.0785 | 2.69 | 31000 | 7.1948 | 0.1955 | 21.8757 |
| 0.0691 | 2.77 | 32000 | 7.0938 | 0.1978 | 21.7446 |
| 0.0834 | 2.86 | 33000 | 7.0121 | 0.1953 | 21.3240 |
| 0.0675 | 2.95 | 34000 | 7.0769 | 0.1958 | 21.7719 |
| 0.042 | 3.03 | 35000 | 6.9624 | 0.2053 | 21.3404 |
| 0.0474 | 3.12 | 36000 | 7.0306 | 0.2097 | 21.5534 |
| 0.0428 | 3.21 | 37000 | 6.9809 | 0.2107 | 21.3185 |
| 0.0343 | 3.29 | 38000 | 6.9514 | 0.2111 | 21.3896 |
| 0.0378 | 3.38 | 39000 | 6.8756 | 0.2103 | 21.2585 |
| 0.0361 | 3.47 | 40000 | 6.9009 | 0.2106 | 21.3677 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
Jedalc/ppo-LunarLander-v2 | Jedalc | 2022-12-15T20:03:55Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T20:03:31Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 242.34 +/- 12.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption; check the repository's file list for the actual name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# "ppo-LunarLander-v2.zip" is an assumed filename; use the checkpoint name listed in the repo.
checkpoint = load_from_hub("Jedalc/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
alexamiredjibi/Multimodal-Trajectory-Classifier-30 | alexamiredjibi | 2022-12-15T20:02:33Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-12-15T19:00:55Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Multimodal-Trajectory-Classifier-30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Multimodal-Trajectory-Classifier-30
This model is a fine-tuned version of [alexamiredjibi/Multimodal-Trajectory-Classifier](https://huggingface.co/alexamiredjibi/Multimodal-Trajectory-Classifier) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
hendoo/q-FrozenLake-v1-4x4-noSlippery | hendoo | 2022-12-15T20:00:42Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T20:00:37Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="hendoo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
MinaAlmasi/dknews-NB-BERT-AI-classifier | MinaAlmasi | 2022-12-15T20:00:10Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:cc-by-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-11-18T11:16:56Z | ---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: dknews-NB-BERT-AI-classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dknews-NB-BERT-AI-classifier/
This model is a fine-tuned version of [NbAiLab/nb-bert-large](https://huggingface.co/NbAiLab/nb-bert-large) on a custom dataset with Danish News articles either generated by GPT-3 or a Danish journalist from a large Danish news media. The task is then to classify whether the article is written by GPT-3 (label = 0) or human (label = 1)
It achieves the following results on the evaluation set (the best model loaded i.e., after 2 epochs)
- Loss: 0.1804
- Accuracy: 0.9574
- F1: 0.9574
- Precision: 0.9576
- Recall: 0.9574
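A minimal inference sketch; the Danish snippet is a placeholder, and per the description above label 0 corresponds to GPT-3-generated text and label 1 to human-written text:
```python
from transformers import pipeline

clf = pipeline("text-classification", model="MinaAlmasi/dknews-NB-BERT-AI-classifier")
# Placeholder Danish news snippet; real articles are naturally much longer.
print(clf("Regeringen fremlagde i dag en ny plan for den grønne omstilling i Danmark."))
```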
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
The model is trained on Danish news articles that were either generated by a fine-tuned GPT-3 or written by a journalist at the large Danish news outlet TV2.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 2502
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.696 | 1.0 | 39 | 0.4926 | 0.8262 | 0.8211 | 0.8672 | 0.8262 |
| 0.4195 | 2.0 | 78 | 0.1804 | 0.9574 | 0.9574 | 0.9576 | 0.9574 |
| 0.1458 | 3.0 | 117 | 0.2810 | 0.9246 | 0.9241 | 0.9344 | 0.9246 |
| 0.0424 | 4.0 | 156 | 0.5893 | 0.8852 | 0.8838 | 0.9041 | 0.8852 |
| 0.0246 | 5.0 | 195 | 1.4776 | 0.7475 | 0.7301 | 0.8321 | 0.7475 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
daspartho/ppo-Huggy | daspartho | 2022-12-15T19:54:06Z | 2 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-15T19:53:56Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: daspartho/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
dzegan/unit2-taxi-Qtable-1 | dzegan | 2022-12-15T19:46:06Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T19:46:01Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: unit2-taxi-Qtable-1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.46 +/- 2.65
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="dzegan/unit2-taxi-Qtable-1", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
deepdml/whisper-medium-mix-fr | deepdml | 2022-12-15T19:39:29Z | 27 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"fr",
"dataset:mozilla-foundation/common_voice_11_0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-13T08:05:00Z | ---
language:
- fr
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: deepdml/whisper-medium-mix-fr
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 fr
type: mozilla-foundation/common_voice_11_0
config: fr
split: test
args: fr
metrics:
- name: Wer
type: wer
value: 11.227820307400155
- name: Cer
type: cer
value: 4.2141
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: FLEURS ASR
type: google/fleurs
config: fr_fr
split: test
args: fr
metrics:
- name: WER
type: wer
value: 9.3526
- name: Cer
type: cer
value: 4.144
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Multilingual LibriSpeech
type: facebook/multilingual_librispeech
config: french
split: test
args:
language: fr
metrics:
- name: WER
type: wer
value: 6.3468
- name: Cer
type: cer
value: 3.1561
- task:
type: Automatic Speech Recognition
name: speech-recognition
dataset:
name: VoxPopuli
type: facebook/voxpopuli
config: fr
split: test
args:
language: fr
metrics:
- name: WER
type: wer
value: 10.0653
- name: Cer
type: cer
value: 6.5456
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deepdml/whisper-medium-mix-fr
This model is a fine-tuned version of [deepdml/whisper-medium-mix-fr](https://huggingface.co/deepdml/whisper-medium-mix-fr) on the mozilla-foundation/common_voice_11_0, google/fleurs, facebook/multilingual_librispeech and facebook/voxpopuli datasets.
It achieves the following results on the evaluation set:
- Loss: 0.2599
- Wer: 11.2278
Using the [evaluation script](https://github.com/huggingface/community-events/blob/main/whisper-fine-tuning-event/run_eval_whisper_streaming.py) provided in the Whisper Sprint, the model achieves these results on the test sets (WER):
- **google/fleurs: 9.3526 %**
(python run_eval_whisper_streaming.py --model_id="deepdml/whisper-medium-mix-fr" --dataset="google/fleurs" --config="fr_fr" --device=0 --language="fr")
- **facebook/multilingual_librispeech: 6.3468 %**
(python run_eval_whisper_streaming.py --model_id="deepdml/whisper-medium-mix-fr" --dataset="facebook/multilingual_librispeech" --config="french" --device=0 --language="fr")
- **facebook/voxpopuli: 10.0653 %**
(python run_eval_whisper_streaming.py --model_id="deepdml/whisper-medium-mix-fr" --dataset="facebook/voxpopuli" --config="fr" --device=0 --language="fr")
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
Training data used:
- **mozilla-foundation/common_voice_11_0:** fr, train+validation
- **google/fleurs:** fr_fr, train
- **facebook/multilingual_librispeech:** french, train
- **facebook/voxpopuli:** fr, train
Evaluation is done on the test split of the mozilla-foundation/common_voice_11_0 dataset.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0855 | 0.25 | 1000 | 0.2826 | 12.4230 |
| 0.0569 | 0.5 | 2000 | 0.2768 | 11.9577 |
| 0.0724 | 0.75 | 3000 | 0.2670 | 11.6106 |
| 0.069 | 1.0 | 4000 | 0.2599 | 11.2278 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
Efimov6886/row4_98 | Efimov6886 | 2022-12-15T19:24:08Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:Efimov6886/autotrain-data-onlykaggle",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-12-15T19:22:55Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- Efimov6886/autotrain-data-onlykaggle
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 1.893737751807574
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2477076728
- CO2 Emissions (in grams): 1.8937
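A minimal inference sketch with the image-classification pipeline; the URL simply reuses one of the widget examples above:
```python
from transformers import pipeline

clf = pipeline("image-classification", model="Efimov6886/row4_98")
url = "https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg"
print(clf(url))
```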
## Validation Metrics
- Loss: 0.047
- Accuracy: 0.980
- Macro F1: 0.980
- Micro F1: 0.980
- Weighted F1: 0.980
- Macro Precision: 0.980
- Micro Precision: 0.980
- Weighted Precision: 0.980
- Macro Recall: 0.980
- Micro Recall: 0.980
- Weighted Recall: 0.980 |
LuniLand/ppo-Huggy | LuniLand | 2022-12-15T19:23:28Z | 29 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-15T19:23:20Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser:**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: LuniLand/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Efimov6886/row4_accu100 | Efimov6886 | 2022-12-15T19:22:21Z | 17 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:Efimov6886/autotrain-data-onlykaggle",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-12-15T19:21:39Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- Efimov6886/autotrain-data-onlykaggle
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.003935079874008164
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 2477076724
- CO2 Emissions (in grams): 0.0039
## Validation Metrics
- Loss: 0.021
- Accuracy: 0.990
- Macro F1: 0.990
- Micro F1: 0.990
- Weighted F1: 0.990
- Macro Precision: 0.990
- Micro Precision: 0.990
- Weighted Precision: 0.990
- Macro Recall: 0.990
- Micro Recall: 0.990
- Weighted Recall: 0.990 |
haining/sas_baseline | haining | 2022-12-15T19:21:26Z | 34 | 4 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"text2text generation",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2022-11-19T04:28:31Z | ---
inference:
parameters:
do_sample: true
max_length: 512
top_p: 0.9
repetition_penalty: 1.0
language:
- en
license: mit
metrics:
- sacrebleu
- bert_score
- rouge
- meteor
- sari
- ari
- "Automated Readability Index"
tags:
- "text2text generation"
task:
name: "scientific abstract simplification"
type: "text2text generation"
widget:
-
text: "summarize, simplify, and contextualize: The COVID-19 pandemic presented enormous data challenges in the United States. Policy makers, epidemiological modelers, and health researchers all require up-to-date data on the pandemic and relevant public behavior, ideally at fine spatial and temporal resolution. The COVIDcast API is our attempt to fill this need: Operational since April 2020, it provides open access to both traditional public health surveillance signals (cases, deaths, and hospitalizations) and many auxiliary indicators of COVID-19 activity, such as signals extracted from deidentified medical claims data, massive online surveys, cell phone mobility data, and internet search trends. These are available at a fine geographic resolution (mostly at the county level) and are updated daily. The COVIDcast API also tracks all revisions to historical data, allowing modelers to account for the frequent revisions and backfill that are common for many public health data sources. All of the data are available in a common format through the API and accompanying R and Python software packages. This paper describes the data sources and signals, and provides examples demonstrating that the auxiliary signals in the COVIDcast API present information relevant to tracking COVID activity, augmenting traditional public health reporting and empowering research and decision-making."
example_title: "covid-api paper, from PNAS"
-
text: "summarize, simplify, and contextualize: Potato mop-top virus (PMTV) is considered an emerging threat to potato production in the United States. PMTV is transmitted by a soil-borne protist, Spongospora subterranean. Rapid, accurate, and sensitive detection of PMTV in leaves and tubers is an essential component in PMTV management program. A rapid test that can be adapted to in-field, on-site testing with minimal sample manipulation could help in ensuring the sanitary status of the produce in situations such as certification programs and shipping point inspections. Toward that goal, a rapid and highly sensitive recombinase polymerase amplification (RPA)-based test was developed for PMTV detection in potato tubers. The test combines the convenience of RPA assay with a simple sample extraction procedure, making it amenable to rapid on-site diagnosis of PMTV. Furthermore, the assay was duplexed with a plant internal control to monitor sample extraction and RPA reaction performance. The method described could detect as little as 10 fg of PMTV RNA transcript in various potato tissues, the diagnostic limit of detection (LOQ) similar to that of traditional molecular methods."
example_title: "potato paper, from PLOS ONE"
-
text: "summarize, simplify, and contextualize: One of the most thrilling cultural experiences is to hear live symphony-orchestra music build up from a whispering passage to a monumental fortissimo. The impact of such a crescendo has been thought to depend only on the musicians’ skill, but here we show that interactions between the concert-hall acoustics and listeners’ hearing also play a major role in musical dynamics. These interactions contribute to the shoebox-type concert hall’s established success, but little prior research has been devoted to dynamic expression in this three-part transmission chain as a complete system. More forceful orchestral playing disproportionately excites high frequency harmonics more than those near the note’s fundamental. This effect results in not only more sound energy, but also a different tone color. The concert hall transmits this sound, and the room geometry defines from which directions acoustic reflections arrive at the listener. Binaural directional hearing emphasizes high frequencies more when sound arrives from the sides of the head rather than from the median plane. Simultaneously, these same frequencies are emphasized by higher orchestral-playing dynamics. When the room geometry provides reflections from these directions, the perceived dynamic range is enhanced. Current room-acoustic evaluation methods assume linear behavior and thus neglect this effect. The hypothesis presented here is that the auditory excitation by reflections is emphasized with an orchestra forte most in concert halls with strong lateral reflections. The enhanced dynamic range provides an explanation for the success of rectangularly shaped concert-hall geometry."
example_title: "music paper, from PNAS"
-
text: "summarize, simplify, and contextualize: Children in industrialized cultures typically succeed on Give-N, a test of counting ability, by age 4. On the other hand, counting appears to be learned much later in the Tsimane’, an indigenous group in the Bolivian Amazon. This study tests three hypotheses for what may cause this difference in timing: (a) Tsimane’ children may be shy in providing behavioral responses to number tasks, (b) Tsimane’ children may not memorize the verbal list of number words early in acquisition, and/or (c) home environments may not support mathematical learning in the same way as in US samples, leading Tsimane’ children to primarily acquire mathematics through formalized schooling. Our results suggest that most of our subjects are not inhibited by shyness in responding to experimental tasks. We also find that Tsimane’ children (N = 100, ages 4-11) learn the verbal list later than US children, but even upon acquiring this list, still take time to pass Give-N tasks. We find that performance in counting varies across tasks and is related to formal schooling. These results highlight the importance of formal education, including instruction in the count list, in learning the meanings of the number words."
example_title: "given-n paper, from PLOS ONE"
---
# TL;DR
**Our [full model](https://huggingface.co/haining/scientific_abstract_simplification) is out!🎉🎉🎉 It leverages the power of multi-instruction finetuning and beats the baseline by a margin. Use the [full model](https://huggingface.co/haining/scientific_abstract_simplification) unless the goal is comparison.**
Scientific Abstract Simplification rewrites hard-to-read scientific abstracts😵 into simpler yet relevant scientific stories😇. We hope our model can make scientific knowledge accessible for everyone🤗.
Try it now with the Hosted inference API on the right.
You can choose an existing example or paste in any (perhaps full-of-jargon) abstract. Remember to prepend the instruction to the abstract ("summarize, simplify, and contextualize: "; note the whitespace after the colon). For local use, see the [Usage](#Usage) section.
# Model Details
## Model Description
Open science has significantly lowered the barriers to scientific papers.
However, reachable research does not mean accessible knowledge. Scientific papers are usually replete with jargon and hard to read. A lay audience would rather trust little stories on social media than read scientific papers. They are not to blame; we humans like stories.
So why don't we "translate" arcane scientific abstracts into simpler yet relevant scientific stories?
Some renowned journals have already taken accessibility into consideration. For example, PNAS asks authors to submit Significance Statements targeting "an undergraduate-educated scientist." Science also includes an editor abstract for a quick dive.
We therefore propose to *rewrite scientific abstracts into understandable scientific stories using AI*.
To this end, we introduce a new corpus comprising PNAS abstract-significance pairs.
We finetune an encoder-decoder Transformer model (a variant of Flan-T5) with the corpus.
Our baseline model (SAS-baseline) shows promising capacity in simplifying and summarizing scientific abstracts.
We hope our work can pave the last mile of scientific understanding and let people better enjoy the fruits of open science.
As an ongoing effort, we are working on re-contextualizing abstracts for better storytelling and on avoiding certain jargon tokens during inference time for better readability.
<!-- We hypothesize the last mile of scientific understanding is cognitive. -->
- **Model type:** Language model
- **Developed by:**
- PIs: Jason Clark and Hannah McKelvey, Montana State University
- Fellow: Haining Wang, Indiana University Bloomington
- Collaborator: Zuoyu Tian, Indiana University Bloomington
- [LEADING](https://cci.drexel.edu/mrc/leading/) Montana State University Library, Project "TL;DR it": Automating Article Synopses for Search Engine Optimization and Citizen Science
- **Language(s) (NLP):** English
- **License:** MIT
- **Parent Model:** [FLAN-T5-large](https://huggingface.co/google/flan-t5-large)
# Usage
Use the code below to get started with the model. Remember to prepend the `INSTRUCTION` for best performance.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
INSTRUCTION = "summarize, simplify, and contextualize: "
tokenizer = AutoTokenizer.from_pretrained("haining/sas_baseline")
model = AutoModelForSeq2SeqLM.from_pretrained("haining/sas_baseline")
input_text = "The COVID-19 pandemic presented enormous data challenges in the United States. Policy makers, epidemiological modelers, and health researchers all require up-to-date data on the pandemic and relevant public behavior, ideally at fine spatial and temporal resolution. The COVIDcast API is our attempt to fill this need: Operational since April 2020, it provides open access to both traditional public health surveillance signals (cases, deaths, and hospitalizations) and many auxiliary indicators of COVID-19 activity, such as signals extracted from deidentified medical claims data, massive online surveys, cell phone mobility data, and internet search trends. These are available at a fine geographic resolution (mostly at the county level) and are updated daily. The COVIDcast API also tracks all revisions to historical data, allowing modelers to account for the frequent revisions and backfill that are common for many public health data sources. All of the data are available in a common format through the API and accompanying R and Python software packages. This paper describes the data sources and signals, and provides examples demonstrating that the auxiliary signals in the COVIDcast API present information relevant to tracking COVID activity, augmenting traditional public health reporting and empowering research and decision-making."
encoding = tokenizer(INSTRUCTION + input_text,
max_length=672,
padding='max_length',
truncation=True,
return_tensors='pt')
decoded_ids = model.generate(input_ids=encoding['input_ids'],
attention_mask=encoding['attention_mask'],
max_length=512,
top_p=.9,
do_sample=True)
print(tokenizer.decode(decoded_ids[0], skip_special_tokens=True))
```
# Training
## Data
For SAS-baseline, we finetuned the Flan-T5 model with the Scientific Abstract-Significance (SAS) corpus.
| Scientific Abstract-Significance | # Training/Dev/Test Samples | # Training Tokens | # Validation Tokens | # Test Tokens | Automated Readability Index (std.) |
|----------------------------------|-----------------------------|-------------------|---------------------|---------------|------------------------------------|
| Abstract | 3030/200/200 | 707,071 | 45,697 | 46,985 | 18.68 (2.85) |
| Significance | 3030/200/200 | 375,433 | 24,901 | 24,426 | 17.89 (3.05) |
## Setup
We finetuned the base model with a standard language modeling objective: the abstracts are sources and the significance statements are targets. We inform the model with a task-specific prefix ("summarize, simplify, and contextualize: ") during training. The training took roughly 9 hours on two NVIDIA RTX A5000 (24GB memory each) GPUs. We saved the checkpoint with the lowest validation loss for inference. We used the AdamW optimizer and a learning rate of 3e-5 with a fully sharded data parallel strategy. The model (\~780M parameters) was trained on Nov. 20, 2022.
Note that the reading level (ARI) of the significance statements is generally lower than that of the abstracts, but not by a large margin. Our upcoming SAS-full model will leverage more corpora for scientific (re)contextualization, summarization, and simplification.
# Evaluation
The model is evaluated on the SAS test set using the following metrics.
## Metrics
- [SacreBLEU](https://huggingface.co/spaces/evaluate-metric/sacrebleu): SacreBLEU provides hassle-free computation of shareable, comparable, and reproducible BLEU scores. Inspired by Rico Sennrich’s multi-bleu-detok.perl, it produces the official WMT scores but works with plain text. It also knows all the standard test sets and handles downloading, processing, and tokenization for you.
- [BERTScore](https://huggingface.co/spaces/evaluate-metric/bertscore): BERTScore leverages the pre-trained contextual embeddings from BERT and matches words in candidate and reference sentences by cosine similarity. It has been shown to correlate with human judgment on sentence-level and system-level evaluation. Moreover, BERTScore computes precision, recall, and F1 measure, which can be useful for evaluating different language generation tasks.
- [ROUGE](https://huggingface.co/spaces/evaluate-metric/rouge)-1/2/L: ROUGE is a set of metrics and a software package used for evaluating automatic summarization and machine translation software in natural language processing. The metrics compare an automatically produced summary or translation against a reference or a set of (human-produced) reference summaries or translations.
- [METEOR](https://huggingface.co/spaces/evaluate-metric/meteor): METEOR, an automatic metric for machine translation evaluation that is based on a generalized concept of unigram matching between the machine-produced translation and human-produced reference translations. Unigrams can be matched based on their surface forms, stemmed forms, and meanings; furthermore, METEOR can be easily extended to include more advanced matching strategies. Once all generalized unigram matches between the two strings have been found, METEOR computes a score for this matching using a combination of unigram-precision, unigram-recall, and a measure of fragmentation that is designed to directly capture how well-ordered the matched words in the machine translation are in relation to the reference.
- [SARI](https://huggingface.co/spaces/evaluate-metric/sari): SARI is a metric used for evaluating automatic text simplification systems. The metric compares the predicted simplified sentences against the reference and the source sentences. It explicitly measures the goodness of words that are added, deleted and kept by the system. Sari = (F1_add + F1_keep + P_del) / 3 where F1_add: n-gram F1 score for add operation F1_keep: n-gram F1 score for keep operation P_del: n-gram precision score for delete operation n = 4, as in the original paper.
- [The Automated Readability Index (ARI)](https://www.readabilityformulas.com/automated-readability-index.php): ARI is a readability test designed to assess the understandability of a text. Like other popular readability formulas, the ARI formula outputs a number which approximates the grade level needed to comprehend the text. For example, if the ARI outputs the number 10, this equates to a high school student, ages 15-16 years old; a number 3 means students in 3rd grade (ages 8-9 yrs. old) should be able to comprehend the text.
Implementations of SacreBLEU, BERTScore, ROUGE, METEOR, and SARI are from Hugging Face [`evaluate`](https://pypi.org/project/evaluate/) v0.3.0. ARI is from [`py-readability-metrics`](https://pypi.org/project/py-readability-metrics/) v1.4.5.
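A sketch of how these metrics can be computed with `evaluate`, using illustrative strings only (the exact evaluation script and aggregation behind the reported numbers may differ):

```python
import evaluate

sources = ["an example scientific abstract ..."]
predictions = ["a generated lay summary ..."]
references = ["the reference significance statement ..."]

results = {
    "sacrebleu": evaluate.load("sacrebleu").compute(predictions=predictions,
                                                    references=[[r] for r in references]),
    "bertscore": evaluate.load("bertscore").compute(predictions=predictions,
                                                    references=references, lang="en"),
    "rouge": evaluate.load("rouge").compute(predictions=predictions, references=references),
    "meteor": evaluate.load("meteor").compute(predictions=predictions, references=references),
    "sari": evaluate.load("sari").compute(sources=sources, predictions=predictions,
                                          references=[[r] for r in references]),
}
print(results)
```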
## Results
We tested our model on the SAS test set (200 samples), generating 10 lay summaries from each sample's abstract. During generation, we used top-p sampling with p=0.9. The mean performance across these generations is reported below.
| Metrics | SAS-baseline |
|----------------|-------------------|
| SacreBLEU↑ | 18.43 |
| BERT Score F1↑ | 89.31 |
| ROUGE-1↑       | 48.14             |
| ROUGE-2↑       | 22.96             |
| ROUGE-L↑       | 32.29             |
| METEOR↑ | 39.04 |
| SARI↑ | 46.68 |
| ARI↓ | 17.27 |
Note: 1. Some generated texts are too short (less than 100 words) to calculate a meaningful ARI. We therefore concatenated every five adjacent texts and computed ARI on the resulting 400 longer texts (instead of the original 2,000 texts). 2. BERTScore, ROUGE, and METEOR are multiplied by 100.
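For reference, a minimal sketch of the ARI computation described in note 1 (`generated_texts` is an assumed list holding the model outputs):

```python
from readability import Readability  # py-readability-metrics

# Concatenate five adjacent generations so the text clears the library's 100-word minimum
batch = " ".join(generated_texts[:5])
print(Readability(batch).ari().score)
```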
# Contact
Please [contact us](mailto:[email protected]) for any questions or suggestions.
# Disclaimer
This model is created for making scientific abstracts more accessible. Its outputs should not be used or trusted outside of its scope. There is no guarantee that the generated text is perfectly aligned with the research. Resort to human experts or original papers when a decision is critical.
# Acknowledgement
This research is supported by the Institute of Museum and Library Services (IMLS) RE-246450-OLS-20.
|
Herrydaniel/distilbert-base-uncased-finetuned-squad | Herrydaniel | 2022-12-15T19:03:46Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2022-12-15T16:01:05Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1619
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2251 | 1.0 | 5533 | 1.1670 |
| 0.9612 | 2.0 | 11066 | 1.1385 |
| 0.758 | 3.0 | 16599 | 1.1619 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
tzvc/1a765cc4-702d-4d60-bdf9-df352c214b7b | tzvc | 2022-12-15T19:00:30Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-12-15T18:30:36Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: "[V]"
---
### training params
```json
{
"pretrained_model_name_or_path": "runwayml/stable-diffusion-v1-5",
"instance_data_dir": "./1a765cc4-702d-4d60-bdf9-df352c214b7b/instance_data",
"class_data_dir": "./class_data/a-portrait-of-a-person",
"output_dir": "./1a765cc4-702d-4d60-bdf9-df352c214b7b/",
"train_text_encoder": true,
"with_prior_preservation": false,
"prior_loss_weight": 1.0,
"instance_prompt": "[V]",
"class_prompt": "a portrait of a person",
"resolution": 512,
"train_batch_size": 1,
"gradient_accumulation_steps": 2,
"gradient_checkpointing": true,
"use_8bit_adam": true,
"learning_rate": 2e-06,
"lr_scheduler": "polynomial",
"lr_warmup_steps": 0,
"num_class_images": 200,
"max_train_steps": 1190,
"mixed_precision": "fp16"
}
```
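A minimal sketch for sampling from the fine-tuned weights, assuming the repository stores a standard diffusers pipeline (as the `StableDiffusionPipeline` tag suggests); `[V]` is the instance prompt from the config above.

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "tzvc/1a765cc4-702d-4d60-bdf9-df352c214b7b", torch_dtype=torch.float16
).to("cuda")
image = pipe("[V]").images[0]
image.save("sample.png")
```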
|
GV05/sd-anime-64 | GV05 | 2022-12-15T18:58:31Z | 5 | 0 | diffusers | [
"diffusers",
"pytorch",
"unconditional-image-generation",
"diffusion-models-class",
"license:mit",
"diffusers:DDPMPipeline",
"region:us"
] | unconditional-image-generation | 2022-12-15T18:57:21Z | ---
license: mit
tags:
- pytorch
- diffusers
- unconditional-image-generation
- diffusion-models-class
---
# Model Card for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class)
This model is a diffusion model for unconditional image generation of cute ANIME FACES.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('GV05/sd-anime-64')
image = pipeline().images[0]
image
```
|
SirVeggie/wlop-nixeu-robutts | SirVeggie | 2022-12-15T18:55:39Z | 0 | 6 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2022-11-03T00:04:02Z | ---
license: creativeml-openrail-m
---
Artist 1: WLOP\
Patreon: https://www.patreon.com/wlop/posts
Artist 2: Nixeu\
Patreon: https://www.patreon.com/nixeu/posts
Artist 3: Cutesexyrobutts\
Patreon: https://www.patreon.com/cutesexyrobutts
## Basic explanation
Token words are what guide the AI to produce images similar to the trained style/object/character.
Include any mix of these words in the prompt to produce varying results, or exclude them to have a less pronounced effect.
There is usually at least a slight stylistic effect even without the words, but it is recommended to include at least one.
Adding token word/phrase at the start of the prompt produces results most similar to the trained concept, but they can be included elsewhere as well.
## Model info
model: wlop-nixeu-robutts\
tokens: m-wlop, m-nixeu, m-robutts\
base: waifu diffusion 1.3-full\
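Example prompt (illustrative only), placing one of the token words at the start as described above:

```
m-wlop, portrait of a woman with flowing hair, dramatic lighting, highly detailed
```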
|
kpriyanshu256/whisper-large-v2-as-75-32-1e-05-bn | kpriyanshu256 | 2022-12-15T18:48:06Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"as",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-15T17:31:08Z | ---
language:
- as
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: openai/whisper-large-v2-Assamese
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 11.0
type: mozilla-foundation/common_voice_11_0
config: as
split: test
args: as
metrics:
- name: Wer
type: wer
value: 60.99981952716116
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-large-v2-Assamese
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1749
- Wer: 60.9998
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- training_steps: 75
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2123 | 1.0 | 75 | 1.1749 | 60.9998 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
admarcosai/taxi-v3-qlearning_200 | admarcosai | 2022-12-15T18:41:14Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T18:41:07Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3-qlearning_200
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="dmarcos/taxi-v3-qlearning_200", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
luisgasco/biomedical-roberta-finetuned-cantemist-test | luisgasco | 2022-12-15T18:32:51Z | 6 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:cantemist-ner",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | 2022-12-15T18:19:23Z | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- cantemist-ner
metrics:
- f1
model-index:
- name: biomedical-roberta-finetuned-cantemist-test
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: cantemist-ner
type: cantemist-ner
config: CantemistNer
split: train
args: CantemistNer
metrics:
- name: F1
type: f1
value: 0.8379235519946587
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biomedical-roberta-finetuned-cantemist-test
This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es-cantemist](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es-cantemist) on the cantemist-ner dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0597
- F1: 0.8379
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0015 | 1.0 | 607 | 0.0597 | 0.8379 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
admarcosai/taxi-v3-qlearning | admarcosai | 2022-12-15T18:31:59Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T18:31:54Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3-qlearning
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="dmarcos/taxi-v3-qlearning", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
hazrulakmal/ppo-LunarLander-v2 | hazrulakmal | 2022-12-15T18:31:17Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T18:30:49Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 288.90 +/- 19.59
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check this repo's files for the exact name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; use the actual .zip checkpoint listed in this repository's files
checkpoint = load_from_hub(repo_id="hazrulakmal/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
vitorhgomes/q-Taxi-v3-2 | vitorhgomes | 2022-12-15T18:23:52Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T18:23:46Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3-2
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="vitorhgomes/q-Taxi-v3-2", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
admarcosai/q-FrozenLake-v1-4x4-noSlippery | admarcosai | 2022-12-15T18:09:28Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T18:09:25Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="dmarcos/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
alexamiredjibi/Multimodal-Trajectory-Classifier | alexamiredjibi | 2022-12-15T17:17:00Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2022-12-14T21:37:08Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Multimodal-Trajectory-Classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Multimodal-Trajectory-Classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
vitorhgomes/q-FrozenLake-v1-4x4-noSlippery | vitorhgomes | 2022-12-15T17:16:28Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T17:16:23Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="vitorhgomes/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
maxspad/nlp-qual-q1 | maxspad | 2022-12-15T17:10:17Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2022-12-15T16:55:54Z | # Model Card for nlp-qual-q1
<!-- Provide a quick summary of what the model is/does. [Optional] -->
QuAL Score Q1 Subscore
# Table of Contents
- [Model Card for nlp-qual-q1](#model-card-for--model_id-)
- [Table of Contents](#table-of-contents)
- [Model Details](#model-details)
- [Model Description](#model-description)
- [Uses](#uses)
- [Direct Use](#direct-use)
- [Downstream Use [Optional]](#downstream-use-optional)
- [Out-of-Scope Use](#out-of-scope-use)
- [Bias, Risks, and Limitations](#bias-risks-and-limitations)
- [Recommendations](#recommendations)
- [Training Details](#training-details)
- [Training Data](#training-data)
- [Training Procedure](#training-procedure)
- [Preprocessing](#preprocessing)
- [Speeds, Sizes, Times](#speeds-sizes-times)
- [Evaluation](#evaluation)
- [Testing Data, Factors & Metrics](#testing-data-factors--metrics)
- [Testing Data](#testing-data)
- [Factors](#factors)
- [Metrics](#metrics)
- [Results](#results)
- [Model Examination](#model-examination)
- [Environmental Impact](#environmental-impact)
- [Technical Specifications [optional]](#technical-specifications-optional)
- [Model Architecture and Objective](#model-architecture-and-objective)
- [Compute Infrastructure](#compute-infrastructure)
- [Hardware](#hardware)
- [Software](#software)
- [Citation](#citation)
- [Glossary [optional]](#glossary-optional)
- [More Information [optional]](#more-information-optional)
- [Model Card Authors [optional]](#model-card-authors-optional)
- [Model Card Contact](#model-card-contact)
- [How to Get Started with the Model](#how-to-get-started-with-the-model)
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is/does. -->
QuAL Score Q1 Subscore
- **Developed by:** More information needed
- **Shared by [Optional]:** More information needed
- **Model type:** Language model
- **Language(s) (NLP):** en
- **License:** unknown
- **Parent Model:** More information needed
- **Resources for more information:** More information needed
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
## Downstream Use [Optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
<!-- If the user enters content, print that. If not, but they enter a task in the list, use that. If neither, say "more info needed." -->
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
More information on training data needed
## Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
More information needed
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
More information needed
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
More information needed
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
More information needed
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
More information needed
## Results
More information needed
# Model Examination
More information needed
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
More information needed
# Citation
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
More information needed
**APA:**
More information needed
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
<!-- This section provides another layer of transparency and accountability. Whose views is this model card representing? How many voices were included in its construction? Etc. -->
More information needed
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
More information needed
</details> |
abhishek/autotrain-butterflies-new-17716423 | abhishek | 2022-12-15T17:04:25Z | 15 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:abhishek/autotrain-data-butterflies-new",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-12-15T14:41:05Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- abhishek/autotrain-data-butterflies-new
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 185.36475571171792
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 17716423
- CO2 Emissions (in grams): 185.3648
## Validation Metrics
- Loss: 3.193
- Accuracy: 0.460
- Macro F1: 0.146
- Micro F1: 0.460
- Weighted F1: 0.392
- Macro Precision: 0.145
- Micro Precision: 0.460
- Weighted Precision: 0.360
- Macro Recall: 0.166
- Micro Recall: 0.460
- Weighted Recall: 0.460 |
AymanMansour/Whisper-Sudanese-Dialect-large-v2 | AymanMansour | 2022-12-15T16:57:40Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-12-15T09:45:20Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: openai/whisper-large-v2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# openai/whisper-large-v2
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9317
- Wer: 41.0267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.5167 | 1.08 | 1000 | 0.7033 | 67.2465 |
| 0.0886 | 3.04 | 2000 | 0.7730 | 51.1880 |
| 0.0808 | 4.12 | 3000 | 0.7812 | 52.5880 |
| 0.0232 | 6.08 | 4000 | 0.8798 | 40.8570 |
| 0.001 | 8.04 | 5000 | 0.9317 | 41.0267 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
emilios/whisper-medium-el | emilios | 2022-12-15T16:55:48Z | 51 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"hf-asr-leaderboard",
"whisper-medium",
"mozilla-foundation/common_voice_11_0",
"greek",
"whisper-event",
"generated_from_trainer",
"el",
"dataset:mozilla-foundation/common_voice_11_0",
"dataset:google/fleurs",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2022-11-09T11:40:02Z | ---
language:
- el
license: apache-2.0
tags:
- hf-asr-leaderboard
- whisper-medium
- mozilla-foundation/common_voice_11_0
- greek
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Medium El Greco
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0
type: mozilla-foundation/common_voice_11_0
config: el
split: test
metrics:
- name: Wer
type: wer
value: 10.7448
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium El Greco
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 11.0 dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.4245
- eval_wer: 10.7448
- eval_runtime: 1107.1212
- eval_samples_per_second: 1.532
- eval_steps_per_second: 0.096
- epoch: 33.98
- step: 7000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 7000
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
abhishek/autotrain-butterflies-new-17716424 | abhishek | 2022-12-15T16:52:39Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:abhishek/autotrain-data-butterflies-new",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-12-15T14:41:02Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- abhishek/autotrain-data-butterflies-new
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 111.21012328795237
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 17716424
- CO2 Emissions (in grams): 111.2101
## Validation Metrics
- Loss: 4.305
- Accuracy: 0.317
- Macro F1: 0.043
- Micro F1: 0.317
- Weighted F1: 0.224
- Macro Precision: 0.044
- Micro Precision: 0.317
- Weighted Precision: 0.192
- Macro Recall: 0.053
- Micro Recall: 0.317
- Weighted Recall: 0.317 |
Graylien/ppo-LunarLander-v2 | Graylien | 2022-12-15T16:48:00Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T16:47:35Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 253.01 +/- 11.43
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check this repo's files for the exact name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; use the actual .zip checkpoint listed in this repository's files
checkpoint = load_from_hub(repo_id="Graylien/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
abhishek/autotrain-butterflies-new-17716422 | abhishek | 2022-12-15T16:40:50Z | 14 | 0 | transformers | [
"transformers",
"pytorch",
"autotrain",
"vision",
"image-classification",
"dataset:abhishek/autotrain-data-butterflies-new",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | image-classification | 2022-12-15T14:41:02Z | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- abhishek/autotrain-data-butterflies-new
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 138.53332005624384
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 17716422
- CO2 Emissions (in grams): 138.5333
## Validation Metrics
- Loss: 2.762
- Accuracy: 0.496
- Macro F1: 0.204
- Micro F1: 0.496
- Weighted F1: 0.438
- Macro Precision: 0.199
- Micro Precision: 0.496
- Weighted Precision: 0.409
- Macro Recall: 0.230
- Micro Recall: 0.496
- Weighted Recall: 0.496 |
Thiefwerty/ppo-LunarLander-v2 | Thiefwerty | 2022-12-15T16:36:57Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T16:15:10Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 222.98 +/- 20.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

|
rin2401/ppo-LunarLander-v2 | rin2401 | 2022-12-15T16:33:53Z | 2 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T16:33:28Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.51 +/- 21.87
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check this repo's files for the exact name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; use the actual .zip checkpoint listed in this repository's files
checkpoint = load_from_hub(repo_id="rin2401/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
wanxiangche/q-FrozenLake-v1-4x4-noSlippery | wanxiangche | 2022-12-15T16:07:53Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T16:07:46Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="wanxiangche/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
wooihen/ppo-LunarLander-v2-TEST | wooihen | 2022-12-15T16:00:11Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T15:59:45Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 262.79 +/- 27.06
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check this repo's files for the exact name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; use the actual .zip checkpoint listed in this repository's files
checkpoint = load_from_hub(repo_id="wooihen/ppo-LunarLander-v2-TEST", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
lnros/Taxi-v3 | lnros | 2022-12-15T15:58:32Z | 0 | 0 | null | [
"Taxi-v3-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T15:54:34Z | ---
tags:
- Taxi-v3-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3-4x4-no_slippery
type: Taxi-v3-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="lnros/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
aabayomi/Taxi-v3 | aabayomi | 2022-12-15T15:56:36Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T15:56:10Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.56 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="aabayomi/Taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
greedypiggy/ppo-LunarLander-v2 | greedypiggy | 2022-12-15T15:56:13Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T15:55:42Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 243.78 +/- 26.03
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check this repo's files for the exact name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; use the actual .zip checkpoint listed in this repository's files
checkpoint = load_from_hub(repo_id="greedypiggy/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
harikc456/q-FrozenLake-v1-4x4-noSlippery | harikc456 | 2022-12-15T15:51:28Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T15:49:25Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="harikc456/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
evaluate_agent(env, model["max_steps"], model["n_eval_episodes"], model["qtable"], model["eval_seed"])
```
|
ubiest/ppo-Huggy | ubiest | 2022-12-15T15:46:00Z | 7 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-15T15:45:24Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: ubiest/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
cmenasse/ppo-Huggy | cmenasse | 2022-12-15T15:42:02Z | 12 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"unity-ml-agents",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2022-12-15T15:41:52Z |
---
tags:
- unity-ml-agents
- ml-agents
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
library_name: ml-agents
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**.
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy
2. Step 1: Write your model_id: cmenasse/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
betbhai9/betbhailogin | betbhai9 | 2022-12-15T15:36:45Z | 0 | 0 | null | [
"region:us"
] | null | 2022-12-15T15:35:14Z | [Betbhai 9 login id](https://betbhai9.app/) is a website that has been in business for a long time and provides a variety of services, including [Betbhai 9 login id](https://betbhai9.app/)services. Our site is a great site and we tell you why in this Betbhai 9 login id India review. We also provide a services review, which will show you how convenient and competitive their banking and sports offerings are, as well as how prompt their customer support is. |
harikc456/lunar-lander-v2-ppo | harikc456 | 2022-12-15T15:19:32Z | 4 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T14:34:30Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.85 +/- 18.15
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename below is an assumption — check this repo's files for the exact name):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed; use the actual .zip checkpoint listed in this repository's files
checkpoint = load_from_hub(repo_id="harikc456/lunar-lander-v2-ppo", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
clp/setfit-ethos-multilabel-example | clp | 2022-12-15T15:18:05Z | 5 | 0 | sentence-transformers | [
"sentence-transformers",
"pytorch",
"mpnet",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | sentence-similarity | 2022-12-15T15:17:52Z | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 230 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 230,
"warmup_steps": 23,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
ntinosmg/taxi-v3 | ntinosmg | 2022-12-15T15:16:35Z | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T15:16:31Z | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.54 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3** .
## Usage
```python
model = load_from_hub(repo_id="ntinosmg/taxi-v3", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
SuburbanLion/q-FrozenLake-v1-4x4-noSlippery | SuburbanLion | 2022-12-15T15:12:56Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T15:12:50Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="SuburbanLion/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
dbaibak/q-FrozenLake-v1-4x4-noSlippery | dbaibak | 2022-12-15T15:11:57Z | 0 | 1 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T15:09:01Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** .
## Usage
```python
model = load_from_hub(repo_id="dbaibak/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
tomekkorbak/stupefied_brattain | tomekkorbak | 2022-12-15T15:04:40Z | 0 | 0 | null | [
"generated_from_trainer",
"en",
"dataset:tomekkorbak/pii-pile-chunk3-0-50000",
"dataset:tomekkorbak/pii-pile-chunk3-50000-100000",
"dataset:tomekkorbak/pii-pile-chunk3-100000-150000",
"dataset:tomekkorbak/pii-pile-chunk3-150000-200000",
"dataset:tomekkorbak/pii-pile-chunk3-200000-250000",
"dataset:tomekkorbak/pii-pile-chunk3-250000-300000",
"dataset:tomekkorbak/pii-pile-chunk3-300000-350000",
"dataset:tomekkorbak/pii-pile-chunk3-350000-400000",
"dataset:tomekkorbak/pii-pile-chunk3-400000-450000",
"dataset:tomekkorbak/pii-pile-chunk3-450000-500000",
"dataset:tomekkorbak/pii-pile-chunk3-500000-550000",
"dataset:tomekkorbak/pii-pile-chunk3-550000-600000",
"dataset:tomekkorbak/pii-pile-chunk3-600000-650000",
"dataset:tomekkorbak/pii-pile-chunk3-650000-700000",
"dataset:tomekkorbak/pii-pile-chunk3-700000-750000",
"dataset:tomekkorbak/pii-pile-chunk3-750000-800000",
"dataset:tomekkorbak/pii-pile-chunk3-800000-850000",
"dataset:tomekkorbak/pii-pile-chunk3-850000-900000",
"dataset:tomekkorbak/pii-pile-chunk3-900000-950000",
"dataset:tomekkorbak/pii-pile-chunk3-950000-1000000",
"dataset:tomekkorbak/pii-pile-chunk3-1000000-1050000",
"dataset:tomekkorbak/pii-pile-chunk3-1050000-1100000",
"dataset:tomekkorbak/pii-pile-chunk3-1100000-1150000",
"dataset:tomekkorbak/pii-pile-chunk3-1150000-1200000",
"dataset:tomekkorbak/pii-pile-chunk3-1200000-1250000",
"dataset:tomekkorbak/pii-pile-chunk3-1250000-1300000",
"dataset:tomekkorbak/pii-pile-chunk3-1300000-1350000",
"dataset:tomekkorbak/pii-pile-chunk3-1350000-1400000",
"dataset:tomekkorbak/pii-pile-chunk3-1400000-1450000",
"dataset:tomekkorbak/pii-pile-chunk3-1450000-1500000",
"dataset:tomekkorbak/pii-pile-chunk3-1500000-1550000",
"dataset:tomekkorbak/pii-pile-chunk3-1550000-1600000",
"dataset:tomekkorbak/pii-pile-chunk3-1600000-1650000",
"dataset:tomekkorbak/pii-pile-chunk3-1650000-1700000",
"dataset:tomekkorbak/pii-pile-chunk3-1700000-1750000",
"dataset:tomekkorbak/pii-pile-chunk3-1750000-1800000",
"dataset:tomekkorbak/pii-pile-chunk3-1800000-1850000",
"dataset:tomekkorbak/pii-pile-chunk3-1850000-1900000",
"dataset:tomekkorbak/pii-pile-chunk3-1900000-1950000",
"license:mit",
"region:us"
] | null | 2022-12-15T15:04:22Z | ---
language:
- en
license: mit
tags:
- generated_from_trainer
datasets:
- tomekkorbak/pii-pile-chunk3-0-50000
- tomekkorbak/pii-pile-chunk3-50000-100000
- tomekkorbak/pii-pile-chunk3-100000-150000
- tomekkorbak/pii-pile-chunk3-150000-200000
- tomekkorbak/pii-pile-chunk3-200000-250000
- tomekkorbak/pii-pile-chunk3-250000-300000
- tomekkorbak/pii-pile-chunk3-300000-350000
- tomekkorbak/pii-pile-chunk3-350000-400000
- tomekkorbak/pii-pile-chunk3-400000-450000
- tomekkorbak/pii-pile-chunk3-450000-500000
- tomekkorbak/pii-pile-chunk3-500000-550000
- tomekkorbak/pii-pile-chunk3-550000-600000
- tomekkorbak/pii-pile-chunk3-600000-650000
- tomekkorbak/pii-pile-chunk3-650000-700000
- tomekkorbak/pii-pile-chunk3-700000-750000
- tomekkorbak/pii-pile-chunk3-750000-800000
- tomekkorbak/pii-pile-chunk3-800000-850000
- tomekkorbak/pii-pile-chunk3-850000-900000
- tomekkorbak/pii-pile-chunk3-900000-950000
- tomekkorbak/pii-pile-chunk3-950000-1000000
- tomekkorbak/pii-pile-chunk3-1000000-1050000
- tomekkorbak/pii-pile-chunk3-1050000-1100000
- tomekkorbak/pii-pile-chunk3-1100000-1150000
- tomekkorbak/pii-pile-chunk3-1150000-1200000
- tomekkorbak/pii-pile-chunk3-1200000-1250000
- tomekkorbak/pii-pile-chunk3-1250000-1300000
- tomekkorbak/pii-pile-chunk3-1300000-1350000
- tomekkorbak/pii-pile-chunk3-1350000-1400000
- tomekkorbak/pii-pile-chunk3-1400000-1450000
- tomekkorbak/pii-pile-chunk3-1450000-1500000
- tomekkorbak/pii-pile-chunk3-1500000-1550000
- tomekkorbak/pii-pile-chunk3-1550000-1600000
- tomekkorbak/pii-pile-chunk3-1600000-1650000
- tomekkorbak/pii-pile-chunk3-1650000-1700000
- tomekkorbak/pii-pile-chunk3-1700000-1750000
- tomekkorbak/pii-pile-chunk3-1750000-1800000
- tomekkorbak/pii-pile-chunk3-1800000-1850000
- tomekkorbak/pii-pile-chunk3-1850000-1900000
- tomekkorbak/pii-pile-chunk3-1900000-1950000
model-index:
- name: stupefied_brattain
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# stupefied_brattain
This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
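If the weights were pushed under the author's namespace, a minimal loading sketch could look like the following. The repo id is an assumption inferred from the card's `hub_model_id` and author; the tokenizer path comes from the config at the end of this card.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Repo id is an assumption inferred from hub_model_id ('stupefied_brattain') and the author;
# the tokenizer path ('gpt2') is taken from the config at the end of this card.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("tomekkorbak/stupefied_brattain")

inputs = tokenizer("Hello, world", return_tensors="pt")
outputs = model.generate(**inputs, max_length=32, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0]))
```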
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 12588
- mixed_precision_training: Native AMP
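A sketch of how these values would map onto `TrainingArguments`; the numbers are copied from the list above and the output directory from the config below, everything else is an assumption.

```python
from transformers import TrainingArguments

# Values copied from the hyperparameter list above and the full config below;
# anything not listed there is left at its default and is an assumption.
args = TrainingArguments(
    output_dir="training_output2",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,
    lr_scheduler_type="linear",
    warmup_ratio=0.01,
    max_steps=12588,
    seed=42,
    fp16=True,
)
```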
### Framework versions
- Transformers 4.24.0
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000',
'tomekkorbak/pii-pile-chunk3-50000-100000',
'tomekkorbak/pii-pile-chunk3-100000-150000',
'tomekkorbak/pii-pile-chunk3-150000-200000',
'tomekkorbak/pii-pile-chunk3-200000-250000',
'tomekkorbak/pii-pile-chunk3-250000-300000',
'tomekkorbak/pii-pile-chunk3-300000-350000',
'tomekkorbak/pii-pile-chunk3-350000-400000',
'tomekkorbak/pii-pile-chunk3-400000-450000',
'tomekkorbak/pii-pile-chunk3-450000-500000',
'tomekkorbak/pii-pile-chunk3-500000-550000',
'tomekkorbak/pii-pile-chunk3-550000-600000',
'tomekkorbak/pii-pile-chunk3-600000-650000',
'tomekkorbak/pii-pile-chunk3-650000-700000',
'tomekkorbak/pii-pile-chunk3-700000-750000',
'tomekkorbak/pii-pile-chunk3-750000-800000',
'tomekkorbak/pii-pile-chunk3-800000-850000',
'tomekkorbak/pii-pile-chunk3-850000-900000',
'tomekkorbak/pii-pile-chunk3-900000-950000',
'tomekkorbak/pii-pile-chunk3-950000-1000000',
'tomekkorbak/pii-pile-chunk3-1000000-1050000',
'tomekkorbak/pii-pile-chunk3-1050000-1100000',
'tomekkorbak/pii-pile-chunk3-1100000-1150000',
'tomekkorbak/pii-pile-chunk3-1150000-1200000',
'tomekkorbak/pii-pile-chunk3-1200000-1250000',
'tomekkorbak/pii-pile-chunk3-1250000-1300000',
'tomekkorbak/pii-pile-chunk3-1300000-1350000',
'tomekkorbak/pii-pile-chunk3-1350000-1400000',
'tomekkorbak/pii-pile-chunk3-1400000-1450000',
'tomekkorbak/pii-pile-chunk3-1450000-1500000',
'tomekkorbak/pii-pile-chunk3-1500000-1550000',
'tomekkorbak/pii-pile-chunk3-1550000-1600000',
'tomekkorbak/pii-pile-chunk3-1600000-1650000',
'tomekkorbak/pii-pile-chunk3-1650000-1700000',
'tomekkorbak/pii-pile-chunk3-1700000-1750000',
'tomekkorbak/pii-pile-chunk3-1750000-1800000',
'tomekkorbak/pii-pile-chunk3-1800000-1850000',
'tomekkorbak/pii-pile-chunk3-1850000-1900000',
'tomekkorbak/pii-pile-chunk3-1900000-1950000'],
'filter_threshold': 0.000286,
'is_split_by_sentences': True,
'skip_tokens': 1649999872},
'generation': {'force_call_on': [25177],
'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048}],
'scorer_config': {}},
'kl_gpt3_callback': {'force_call_on': [25177],
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': '9e6c78543a6ff1e4089002c38864d5a9cf71ec90'},
'path_or_name': 'tomekkorbak/nervous_wozniak'},
'objective': {'name': 'MLE'},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 128,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'stupefied_brattain',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0001,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output2',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25177,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649999872,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/1p2767nv |
sanjin7/distilbert-base-uncased_proba | sanjin7 | 2022-12-15T15:04:14Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-12-15T15:01:16Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased_proba
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_proba
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
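Pending details from the author, a minimal fill-mask sketch; the repo id is taken from this listing and the example sentence is only illustrative.

```python
from transformers import pipeline

# Repo id taken from the listing above; the example sentence is only illustrative.
fill = pipeline("fill-mask", model="sanjin7/distilbert-base-uncased_proba")
for pred in fill("Paris is the [MASK] of France."):
    print(pred["token_str"], round(pred["score"], 3))
```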
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.25.1
- Pytorch 1.14.0.dev20221202
- Datasets 2.7.1
- Tokenizers 0.13.2
|
tayfen/rl_course_1 | tayfen | 2022-12-15T15:00:59Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T15:00:08Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 241.25 +/- 20.45
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's files for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption; check the repo's files for the exact checkpoint name.
checkpoint = load_from_hub("tayfen/rl_course_1", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|
tzvc/b04a0039-8a77-4468-98db-73928b38c382 | tzvc | 2022-12-15T14:25:48Z | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2022-12-15T13:44:05Z | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: a portrait of [V]
---
### training params
```json
{
"pretrained_model_name_or_path": "runwayml/stable-diffusion-v1-5",
"instance_data_dir": "./b04a0039-8a77-4468-98db-73928b38c382/instance_data",
"class_data_dir": "./class_data/a-portrait-of-a-person",
"output_dir": "./b04a0039-8a77-4468-98db-73928b38c382/",
"train_text_encoder": true,
"with_prior_preservation": true,
"prior_loss_weight": 1.0,
"instance_prompt": "a portrait of [V]",
"class_prompt": "a portrait of a person",
"resolution": 512,
"train_batch_size": 1,
"gradient_accumulation_steps": 2,
"gradient_checkpointing": true,
"use_8bit_adam": true,
"learning_rate": 2e-06,
"lr_scheduler": "constant",
"lr_warmup_steps": 0,
"num_class_images": 200,
"max_train_steps": 1050,
"mixed_precision": "fp16"
}
```
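Assuming the fine-tuned weights were exported in diffusers format to this repo, generation with the instance prompt might look like the sketch below; the prompt matches `instance_prompt` from the params above.

```python
import torch
from diffusers import StableDiffusionPipeline

# Repo id from this card; the prompt reuses the instance token from the training params.
pipe = StableDiffusionPipeline.from_pretrained(
    "tzvc/b04a0039-8a77-4468-98db-73928b38c382", torch_dtype=torch.float16
).to("cuda")
image = pipe("a portrait of [V]", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("portrait.png")
```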
|
kejian/curious-rwr | kejian | 2022-12-15T14:12:06Z | 1 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"generated_from_trainer",
"en",
"dataset:kejian/codeparrot-train-more-filter-3.3b-cleaned",
"license:apache-2.0",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2022-12-14T01:24:27Z | ---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- kejian/codeparrot-train-more-filter-3.3b-cleaned
model-index:
- name: curious-rwr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# curious-rwr
This model was trained from scratch on the kejian/codeparrot-train-more-filter-3.3b-cleaned dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
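A minimal generation sketch: the tokenizer path comes from the config below, and loading the checkpoint with the plain GPT-2 architecture is an assumption, since the training setup attaches a value head.

```python
from transformers import pipeline

# Tokenizer path is taken from the config below; loading with the standard GPT-2
# architecture is an assumption (the training setup attaches a value head).
generator = pipeline(
    "text-generation",
    model="kejian/curious-rwr",
    tokenizer="codeparrot/codeparrot-small",
)
print(generator("def fibonacci(n):", max_length=64, do_sample=True, top_p=0.9)[0]["generated_text"])
```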
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 12588
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.23.0
- Pytorch 1.13.0+cu116
- Datasets 2.0.0
- Tokenizers 0.12.1
# Full config
{'dataset': {'datasets': ['kejian/codeparrot-train-more-filter-3.3b-cleaned'],
'is_split_by_sentences': True,
'skip_tokens': 1649999872},
'generation': {'batch_size': 128,
'every_n_steps': 512,
'force_call_on': [12588],
'metrics_configs': [{}, {'n': 1}, {}],
'scenario_configs': [{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 640,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_hits_threshold': 0,
'num_samples': 2048},
{'display_as_html': True,
'generate_kwargs': {'do_sample': True,
'eos_token_id': 0,
'max_length': 272,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'functions',
'num_hits_threshold': 0,
'num_samples': 2048,
'prompts_path': 'resources/functions_csnet.jsonl',
'use_prompt_for_scoring': True}],
'scorer_config': {}},
'kl_gpt3_callback': {'every_n_steps': 512,
'force_call_on': [12588],
'gpt3_kwargs': {'model_name': 'code-cushman-001'},
'max_tokens': 64,
'num_samples': 4096},
'model': {'from_scratch': False,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'model_kwargs': {'revision': 'c38e2b6acf17781918d39a310ee1adc4674a8225',
'value_head_config': {'is_detached': False}},
'path_or_name': 'kejian/mighty-rwr'},
'objective': {'alpha': 1, 'beta': 10, 'name': 'AWR'},
'tokenizer': {'path_or_name': 'codeparrot/codeparrot-small'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 128,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'curious-rwr',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000.0,
'output_dir': 'training_output',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 12588,
'save_strategy': 'steps',
'seed': 42,
'tokens_already_seen': 1649999872,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/kejian/uncategorized/runs/39mf4btg |
gpfl/lunarlander | gpfl | 2022-12-15T14:07:07Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2022-12-15T14:06:20Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 270.23 +/- 16.35
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename is an assumption; check the repository's files for the exact name):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is an assumption; check the repo's files for the exact checkpoint name.
checkpoint = load_from_hub("gpfl/lunarlander", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
|