| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (sequence) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
jeff-RQ/blip2-opt-6.7b-coco | jeff-RQ | 2023-07-09T05:10:48Z | 8 | 2 | transformers | [
"transformers",
"pytorch",
"blip-2",
"visual-question-answering",
"vision",
"image-to-text",
"image-captioning",
"en",
"arxiv:2301.12597",
"license:mit",
"endpoints_compatible",
"region:us"
] | image-to-text | 2023-07-09T05:02:54Z | ---
language: en
license: mit
tags:
- vision
- image-to-text
- image-captioning
- visual-question-answering
pipeline_tag: image-to-text
duplicated_from: Salesforce/blip2-opt-6.7b-coco
---
# BLIP-2, OPT-6.7b, fine-tuned on COCO
BLIP-2 model leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters), fine-tuned on COCO.
It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).
Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.
The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen
while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings,
which bridge the gap between the embedding space of the image encoder and the large language model.
The goal of the model is simply to predict the next text token, given the query embeddings and the previous text.
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg"
alt="drawing" width="600"/>
This allows the model to be used for tasks like:
- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model
## Direct Use and Downstream Use
You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for
fine-tuned versions on a task that interests you.
## Bias, Risks, Limitations, and Ethical Considerations
BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.
> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.
>
BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/)) collected from the internet. As a result, the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.
BLIP2 has not been tested in real-world applications. It should not be directly deployed in any application. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context in which it will be deployed.
### How to use
For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
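A minimal captioning sketch, assuming a PyTorch environment with `transformers`, `Pillow`, and `requests` installed (the example image URL is only illustrative and is not part of the original card):
```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Load the processor and the COCO-fine-tuned BLIP-2 (OPT-6.7b) checkpoint.
# Note: this is a 6.7B-parameter model; loading it requires substantial memory.
processor = Blip2Processor.from_pretrained("jeff-RQ/blip2-opt-6.7b-coco")
model = Blip2ForConditionalGeneration.from_pretrained("jeff-RQ/blip2-opt-6.7b-coco")

# Any RGB image works; this URL is just an example
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Image-only input produces a caption; adding a text prompt turns it into VQA
inputs = processor(images=image, return_tensors="pt")
generated_ids = model.generate(**inputs, max_new_tokens=20)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0].strip())
```
|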
luhx/Reinforce-CartPole-v1 | luhx | 2023-07-09T05:09:01Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-09T05:08:52Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 486.50 +/- 40.50
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
Winmodel/ppo-Huggy | Winmodel | 2023-07-09T05:04:35Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-09T05:04:30Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
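If you want to pull the uploaded run artifacts (ONNX policy, config, TensorBoard logs) to your machine first, one minimal download sketch uses `huggingface_hub` (assuming a recent version; the local directory name is arbitrary):
```python
from huggingface_hub import snapshot_download

# Download every file in the Winmodel/ppo-Huggy repository into a local folder
snapshot_download(repo_id="Winmodel/ppo-Huggy", local_dir="./ppo-Huggy")
```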
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Winmodel/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
BauyrjanQ/whisper-kk | BauyrjanQ | 2023-07-09T04:14:39Z | 74 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-07T09:49:25Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-kk
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-kk
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1070
- Wer: 24.8145
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
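For reference, the hyperparameters above correspond roughly to the following `transformers` training arguments (a sketch only; `Seq2SeqTrainingArguments` and the `output_dir` value are assumptions, and the Adam settings are the library defaults):
```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the values listed above; other settings keep their library defaults
training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-kk",          # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
)
```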
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1912 | 0.46 | 1000 | 0.1793 | 31.2210 |
| 0.1314 | 0.92 | 2000 | 0.1307 | 20.8113 |
| 0.096 | 1.38 | 3000 | 0.1136 | 28.8680 |
| 0.0845 | 1.84 | 4000 | 0.1070 | 24.8145 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu117
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Drawzipink/AesopCarlV2 | Drawzipink | 2023-07-09T03:59:29Z | 0 | 0 | null | [
"license:openrail",
"region:us"
] | null | 2023-07-09T03:38:01Z | ---
license: openrail
---
***Note***: This model was made using Yuki Hirai's interpretation of Aesop Carl from the game Identity V in the unofficial stage play.
Should he see this and ask that anything made using this model be taken down, I ask that you oblige.
This model is for fun and personal use only.
Thank you. |
PhantasyMaker/Kate | PhantasyMaker | 2023-07-09T03:55:50Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-09T03:55:50Z | ---
license: creativeml-openrail-m
---
|
NasimB/gpt2-concat-aochildes-length-15k | NasimB | 2023-07-09T03:36:06Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-09T01:38:55Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-aochildes-length-15k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-aochildes-length-15k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1875
## Model description
More information needed
## Intended uses & limitations
More information needed
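Pending fuller documentation, a minimal text-generation sketch (assuming the standard `transformers` pipeline API; the prompt is only an example):
```python
from transformers import pipeline

# Load this checkpoint as a causal-LM text-generation pipeline
generator = pipeline("text-generation", model="NasimB/gpt2-concat-aochildes-length-15k")
print(generator("Once upon a time", max_new_tokens=30)[0]["generated_text"])
```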
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.7208 | 0.29 | 500 | 5.6413 |
| 5.3798 | 0.59 | 1000 | 5.2022 |
| 5.026 | 0.88 | 1500 | 4.9544 |
| 4.7535 | 1.18 | 2000 | 4.8031 |
| 4.5938 | 1.47 | 2500 | 4.6839 |
| 4.4847 | 1.76 | 3000 | 4.5811 |
| 4.3568 | 2.06 | 3500 | 4.5046 |
| 4.1613 | 2.35 | 4000 | 4.4593 |
| 4.1394 | 2.65 | 4500 | 4.4021 |
| 4.0897 | 2.94 | 5000 | 4.3497 |
| 3.874 | 3.24 | 5500 | 4.3454 |
| 3.8331 | 3.53 | 6000 | 4.3191 |
| 3.8104 | 3.82 | 6500 | 4.2890 |
| 3.6885 | 4.12 | 7000 | 4.2909 |
| 3.5369 | 4.41 | 7500 | 4.2866 |
| 3.5339 | 4.71 | 8000 | 4.2735 |
| 3.5159 | 5.0 | 8500 | 4.2598 |
| 3.3458 | 5.29 | 9000 | 4.2780 |
| 3.3397 | 5.59 | 9500 | 4.2764 |
| 3.3365 | 5.88 | 10000 | 4.2765 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
FinalIroha/Ryuuou_no_Oshigoto_SoVITS4.1_Model | FinalIroha | 2023-07-09T03:27:29Z | 3 | 0 | transformers | [
"transformers",
"license:cc-by-nc-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | 2023-07-08T11:30:12Z | ---
license: cc-by-nc-sa-4.0
---
# SoVITS 4.1 multi-speaker model for Ryuuou no Oshigoto! (The Ryuo's Work is Never Done!)
<!-- Provide a quick summary of what the model is/does. -->
This model was generated with [SoVITS4.1](https://github.com/svc-develop-team/so-vits-svc/blob/4.1-Stable/).
## Model character names
<!-- Provide a quick summary of what the model is/does. -->
- **Yaichi Kuzuryuu:** 九頭竜八一/九头龙八一 (CV: Yuma Uchida)
- **Ai Hinatsuru:** 雛鶴あい/雏鹤爱 (CV: Rina Hidaka)
- **Ai Yashajin:** 夜叉神天衣/夜叉神天衣 (CV: Ayane Sakura)
- **Ginko Sora:** 空銀子/空银子 (CV: Hisako Kanemoto)
- **Keika Kiyotaki:** 清滝桂香/清泷桂香 (CV: Ai Kayano)
- **Mio Mizukoshi:** 水越澪/水越澪 (CV: Yurika Kubo)
- **Ayano Sadatou:** 貞任綾乃/贞任绫乃 (CV: Chinami Hashimoto)
- **Charlotte Izoard:** シャルロット・イゾアール/夏洛特·伊索亚尔 (CV: Yui Ogura) |
Splend1dchan/h-p-test | Splend1dchan | 2023-07-09T03:24:07Z | 50 | 0 | transformers | [
"transformers",
"pytorch",
"gpt_neox",
"text2text-generation",
"generated_from_trainer",
"dataset:arrow",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-09T03:17:38Z | ---
tags:
- generated_from_trainer
datasets:
- arrow
model-index:
- name: hubert-pythia-70m_librispeech.train.mix
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hubert-pythia-70m_librispeech.train.mix
This model is a fine-tuned version of [speechmix/pythia-70m-test](https://huggingface.co/speechmix/pythia-70m-test) on the arrow dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 50000
### Training results
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.0+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
jordyvl/vit-base_rvl-cdip | jordyvl | 2023-07-09T02:43:51Z | 165 | 0 | transformers | [
"transformers",
"pytorch",
"vit",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-07T12:36:14Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: vit-base_rvl-cdip
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-base_rvl-cdip
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5535
- Accuracy: 0.897
- Brier Loss: 0.1768
- Nll: 1.0978
- F1 Micro: 0.897
- F1 Macro: 0.8972
- Ece: 0.0801
- Aurc: 0.0180
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
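Note that the effective batch size follows from gradient accumulation: 4 samples per device × 16 accumulation steps = 64, matching `total_train_batch_size` above. A sketch of the equivalent `transformers` `TrainingArguments` (values taken from the list; `output_dir` is a placeholder):
```python
from transformers import TrainingArguments

# 4 samples per step, accumulated over 16 steps -> effective batch size of 64
training_args = TrainingArguments(
    output_dir="vit-base_rvl-cdip",   # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=10,
)
```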
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Brier Loss | Nll | F1 Micro | F1 Macro | Ece | Aurc |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:----------:|:------:|:--------:|:--------:|:------:|:------:|
| 0.676 | 1.0 | 5000 | 0.6451 | 0.8230 | 0.2574 | 1.2627 | 0.8230 | 0.8237 | 0.0458 | 0.0425 |
| 0.4207 | 2.0 | 10000 | 0.4251 | 0.8766 | 0.1800 | 1.2821 | 0.8766 | 0.8779 | 0.0154 | 0.0218 |
| 0.3335 | 3.0 | 15000 | 0.3914 | 0.8861 | 0.1676 | 1.2589 | 0.8861 | 0.8858 | 0.0252 | 0.0192 |
| 0.2447 | 4.0 | 20000 | 0.3687 | 0.8934 | 0.1574 | 1.2243 | 0.8934 | 0.8937 | 0.0331 | 0.0164 |
| 0.1623 | 5.0 | 25000 | 0.3843 | 0.8976 | 0.1583 | 1.1553 | 0.8976 | 0.8973 | 0.0461 | 0.0159 |
| 0.1083 | 6.0 | 30000 | 0.4131 | 0.8964 | 0.1624 | 1.1514 | 0.8964 | 0.8967 | 0.0581 | 0.0163 |
| 0.0652 | 7.0 | 35000 | 0.4633 | 0.8966 | 0.1690 | 1.1300 | 0.8966 | 0.8967 | 0.0692 | 0.0169 |
| 0.0361 | 8.0 | 40000 | 0.5068 | 0.8976 | 0.1723 | 1.1161 | 0.8976 | 0.8976 | 0.0737 | 0.0175 |
| 0.0192 | 9.0 | 45000 | 0.5418 | 0.8982 | 0.1748 | 1.1015 | 0.8982 | 0.8983 | 0.0779 | 0.0179 |
| 0.0111 | 10.0 | 50000 | 0.5535 | 0.897 | 0.1768 | 1.0978 | 0.897 | 0.8972 | 0.0801 | 0.0180 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1.post200
- Datasets 2.9.0
- Tokenizers 0.13.2
|
syberkrime99/angiestwn | syberkrime99 | 2023-07-09T02:13:54Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-09T02:11:30Z | ---
license: creativeml-openrail-m
---
|
sachiniyer/tweet_toxicity | sachiniyer | 2023-07-09T02:13:20Z | 128 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"dataset:jigsaw_toxicity_pred",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-04-23T09:37:30Z | ---
datasets:
- jigsaw_toxicity_pred
metrics:
- accuracy
- bertscore
--- |
espnet/Wangyou_Zhang_wsj0_2mix_train_enh_tse_td_speakerbeam_raw | espnet | 2023-07-09T01:59:33Z | 3 | 0 | espnet | [
"espnet",
"audio",
"audio-to-audio",
"en",
"dataset:wsj0_2mix",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | audio-to-audio | 2023-07-09T01:25:48Z | ---
tags:
- espnet
- audio
- audio-to-audio
language: en
datasets:
- wsj0_2mix
license: cc-by-4.0
---
## ESPnet2 ENH model
### `espnet/Wangyou_Zhang_wsj0_2mix_train_enh_tse_td_speakerbeam_raw`
This model was trained by Wangyou Zhang using the wsj0_2mix recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
pip install -e .
cd egs2/wsj0_2mix/tse1
./run.sh --skip_data_prep false --skip_train true --is_tse_task true --download_model espnet/Wangyou_Zhang_wsj0_2mix_train_enh_tse_td_speakerbeam_raw
```
<!-- Generated by ./scripts/utils/show_enh_score.sh -->
# RESULTS
## Environments
- date: `Sun Jul 9 09:23:16 CST 2023`
- python version: `3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]`
- espnet version: `espnet 202301`
- pytorch version: `pytorch 2.0.1`
- Git hash: ``
- Commit date: ``
## enh_train_enh_tse_td_speakerbeam_raw
config: conf/tuning/train_enh_tse_td_speakerbeam.yaml
|dataset|PESQ_NB|STOI|SAR|SDR|SIR|SI_SNR|
|---|---|---|---|---|---|---|
|enhanced_cv_min_8k|3.54|96.41|18.75|18.75|0.00|18.37|
|enhanced_tt_min_8k|3.46|96.35|17.51|17.51|0.00|17.11|
## ENH config
<details><summary>expand</summary>
```
config: conf/tuning/train_enh_tse_td_speakerbeam.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/enh_train_enh_tse_td_speakerbeam_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
skip_stats_npz: false
max_epoch: 100
patience: 20
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- snr
- max
- - valid
- loss
- min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
create_graph_in_tensorboard: false
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 4
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_tr_min_8k_cv_min_8k_8k/train/speech_mix_shape
- exp/enh_stats_tr_min_8k_cv_min_8k_8k/train/speech_ref1_shape
- exp/enh_stats_tr_min_8k_cv_min_8k_8k/train/enroll_ref1_shape
valid_shape_file:
- exp/enh_stats_tr_min_8k_cv_min_8k_8k/valid/speech_mix_shape
- exp/enh_stats_tr_min_8k_cv_min_8k_8k/valid/speech_ref1_shape
- exp/enh_stats_tr_min_8k_cv_min_8k_8k/valid/enroll_ref1_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 32000
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
chunk_excluded_key_prefixes:
- enroll_ref
train_data_path_and_name_and_type:
- - dump/raw/tr_min_8k/wav.scp
- speech_mix
- sound
- - dump/raw/tr_min_8k/spk1.scp
- speech_ref1
- sound
- - dump/raw/tr_min_8k/enroll_spk1.scp
- enroll_ref1
- text
valid_data_path_and_name_and_type:
- - dump/raw/cv_min_8k/wav.scp
- speech_mix
- sound
- - dump/raw/cv_min_8k/spk1.scp
- speech_ref1
- sound
- - dump/raw/cv_min_8k/enroll_spk1.scp
- enroll_ref1
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
exclude_weight_decay: false
exclude_weight_decay_conf: {}
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-08
weight_decay: 0
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.7
patience: 3
init: null
model_conf:
num_spk: 1
share_encoder: true
criterions:
- name: snr
conf:
eps: 1.0e-07
wrapper: fixed_order
wrapper_conf:
weight: 1.0
train_spk2enroll: null
enroll_segment: 16000
load_spk_embedding: false
load_all_speakers: false
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
short_noise_thres: 0.5
speech_volume_normalize: null
use_reverberant_ref: false
num_spk: 1
num_noise_type: 1
sample_rate: 8000
force_single_channel: false
channel_reordering: false
categories: []
encoder: conv
encoder_conf:
channel: 256
kernel_size: 16
stride: 8
extractor: td_speakerbeam
extractor_conf:
layer: 8
stack: 4
bottleneck_dim: 256
hidden_dim: 512
skip_dim: 256
kernel: 3
causal: false
norm_type: gLN
nonlinear: relu
i_adapt_layer: 7
adapt_layer_type: mul
adapt_enroll_dim: 256
use_spk_emb: false
spk_emb_dim: 256
decoder: conv
decoder_conf:
channel: 256
kernel_size: 16
stride: 8
preprocessor: tse
preprocessor_conf: {}
required:
- output_dir
version: '202301'
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{ESPnet-SE,
author = {Chenda Li and Jing Shi and Wangyou Zhang and Aswin Shanmugam Subramanian and Xuankai Chang and
Naoyuki Kamo and Moto Hira and Tomoki Hayashi and Christoph B{\"{o}}ddeker and Zhuo Chen and Shinji Watanabe},
title = {ESPnet-SE: End-To-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
booktitle = {{IEEE} Spoken Language Technology Workshop, {SLT} 2021, Shenzhen, China, January 19-22, 2021},
pages = {785--792},
publisher = {{IEEE}},
year = {2021},
url = {https://doi.org/10.1109/SLT48900.2021.9383615},
doi = {10.1109/SLT48900.2021.9383615},
timestamp = {Mon, 12 Apr 2021 17:08:59 +0200},
biburl = {https://dblp.org/rec/conf/slt/Li0ZSCKHHBC021.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
SKumari/Llama_train_sk | SKumari | 2023-07-09T01:55:54Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-09T01:55:47Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: float16
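A sketch of the same configuration expressed as a `transformers` `BitsAndBytesConfig` (values copied from the list above):
```python
import torch
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization and fp16 compute, per the list above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    llm_int8_threshold=6.0,
    llm_int8_has_fp16_weight=False,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.float16,
)
```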
### Framework versions
- PEFT 0.4.0.dev0
|
nolanaatama/ncrcnrmlrvcv1300pchjlbdxcyn | nolanaatama | 2023-07-09T01:14:31Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-09T01:08:54Z | ---
license: creativeml-openrail-m
---
|
karanv/videomae-base-finetuned-ucf101-subset | karanv | 2023-07-09T01:08:30Z | 67 | 0 | transformers | [
"transformers",
"pytorch",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | 2023-07-07T05:40:02Z | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5091
- Accuracy: 0.8581
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 148
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.2064 | 0.26 | 38 | 1.7430 | 0.5714 |
| 0.8959 | 1.26 | 76 | 0.8178 | 0.8 |
| 0.5001 | 2.26 | 114 | 0.4717 | 0.8143 |
| 0.3355 | 3.23 | 148 | 0.3959 | 0.8857 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
|
saintzeno/dqn-SpaceInvadersNoFrameskip-v4 | saintzeno | 2023-07-08T23:48:09Z | 1 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-25T23:41:41Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 643.50 +/- 182.65
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga saintzeno -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga saintzeno -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga saintzeno
```
## Hyperparameters
```python
OrderedDict([('batch_size', 48),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 50000),
('n_timesteps', 2000000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
skywalker7/LunarWalker | skywalker7 | 2023-07-08T23:40:42Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T23:40:22Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 257.93 +/- 17.44
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal loading sketch (the checkpoint filename inside the repository is an assumption and may differ):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the trained checkpoint from the Hub and load it into a PPO agent
checkpoint = load_from_hub(repo_id="skywalker7/LunarWalker", filename="ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)
```
|
ABDUULAHH/ABDULLAH-GPT | ABDUULAHH | 2023-07-08T23:23:25Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-08T23:23:25Z | ---
license: bigscience-openrail-m
---
|
renatostrianese/ppo-Huggy | renatostrianese | 2023-07-08T23:20:21Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-08T23:20:16Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: renatostrianese/ppo-Huggy
3. Select your *.nn / *.onnx file
4. Click on Watch the agent play 👀
|
NasimB/gpt2-concat-guten-rarity-all-3p5k-1p8k | NasimB | 2023-07-08T22:49:08Z | 5 | 0 | transformers | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:generator",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-08T20:51:13Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- generator
model-index:
- name: gpt2-concat-guten-rarity-all-3p5k-1p8k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-concat-guten-rarity-all-3p5k-1p8k
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1924
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 6
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 6.702 | 0.29 | 500 | 5.6455 |
| 5.3702 | 0.59 | 1000 | 5.2062 |
| 5.0235 | 0.88 | 1500 | 4.9548 |
| 4.7448 | 1.18 | 2000 | 4.8046 |
| 4.5901 | 1.47 | 2500 | 4.6826 |
| 4.4798 | 1.77 | 3000 | 4.5785 |
| 4.3425 | 2.06 | 3500 | 4.5017 |
| 4.1565 | 2.36 | 4000 | 4.4481 |
| 4.1361 | 2.65 | 4500 | 4.3913 |
| 4.0872 | 2.95 | 5000 | 4.3408 |
| 3.8648 | 3.24 | 5500 | 4.3344 |
| 3.8269 | 3.54 | 6000 | 4.3033 |
| 3.812 | 3.83 | 6500 | 4.2685 |
| 3.682 | 4.12 | 7000 | 4.2696 |
| 3.5391 | 4.42 | 7500 | 4.2633 |
| 3.534 | 4.71 | 8000 | 4.2464 |
| 3.5219 | 5.01 | 8500 | 4.2386 |
| 3.346 | 5.3 | 9000 | 4.2473 |
| 3.3421 | 5.6 | 9500 | 4.2453 |
| 3.3464 | 5.89 | 10000 | 4.2450 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.11.0+cu113
- Datasets 2.13.0
- Tokenizers 0.13.3
|
BigSalmon/InformalToFormalLincoln102Paraphrase | BigSalmon | 2023-07-08T22:40:54Z | 195 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-26T19:59:24Z | data: https://github.com/BigSalmon2/InformalToFormalDataset
Text Generation Informal Formal
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("BigSalmon/InformalToFormalLincoln102Paraphrase")
model = AutoModelForCausalLM.from_pretrained("BigSalmon/InformalToFormalLincoln102Paraphrase")
```
```
Demo:
https://huggingface.co/spaces/BigSalmon/FormalInformalConciseWordy
```
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
input_ids = tokenizer.encode(prompt, return_tensors='pt')
outputs = model.generate(input_ids=input_ids,
max_length=10 + len(prompt),
temperature=1.0,
top_k=50,
top_p=0.95,
do_sample=True,
num_return_sequences=5,
early_stopping=True)
for i in range(5):
print(tokenizer.decode(outputs[i]))
```
Most likely outputs (Disclaimer: I highly recommend using this over just generating):
```
prompt = """informal english: corn fields are all across illinois, visible once you leave chicago.\nTranslated into the Style of Abraham Lincoln:"""
text = tokenizer.encode(prompt)
myinput, past_key_values = torch.tensor([text]), None
myinput = myinput
myinput= myinput.to(device)
logits, past_key_values = model(myinput, past_key_values = past_key_values, return_dict=False)
logits = logits[0,-1]
probabilities = torch.nn.functional.softmax(logits, dim=-1)
best_logits, best_indices = logits.topk(250)
best_words = [tokenizer.decode([idx.item()]) for idx in best_indices]
text.append(best_indices[0].item())
best_probabilities = probabilities[best_indices].tolist()
words = []
print(best_words)
```
```
How To Make Prompt:
informal english: i am very ready to do that just that.
Translated into the Style of Abraham Lincoln: you can assure yourself of my readiness to work toward this end.
Translated into the Style of Abraham Lincoln: please be assured that i am most ready to undertake this laborious task.
***
informal english: space is huge and needs to be explored.
Translated into the Style of Abraham Lincoln: space awaits traversal, a new world whose boundaries are endless.
Translated into the Style of Abraham Lincoln: space is a ( limitless / boundless ) expanse, a vast virgin domain awaiting exploration.
***
informal english: corn fields are all across illinois, visible once you leave chicago.
Translated into the Style of Abraham Lincoln: corn fields ( permeate illinois / span the state of illinois / ( occupy / persist in ) all corners of illinois / line the horizon of illinois / envelop the landscape of illinois ), manifesting themselves visibly as one ventures beyond chicago.
informal english:
```
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
```
Essay Intro (Warriors vs. Rockets in Game 7):
text: eagerly anticipated by fans, game 7's are the highlight of the post-season.
text: ever-building in suspense, game 7's have the crowd captivated.
***
Essay Intro (South Korean TV Is Becoming Popular):
text: maturing into a bona fide paragon of programming, south korean television ( has much to offer / entertains without fail / never disappoints ).
text: increasingly held in critical esteem, south korean television continues to impress.
text: at the forefront of quality content, south korea is quickly achieving celebrity status.
***
Essay Intro (
```
```
Search: What is the definition of Checks and Balances?
https://en.wikipedia.org/wiki/Checks_and_balances
Checks and Balances is the idea of having a system where each and every action in government should be subject to one or more checks that would not allow one branch or the other to overly dominate.
https://www.harvard.edu/glossary/Checks_and_Balances
Checks and Balances is a system that allows each branch of government to limit the powers of the other branches in order to prevent abuse of power
https://www.law.cornell.edu/library/constitution/Checks_and_Balances
Checks and Balances is a system of separation through which branches of government can control the other, thus preventing excess power.
***
Search: What is the definition of Separation of Powers?
https://en.wikipedia.org/wiki/Separation_of_powers
The separation of powers is a principle in government, whereby governmental powers are separated into different branches, each with their own set of powers, that are prevent one branch from aggregating too much power.
https://www.yale.edu/tcf/Separation_of_Powers.html
Separation of Powers is the division of governmental functions between the executive, legislative and judicial branches, clearly demarcating each branch's authority, in the interest of ensuring that individual liberty or security is not undermined.
***
Search: What is the definition of Connection of Powers?
https://en.wikipedia.org/wiki/Connection_of_powers
Connection of Powers is a feature of some parliamentary forms of government where different branches of government are intermingled, typically the executive and legislative branches.
https://simple.wikipedia.org/wiki/Connection_of_powers
The term Connection of Powers describes a system of government in which there is overlap between different parts of the government.
***
Search: What is the definition of
```
```
Search: What are phrase synonyms for "second-guess"?
https://www.powerthesaurus.org/second-guess/synonyms
Shortest to Longest:
- feel dubious about
- raise an eyebrow at
- wrinkle their noses at
- cast a jaundiced eye at
- teeter on the fence about
***
Search: What are phrase synonyms for "mean to newbies"?
https://www.powerthesaurus.org/mean_to_newbies/synonyms
Shortest to Longest:
- readiness to balk at rookies
- absence of tolerance for novices
- hostile attitude toward newcomers
***
Search: What are phrase synonyms for "make use of"?
https://www.powerthesaurus.org/make_use_of/synonyms
Shortest to Longest:
- call upon
- glean value from
- reap benefits from
- derive utility from
- seize on the merits of
- draw on the strength of
- tap into the potential of
***
Search: What are phrase synonyms for "hurting itself"?
https://www.powerthesaurus.org/hurting_itself/synonyms
Shortest to Longest:
- erring
- slighting itself
- forfeiting its integrity
- doing itself a disservice
- evincing a lack of backbone
***
Search: What are phrase synonyms for "
```
```
- nebraska
- unicamerical legislature
- different from federal house and senate
text: featuring a unicameral legislature, nebraska's political system stands in stark contrast to the federal model, comprised of a house and senate.
***
- penny has practically no value
- should be taken out of circulation
- just as other coins have been in us history
- lost use
- value not enough
- to make environmental consequences worthy
text: all but valueless, the penny should be retired. as with other coins in american history, it has become defunct. too minute to warrant the environmental consequences of its production, it has outlived its usefulness.
***
-
```
```
original: sports teams are profitable for owners. [MASK], their valuations experience a dramatic uptick.
infill: sports teams are profitable for owners. ( accumulating vast sums / stockpiling treasure / realizing benefits / cashing in / registering robust financials / scoring on balance sheets ), their valuations experience a dramatic uptick.
***
original:
```
```
wordy: classical music is becoming less popular more and more.
Translate into Concise Text: interest in classic music is fading.
***
wordy:
```
```
sweet: savvy voters ousted him.
longer: voters who were informed delivered his defeat.
***
sweet:
```
```
1: commercial space company spacex plans to launch a whopping 52 flights in 2022.
2: spacex, a commercial space company, intends to undertake a total of 52 flights in 2022.
3: in 2022, commercial space company spacex has its sights set on undertaking 52 flights.
4: 52 flights are in the pipeline for 2022, according to spacex, a commercial space company.
5: a commercial space company, spacex aims to conduct 52 flights in 2022.
***
1:
```
Keywords to sentences or sentence.
```
ngos are characterized by:
□ voluntary citizens' group that is organized on a local, national or international level
□ encourage political participation
□ often serve humanitarian functions
□ work for social, economic, or environmental change
***
what are the drawbacks of living near an airbnb?
□ noise
□ parking
□ traffic
□ security
□ strangers
***
```
```
original: musicals generally use spoken dialogue as well as songs to convey the story. operas are usually fully sung.
adapted: musicals generally use spoken dialogue as well as songs to convey the story. ( in a stark departure / on the other hand / in contrast / by comparison / at odds with this practice / far from being alike / in defiance of this standard / running counter to this convention ), operas are usually fully sung.
***
original: akoya and tahitian are types of pearls. akoya pearls are mostly white, and tahitian pearls are naturally dark.
adapted: akoya and tahitian are types of pearls. ( a far cry from being indistinguishable / easily distinguished / on closer inspection / setting them apart / not to be mistaken for one another / hardly an instance of mere synonymy / differentiating the two ), akoya pearls are mostly white, and tahitian pearls are naturally dark.
***
original:
```
```
original: had trouble deciding.
translated into journalism speak: wrestled with the question, agonized over the matter, furrowed their brows in contemplation.
***
original:
```
```
input: not loyal
1800s english: ( two-faced / inimical / perfidious / duplicitous / mendacious / double-dealing / shifty ).
***
input:
```
```
first: ( was complicit in / was involved in ).
antonym: ( was blameless / was not an accomplice to / had no hand in / was uninvolved in ).
***
first: ( have no qualms about / see no issue with ).
antonym: ( are deeply troubled by / harbor grave reservations about / have a visceral aversion to / take ( umbrage at / exception to ) / are wary of ).
***
first: ( do not see eye to eye / disagree often ).
antonym: ( are in sync / are united / have excellent rapport / are like-minded / are in step / are of one mind / are in lockstep / operate in perfect harmony / march in lockstep ).
***
first:
```
```
stiff with competition, law school {A} is the launching pad for countless careers, {B} is a crowded field, {C} ranks among the most sought-after professional degrees, {D} is a professional proving ground.
***
languishing in viewership, saturday night live {A} is due for a creative renaissance, {B} is no longer a ratings juggernaut, {C} has been eclipsed by its imitators, {C} can still find its mojo.
***
dubbed the "manhattan of the south," atlanta {A} is a bustling metropolis, {B} is known for its vibrant downtown, {C} is a city of rich history, {D} is the pride of georgia.
***
embattled by scandal, harvard {A} is feeling the heat, {B} cannot escape the media glare, {C} is facing its most intense scrutiny yet, {D} is in the spotlight for all the wrong reasons.
```
Infill / Infilling / Masking / Phrase Masking (Works pretty decently actually, especially when you use logprobs code from above):
```
his contention [blank] by the evidence [sep] was refuted [answer]
***
few sights are as [blank] new york city as the colorful, flashing signage of its bodegas [sep] synonymous with [answer]
***
when rick won the lottery, all of his distant relatives [blank] his winnings [sep] clamored for [answer]
***
the library’s quiet atmosphere encourages visitors to [blank] in their work [sep] immerse themselves [answer]
***
the joy of sport is that no two games are alike. for every exhilarating experience, however, there is an interminable one. the national pastime, unfortunately, has a penchant for the latter. what begins as a summer evening at the ballpark can quickly devolve into a game of tedium. the primary culprit is the [blank] of play. from batters readjusting their gloves to fielders spitting on their mitts, the action is [blank] unnecessary interruptions. the sport's future is [blank] if these tendencies are not addressed [sep] plodding pace [answer] riddled with [answer] bleak [answer]
***
microsoft word's [blank] pricing [blank] competition [sep] unconscionable [answer] invites [answer]
***
```
```
original: microsoft word's [MASK] pricing invites competition.
Translated into the Style of Abraham Lincoln: microsoft word's unconscionable pricing invites competition.
***
original: the library’s quiet atmosphere encourages visitors to [blank] in their work.
Translated into the Style of Abraham Lincoln: the library’s quiet atmosphere encourages visitors to immerse themselves in their work.
```
Backwards
```
Essay Intro (National Parks):
text: tourists are at ease in the national parks, ( swept up in the beauty of their natural splendor ).
***
Essay Intro (D.C. Statehood):
washington, d.c. is a city of outsize significance, ( ground zero for the nation's political life / center stage for the nation's political machinations ).
```
```
topic: the Golden State Warriors.
characterization 1: the reigning kings of the NBA.
characterization 2: possessed of a remarkable cohesion.
characterization 3: helmed by superstar Stephen Curry.
characterization 4: perched atop the league’s hierarchy.
characterization 5: boasting a litany of hall-of-famers.
***
topic: emojis.
characterization 1: shorthand for a digital generation.
characterization 2: more versatile than words.
characterization 3: the latest frontier in language.
characterization 4: a form of self-expression.
characterization 5: quintessentially millennial.
characterization 6: reflective of a tech-centric world.
***
topic:
```
```
regular: illinois went against the census' population-loss prediction by getting more residents.
VBG: defying the census' prediction of population loss, illinois experienced growth.
***
regular: microsoft word’s high pricing increases the likelihood of competition.
VBG: extortionately priced, microsoft word is inviting competition.
***
regular:
```
```
source: badminton should be more popular in the US.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) games played with racquets are popular, (2) just look at tennis and ping pong, (3) but badminton underappreciated, (4) fun, fast-paced, competitive, (5) needs to be marketed more
text: the sporting arena is dominated by games that are played with racquets. tennis and ping pong, in particular, are immensely popular. somewhat curiously, however, badminton is absent from this pantheon. exciting, fast-paced, and competitive, it is an underappreciated pastime. all that it lacks is more effective marketing.
***
source: movies in theaters should be free.
QUERY: Based on the given topic, can you develop a story outline?
target: (1) movies provide vital life lessons, (2) many venues charge admission, (3) those without much money
text: the lessons that movies impart are far from trivial. the vast catalogue of cinematic classics is replete with inspiring sagas of friendship, bravery, and tenacity. it is regrettable, then, that admission to theaters is not free. in their current form, the doors of this most vital of institutions are closed to those who lack the means to pay.
***
source:
```
```
in the private sector, { transparency } is vital to the business’s credibility. the { disclosure of information } can be the difference between success and failure.
***
the labor market is changing, with { remote work } now the norm. this { flexible employment } allows the individual to design their own schedule.
***
the { cubicle } is the locus of countless grievances. many complain that the { enclosed workspace } restricts their freedom of movement.
***
```
```
it would be natural to assume that americans, as a people whose ancestors { immigrated to this country }, would be sympathetic to those seeking to do likewise.
question: what does “do likewise” mean in the above context?
(a) make the same journey
(b) share in the promise of the american dream
(c) start anew in the land of opportunity
(d) make landfall on the united states
***
in the private sector, { transparency } is vital to the business’s credibility. this orientation can be the difference between success and failure.
question: what does “this orientation” mean in the above context?
(a) visible business practices
(b) candor with the public
(c) open, honest communication
(d) culture of accountability
```
```
example: suppose you are a teacher. further suppose you want to tell an accurate telling of history. then suppose a parent takes offense. they do so in the name of name of their kid. this happens a lot.
text: educators' responsibility to remain true to the historical record often clashes with the parent's desire to shelter their child from uncomfortable realities.
***
example: suppose you are a student at college. now suppose you have to buy textbooks. that is going to be worth hundreds of dollars. given how much you already spend on tuition, that is going to hard cost to bear.
text: the exorbitant cost of textbooks, which often reaches hundreds of dollars, imposes a sizable financial burden on the already-strapped college student.
```
```
clarify: international ( {working together} / cooperation ) is called for when ( {issue go beyond lots of borders} / an issue transcends borders / a given matter has transnational implications ).
```
```
description: when someone thinks that their view is the only right one.
synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous.
***
description: when you put something off.
synonyms: shelve, defer, table, postpone.
```
```
organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea.
rewrite phrases: meritocratic, viability, vision
rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability.
```
```
essence: when someone's views are keeping within reasonable.
refine: the senator's voting record is ( moderate / centrist / pragmatic / balanced / fair-minded / even-handed ).
***
essence: when things are worked through in a petty way.
refine: the propensity of the u.s. congress to settle every dispute by way of ( mudslinging / bickering / demagoguery / name-calling / finger-pointing / vilification ) is appalling.
```
```
description: when someone thinks that their view is the only right one.
synonyms: intolerant, opinionated, narrow-minded, insular, self-righteous.
***
description: when you put something off.
synonyms: shelve, defer, table, postpone.
```
```
organic sentence: crowdfunding is about winner of best ideas and it can test an entrepreneur’s idea.
rewrite phrases: meritocratic, viability, vision
rewritten with phrases: the meritocratic nature of crowdfunding empowers entrepreneurs to test their vision's viability.
```
```
music before bedtime [makes for being able to relax] -> is a recipe for relaxation.
```
```
[people wanting entertainment love traveling new york city] -> travelers flock to new york city in droves, drawn to its iconic entertainment scene. [cannot blame them] -> one cannot fault them [broadway so fun] -> when it is home to such thrilling fare as Broadway.
```
```
in their ( ‖ when you are rushing because you want to get there on time ‖ / haste to arrive punctually / mad dash to be timely ), morning commuters are too rushed to whip up their own meal.
***
politicians prefer to author vague plans rather than ( ‖ when you can make a plan without many unknowns ‖ / actionable policies / concrete solutions ).
```
```
Q: What is whistleblower protection?
A: Whistleblower protection is a form of legal immunity granted to employees who expose the unethical practices of their employer.
Q: Why are whistleblower protections important?
A: Absent whistleblower protections, employees would be deterred from exposing their employer’s wrongdoing for fear of retribution.
Q: Why would an employer engage in retribution?
A: An employer who has acted unethically stands to suffer severe financial and reputational damage were their transgressions to become public. To safeguard themselves from these consequences, they might seek to dissuade employees from exposing their wrongdoing.
```
```
original: the meritocratic nature of crowdfunding [MASK] into their vision's viability.
infill: the meritocratic nature of crowdfunding [gives investors idea of how successful] -> ( offers entrepreneurs a window ) into their vision's viability.
```
```
Leadership | Lecture 17: Worker Morale
What Workers Look for in Companies:
• Benefits
o Tuition reimbursement
o Paid parental leave
o 401K matching
o Profit sharing
o Pension plans
o Free meals
• Social responsibility
o Environmental stewardship
o Charitable contributions
o Diversity
• Work-life balance
o Telecommuting
o Paid holidays and vacation
o Casual dress
• Growth opportunities
• Job security
• Competitive compensation
• Recognition
o Open-door policies
o Whistleblower protection
o Employee-of-the-month awards
o Positive performance reviews
o Bonuses
```
```
description: business
keywords: for-profit, fiduciary duty, monopolistic, bottom line, return on investment, short-term thinking, capital-intensive, self-interested, risk-taking, fiduciary duty, merger, speculation, profiteering, oversight, capitalism, diversification
```
```
3. In this task, you are given a company name and you need to find its industry.
McDonalds -- Restaurant
Facebook -- Social Network
IKEA -- Furniture
American Express -- Credit Services
Nokia -- Telecom
Nintendo -- Entertainment
4. In this task, you are given a Month and you need to convert it to its corresponding season
April -- Spring
December -- Winter
July -- Summer
October -- Fall
February -- Winter
5. In this task, you are given a sentence with a missing word and you need to predict the correct word.
Managers should set an _____ for their employees. -- example
Some people spend more than four _____ in the gym. -- hours
The police were on the _____ of arresting the suspect. -- verge
They were looking for _____ on how to solve the problem. -- guidance
What is the _____ of the coffee? -- price
6. In this task, you are given a paragraph and you need to reorder it to make it logical.
It was first proposed in 1987. The total length of the bridge is 1,828 meters. The idea of a bridge connects Hong Kong to Macau. -- The idea of bridge connecting Hong Kong and Macau was first proposed in 1987. The total length of the bridge is 1,828 meters.
It is a movie about a brave and noble policeman. The film was produced by Americans. They were Kevin Lima and Chris Buck. They are directors. The movie is called Tarzan. -- Produced by Americans Kevin Lima and Chris Buck, Tarzan is a movie about a brave and noble policeman.
It was first discovered in the mountains of India. The active ingredients in this plant can stimulate hair growth. The plant is called "Hair Plus." -- First discovered in the mountains of India, Hair Plus is a plant whose active ingredients can stimulate hair growth.
```
```
trivia: What is the population of South Korea?
response: 51 million.
***
trivia: What is the minimum voting age in the US?
response: 18.
***
trivia: What are the first ten amendments of the US constitution called?
response: Bill of Rights.
```
```
ideas: in modern-day america, it is customary for the commander-in-chief to conduct regular press conferences
related keywords: transparency, check and balance, sacrosanct, public accountability, adversarial, unscripted, direct access, open government, watchdog, healthy democracy, institutional integrity, right to know, direct line of communication, behind closed doors, updates, track progress, instill confidence, reassure, humanize, leadership style, day-to-day, forthcoming, demystify, ask hard questions
***
ideas: i know this one guy who retired so young, attesting to how careful they were with money.
related keywords: money management, resourceful, penny-pinching, live below their means, frugal, financial discipline, financial independence, conservative, long-term vision, discretionary spending, deferred gratification, preparedness, self-control, cushion
```
```
less specific: actors and musicians should ( support democracy ).
clarifies: actors and musicians should ( wield their celebrity to amplify pro-democracy messaging / marshal their considerable influence in the service of the democratic cause ).
***
less specific: amid a contemporary culture that thrives on profligacy, the discipline necessary to retire early is a vanishing quality. rather than yielding to the lure of indulgence, the aspiring retiree must ( be careful ).
clarifies: amid a contemporary culture that thrives on profligacy, the discipline necessary to retire early is a vanishing quality. rather than yielding to the lure of indulgence, the aspiring retiree must ( master their desires / exercise self-restraint / embrace frugality / restrain their appetite for splendor ).
```
```
dull: clean
emotional heft: spotless, immaculate, pristine
***
dull: hot
emotional heft: scorching, searing, blistering
***
dull: happy
emotional heft: euphoric
```
```
text: {guide: vividly describe the premise of the show "seinfield"} -> set in the heart of bustling new york city, the sitcom "seinfield" follows the everyday {restrict: term that implies they get into trouble but in a comical way} -> misadventures of four neurotic friends. on any given episode, one can find them quarreling over their favorite diner's latest menu change, haggling over the division of household expenses, or contriving a scheme to spy on the apartment's newest resident. mundane as their exploits may be, they never fail to elicit a hearty laugh. {guide: mention how larry david is responsible} -> behind the show's witty, incisive dialogue lies the sharp pen of larry david, who co-created the show with jerry seinfeld. {guide: mention how larry david came up with the most recognizable lines} -> it is his genius that conjured such instantly {restrict: term that imply everybody knows them} -> recognizable quips as "no soup for you!" and "not that there's anything wrong with that!". {guide: mention how humanity should revel in having such good comedians these days} -> as this list of laugh-out-loud lines attests, the world is fortunate to ( count such a sharp comedic mind among its ranks / have such a talented humorist in its midst / have such comedic talent gracing its airwaves ).
```
```
system: i'm not sure i agree with your assessment of the situation.
user: replace "assessment" with a more casual term.
chatbot: ( take / read / view ).
system: i've been feeling really busy lately, trying to balance work and family responsibilities.
user: replace "busy" with a more intense word that conveys it is too much.
chatbot: overwhelmed.
system: the candidate's stance on immigration is quite nebulous, and it's difficult to discern exactly what they would do if elected.
user: replace "nebulous" with a word that conveys a lack of detail.
chatbot: ( vague / ambiguous ).
```
```
user: exorbitant in price, scholarly journals unconscionably define the breadth of one's knowledge awareness by the contents of their wallet. [replace “knowledge awareness” with a more natural expression]
chatbot: intellectual horizons.
user: can you do another alternative to “intellectual horizons” that has more relation to “scholarly journals”?
chatbot: academic enlightenment.
```
```
key: calculate.
syn: estimate, consider, weigh, number, count, apportion, proportion, investigate, reckon, rate, compute.
ant: guess, conjecture, hit, chance, risk, stake, miscalculate.
```
```
description: more forceful version of curious that is less forceful than nosy
answer: inquisitive
description: more forceful version of hopeful that is less forceful than overconfident
answer: optimistic
```
```
key: inquisitive
positive: curious, interested
negative: nosy, prying
***
key: witty
positive: clever, humorous
negative: sarcastic, caustic
***
key: influential
positive: impactful, powerful
negative: overbearing, domineering
```
```
defective: the blogger's { use of language imprecise } confused an already complicated issue.
precise: the blogger's ( vague wording ) confused an already complicated issue.
defective: the senator's speech was high on { words sounding dignified } but low on concrete proposals.
precise: the senator's speech was high on ( lofty rhetoric ) but low on concrete proposals.
```
```
example: the new car uses gas.
boring: uses
stronger: guzzles
example: he hates people that are rude.
boring: hates
stronger: loathes, abhors, despises, scorns, detests
``` |
hbenitez/food_classifier | hbenitez | 2023-07-08T22:37:36Z | 63 | 0 | transformers | [
"transformers",
"tf",
"vit",
"image-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | 2023-07-06T21:28:12Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hbenitez/food_classifier
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hbenitez/food_classifier
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.3735
- Validation Loss: 2.5622
- Train Accuracy: 0.0769
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 3e-05, 'decay_steps': 260, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
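For reference, the reported optimizer configuration corresponds roughly to the following Keras setup (a sketch only; the original training script is not included in this card):

```python
import tensorflow as tf
from transformers import AdamWeightDecay

# Reconstruction of the configuration listed above; not the original code.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=3e-05,
    decay_steps=260,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = AdamWeightDecay(
    learning_rate=lr_schedule,
    weight_decay_rate=0.01,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
)
```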
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 2.5417 | 2.5922 | 0.0 | 0 |
| 2.5103 | 2.5856 | 0.0 | 1 |
| 2.4593 | 2.5738 | 0.0 | 2 |
| 2.4104 | 2.5671 | 0.0 | 3 |
| 2.3735 | 2.5622 | 0.0769 | 4 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.13.0-rc2
- Datasets 2.13.1
- Tokenizers 0.13.3
|
miki-kawa/roberta-large-lora-token-classification | miki-kawa | 2023-07-08T22:36:04Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-08T22:35:59Z | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.4.0.dev0
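A minimal loading sketch, assuming (from the repository name) that this is a LoRA adapter for a `roberta-large` token-classification model; the label set and task details are not documented, so adjustments may be needed:

```python
from peft import PeftConfig, PeftModel
from transformers import AutoModelForTokenClassification, AutoTokenizer

adapter_id = "miki-kawa/roberta-large-lora-token-classification"

# The base checkpoint is read from the adapter config rather than hard-coded.
config = PeftConfig.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
base_model = AutoModelForTokenClassification.from_pretrained(config.base_model_name_or_path)
model = PeftModel.from_pretrained(base_model, adapter_id)
```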
|
renatostrianese/ppo-LunarLander-v2 | renatostrianese | 2023-07-08T22:11:51Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T22:11:33Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 258.79 +/- 20.64
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
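In the meantime, a minimal loading sketch (the checkpoint filename inside the repository is an assumption based on the course convention; check the repo files and adjust):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Filename is assumed, not verified against this repository's contents.
checkpoint = load_from_hub(
    repo_id="renatostrianese/ppo-LunarLander-v2",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)
```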
|
jncraton/codet5p-770m-py-ct2-int8 | jncraton | 2023-07-08T21:44:13Z | 600 | 0 | transformers | [
"transformers",
"arxiv:2305.07922",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | null | 2023-07-08T20:47:01Z | ---
license: bsd-3-clause
---
# CodeT5+ 770M (further tuned on Python)
## Model description
[CodeT5+](https://github.com/salesforce/CodeT5/tree/main/CodeT5+) is a new family of open code large language models with an encoder-decoder architecture that can flexibly operate in different modes (i.e. _encoder-only_, _decoder-only_, and _encoder-decoder_) to support a wide range of code understanding and generation tasks.
It is introduced in the paper:
[CodeT5+: Open Code Large Language Models for Code Understanding and Generation](https://arxiv.org/pdf/2305.07922.pdf)
by [Yue Wang](https://yuewang-cuhk.github.io/)\*, [Hung Le](https://sites.google.com/view/henryle2018/home?pli=1)\*, [Akhilesh Deepak Gotmare](https://akhileshgotmare.github.io/), [Nghi D.Q. Bui](https://bdqnghi.github.io/), [Junnan Li](https://sites.google.com/site/junnanlics), [Steven C.H. Hoi](https://sites.google.com/view/stevenhoi/home) (* indicates equal contribution).
Compared to the original CodeT5 family (base: `220M`, large: `770M`), CodeT5+ is pretrained with a diverse set of pretraining tasks including _span denoising_, _causal language modeling_, _contrastive learning_, and _text-code matching_ to learn rich representations from both unimodal code data and bimodal code-text data.
Additionally, it employs a simple yet effective _compute-efficient pretraining_ method to initialize the model components with frozen off-the-shelf LLMs such as [CodeGen](https://github.com/salesforce/CodeGen) to efficiently scale up the model (i.e. `2B`, `6B`, `16B`), and adopts a "shallow encoder and deep decoder" architecture.
Furthermore, it is instruction-tuned to align with natural language instructions (see our InstructCodeT5+ 16B) following [Code Alpaca](https://github.com/sahil280114/codealpaca).
## How to use
This model can be easily loaded using the `T5ForConditionalGeneration` functionality and employs the same tokenizer as original [CodeT5](https://github.com/salesforce/CodeT5).
```python
from transformers import T5ForConditionalGeneration, AutoTokenizer
checkpoint = "Salesforce/codet5p-770m-py"
device = "cuda" # for GPU usage or "cpu" for CPU usage
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = T5ForConditionalGeneration.from_pretrained(checkpoint).to(device)
inputs = tokenizer.encode("def print_hello_world():", return_tensors="pt").to(device)
outputs = model.generate(inputs, max_length=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# ==> print('Hello World!')
```
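Note that the snippet above targets the original `Salesforce/codet5p-770m-py` checkpoint. Since this repository appears to be a CTranslate2 int8 conversion (as its name suggests), a loading sketch along the following lines may be more appropriate; the file layout and tokenizer compatibility are assumptions, not guarantees:

```python
import ctranslate2
import transformers
from huggingface_hub import snapshot_download

# Assumes this repo contains a CTranslate2 model directory and that the
# original checkpoint's tokenizer is compatible with it.
model_dir = snapshot_download("jncraton/codet5p-770m-py-ct2-int8")
translator = ctranslate2.Translator(model_dir, compute_type="int8")
tokenizer = transformers.AutoTokenizer.from_pretrained("Salesforce/codet5p-770m-py")

tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode("def print_hello_world():"))
results = translator.translate_batch([tokens], max_decoding_length=10)
output_ids = tokenizer.convert_tokens_to_ids(results[0].hypotheses[0])
print(tokenizer.decode(output_ids, skip_special_tokens=True))
```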
## Pretraining data
This checkpoint is trained on the stricter permissive subset of the deduplicated version of the [github-code dataset](https://huggingface.co/datasets/codeparrot/github-code).
The data is preprocessed by retaining only permissively licensed code ("mit", "apache-2", "bsd-3-clause", "bsd-2-clause", "cc0-1.0", "unlicense", "isc").
Supported languages (9 in total) are as follows:
`c`, `c++`, `c-sharp`, `go`, `java`, `javascript`, `php`, `python`, `ruby`.
## Training procedure
This checkpoint is first trained on the multilingual unimodal code data at the first-stage pretraining, which includes a diverse set of pretraining tasks including _span denoising_ and two variants of _causal language modeling_.
After that, it is further trained on the Python subset with the causal language modeling objective for another epoch to better adapt for Python code generation. Please refer to the paper for more details.
## Evaluation results
CodeT5+ models have been comprehensively evaluated on a wide range of code understanding and generation tasks in various settings: _zero-shot_, _finetuning_, and _instruction-tuning_.
Specifically, CodeT5+ yields substantial performance gains on many downstream tasks compared to their SoTA baselines, e.g.,
8 text-to-code retrieval tasks (+3.2 avg. MRR), 2 line-level code completion tasks (+2.1 avg. Exact Match), and 2 retrieval-augmented code generation tasks (+5.8 avg. BLEU-4).
In 2 math programming tasks on MathQA-Python and GSM8K-Python, CodeT5+ models of below billion-parameter sizes significantly outperform many LLMs of up to 137B parameters.
Particularly, in the zero-shot text-to-code generation task on HumanEval benchmark, InstructCodeT5+ 16B sets new SoTA results of 35.0% pass@1 and 54.5% pass@10 against other open code LLMs, even surpassing the closed-source OpenAI code-cushman-001 model.
Please refer to the [paper](https://arxiv.org/pdf/2305.07922.pdf) for more details.
Specifically for this checkpoint, it achieves 15.5% pass@1 on HumanEval in the zero-shot setting, which is comparable to much larger LLMs such as Incoder 6B’s 15.2%, GPT-NeoX 20B’s 15.4%, and PaLM 62B’s 15.9%.
## BibTeX entry and citation info
```bibtex
@article{wang2023codet5plus,
title={CodeT5+: Open Code Large Language Models for Code Understanding and Generation},
author={Wang, Yue and Le, Hung and Gotmare, Akhilesh Deepak and Bui, Nghi D.Q. and Li, Junnan and Hoi, Steven C. H.},
journal={arXiv preprint},
year={2023}
}
``` |
rsilg/Reinforce-CartPole-v1 | rsilg | 2023-07-08T21:28:20Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T21:28:11Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-CartPole-v1
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
voyzan/unit1-lunar_lander_v2-A01 | voyzan | 2023-07-08T21:03:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T21:02:53Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 236.36 +/- 17.72
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
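In the meantime, a minimal loading-and-evaluation sketch (the checkpoint filename is an assumption; check the repository files and adjust):

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Filename is assumed, not verified against this repository's contents.
checkpoint = load_from_hub(
    repo_id="voyzan/unit1-lunar_lander_v2-A01",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```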
|
skrl/IsaacGymEnvs-Humanoid-PPO | skrl | 2023-07-08T20:59:46Z | 0 | 0 | skrl | [
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-24T20:44:07Z | ---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 6524.74 +/- 570.54
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-Humanoid
type: IsaacGymEnvs-Humanoid
---
<!-- ---
torch: 6524.74 +/- 570.54
jax: 6265.95 +/- 280.11
numpy: 5727.54 +/- 406.96
--- -->
# IsaacGymEnvs-Humanoid-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** Humanoid
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Humanoid-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Humanoid-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Undefined parameters keep their values by default.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
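# Imports assumed by this snippet (skrl's PyTorch API); `env` and `device`
# are expected to come from the user's own environment setup.
from skrl.agents.torch.ppo import PPO_DEFAULT_CONFIG
from skrl.resources.schedulers.torch import KLAdaptiveRL
from skrl.resources.preprocessors.torch import RunningStandardScaler
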
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 32 # memory_size
cfg["learning_epochs"] = 5
cfg["mini_batches"] = 4 # 32 * 4096 / 32768
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 5e-4
cfg["learning_rate_scheduler"] = KLAdaptiveRL
cfg["learning_rate_scheduler_kwargs"] = {"kl_threshold": 0.008}
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 1.0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 2.0
cfg["kl_threshold"] = 0
cfg["rewards_shaper"] = lambda rewards, timestep, timesteps: rewards * 0.01
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
snousias/bert-base-greek-uncased-v2-finetuned-polylex | snousias | 2023-07-08T20:51:04Z | 4 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-08T20:01:40Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-greek-uncased-v2-finetuned-polylex
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-greek-uncased-v2-finetuned-polylex
This model is a fine-tuned version of [nlpaueb/bert-base-greek-uncased-v1](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5614
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1000
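For reference, these settings map roughly onto the following `TrainingArguments` (a sketch; `output_dir` and any unlisted arguments are assumptions, and the original training script is not part of this card):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="bert-base-greek-uncased-v2-finetuned-polylex",  # assumed
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1000,
)
```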
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 4.7613 | 1.0 | 12 | 3.7659 |
| 3.8949 | 2.0 | 24 | 3.2678 |
| 3.223 | 3.0 | 36 | 2.5675 |
| 2.9941 | 4.0 | 48 | 2.6363 |
| 3.1597 | 5.0 | 60 | 2.8368 |
| 2.8535 | 6.0 | 72 | 2.8220 |
| 2.9492 | 7.0 | 84 | 3.0838 |
| 2.6935 | 8.0 | 96 | 2.6604 |
| 2.8037 | 9.0 | 108 | 2.4602 |
| 3.101 | 10.0 | 120 | 2.6140 |
| 2.4546 | 11.0 | 132 | 2.6074 |
| 2.6299 | 12.0 | 144 | 2.5843 |
| 2.4703 | 13.0 | 156 | 2.6383 |
| 2.4184 | 14.0 | 168 | 2.3316 |
| 2.6144 | 15.0 | 180 | 2.0832 |
| 2.6209 | 16.0 | 192 | 2.3583 |
| 2.451 | 17.0 | 204 | 2.9010 |
| 2.4358 | 18.0 | 216 | 3.0525 |
| 2.4198 | 19.0 | 228 | 2.6463 |
| 2.3365 | 20.0 | 240 | 2.7683 |
| 2.2167 | 21.0 | 252 | 2.9289 |
| 2.4412 | 22.0 | 264 | 2.0613 |
| 2.3041 | 23.0 | 276 | 2.6865 |
| 2.381 | 24.0 | 288 | 2.4213 |
| 2.3244 | 25.0 | 300 | 2.3309 |
| 2.2025 | 26.0 | 312 | 3.8109 |
| 2.3091 | 27.0 | 324 | 3.1869 |
| 2.2988 | 28.0 | 336 | 1.9325 |
| 2.2883 | 29.0 | 348 | 2.0473 |
| 2.2323 | 30.0 | 360 | 2.6196 |
| 2.1218 | 31.0 | 372 | 2.3249 |
| 2.138 | 32.0 | 384 | 2.4549 |
| 2.0153 | 33.0 | 396 | 2.0830 |
| 1.8986 | 34.0 | 408 | 2.3666 |
| 2.0264 | 35.0 | 420 | 2.3655 |
| 2.0425 | 36.0 | 432 | 2.6095 |
| 2.0762 | 37.0 | 444 | 2.4949 |
| 2.0342 | 38.0 | 456 | 1.5367 |
| 1.8288 | 39.0 | 468 | 2.6941 |
| 1.9419 | 40.0 | 480 | 2.5493 |
| 2.0241 | 41.0 | 492 | 2.6684 |
| 1.9002 | 42.0 | 504 | 2.3222 |
| 1.9645 | 43.0 | 516 | 2.8538 |
| 1.6755 | 44.0 | 528 | 1.7693 |
| 1.9111 | 45.0 | 540 | 2.3962 |
| 2.0126 | 46.0 | 552 | 2.2722 |
| 2.032 | 47.0 | 564 | 2.2347 |
| 2.0232 | 48.0 | 576 | 1.7626 |
| 1.8135 | 49.0 | 588 | 2.5355 |
| 1.6517 | 50.0 | 600 | 2.9392 |
| 1.6788 | 51.0 | 612 | 1.9630 |
| 1.6126 | 52.0 | 624 | 2.1936 |
| 1.8367 | 53.0 | 636 | 3.4687 |
| 1.8566 | 54.0 | 648 | 2.0458 |
| 1.6203 | 55.0 | 660 | 2.1171 |
| 1.6941 | 56.0 | 672 | 1.9957 |
| 1.5142 | 57.0 | 684 | 2.2677 |
| 1.7009 | 58.0 | 696 | 2.8793 |
| 1.6105 | 59.0 | 708 | 2.1910 |
| 1.6282 | 60.0 | 720 | 1.9620 |
| 1.7587 | 61.0 | 732 | 3.4591 |
| 1.6177 | 62.0 | 744 | 2.0555 |
| 1.5287 | 63.0 | 756 | 2.9750 |
| 1.6862 | 64.0 | 768 | 2.2498 |
| 1.5724 | 65.0 | 780 | 2.5222 |
| 1.705 | 66.0 | 792 | 2.4491 |
| 1.6787 | 67.0 | 804 | 2.4474 |
| 1.665 | 68.0 | 816 | 2.3176 |
| 1.3825 | 69.0 | 828 | 2.5131 |
| 1.4641 | 70.0 | 840 | 2.0134 |
| 1.3444 | 71.0 | 852 | 2.7905 |
| 1.6672 | 72.0 | 864 | 3.0861 |
| 1.5524 | 73.0 | 876 | 2.3998 |
| 1.4178 | 74.0 | 888 | 2.8779 |
| 1.4374 | 75.0 | 900 | 2.3486 |
| 1.2693 | 76.0 | 912 | 2.6789 |
| 1.5111 | 77.0 | 924 | 2.4917 |
| 1.3847 | 78.0 | 936 | 2.0904 |
| 1.3115 | 79.0 | 948 | 2.7551 |
| 1.5094 | 80.0 | 960 | 2.4040 |
| 1.3265 | 81.0 | 972 | 2.6506 |
| 1.226 | 82.0 | 984 | 3.0660 |
| 1.3867 | 83.0 | 996 | 1.8890 |
| 1.2752 | 84.0 | 1008 | 2.9983 |
| 1.3847 | 85.0 | 1020 | 2.7811 |
| 1.3903 | 86.0 | 1032 | 2.9952 |
| 1.3858 | 87.0 | 1044 | 2.1377 |
| 1.2792 | 88.0 | 1056 | 2.9294 |
| 1.3319 | 89.0 | 1068 | 2.5720 |
| 1.1521 | 90.0 | 1080 | 2.4535 |
| 1.2619 | 91.0 | 1092 | 2.1846 |
| 1.2885 | 92.0 | 1104 | 2.0970 |
| 1.1852 | 93.0 | 1116 | 2.2783 |
| 1.3225 | 94.0 | 1128 | 2.7983 |
| 1.1694 | 95.0 | 1140 | 2.0372 |
| 1.1184 | 96.0 | 1152 | 2.7704 |
| 1.1852 | 97.0 | 1164 | 2.8402 |
| 1.2402 | 98.0 | 1176 | 2.2748 |
| 1.1182 | 99.0 | 1188 | 2.7973 |
| 1.2023 | 100.0 | 1200 | 2.1480 |
| 1.0637 | 101.0 | 1212 | 2.1987 |
| 1.1003 | 102.0 | 1224 | 1.9750 |
| 1.2729 | 103.0 | 1236 | 2.6881 |
| 1.0963 | 104.0 | 1248 | 2.5819 |
| 1.2034 | 105.0 | 1260 | 2.8611 |
| 1.038 | 106.0 | 1272 | 1.8322 |
| 1.3583 | 107.0 | 1284 | 2.7330 |
| 1.1453 | 108.0 | 1296 | 2.5139 |
| 1.1593 | 109.0 | 1308 | 2.4409 |
| 1.1126 | 110.0 | 1320 | 2.3118 |
| 0.9801 | 111.0 | 1332 | 2.1956 |
| 1.2605 | 112.0 | 1344 | 2.8087 |
| 1.1756 | 113.0 | 1356 | 2.1508 |
| 0.8898 | 114.0 | 1368 | 2.8882 |
| 1.1959 | 115.0 | 1380 | 2.6419 |
| 1.0536 | 116.0 | 1392 | 2.2053 |
| 1.1508 | 117.0 | 1404 | 2.4917 |
| 0.9824 | 118.0 | 1416 | 2.8271 |
| 1.2391 | 119.0 | 1428 | 2.0959 |
| 0.9495 | 120.0 | 1440 | 2.5855 |
| 0.9823 | 121.0 | 1452 | 2.3001 |
| 0.9818 | 122.0 | 1464 | 2.4058 |
| 1.0764 | 123.0 | 1476 | 2.7615 |
| 1.1002 | 124.0 | 1488 | 2.2705 |
| 0.9838 | 125.0 | 1500 | 2.4089 |
| 1.1747 | 126.0 | 1512 | 2.2487 |
| 0.9397 | 127.0 | 1524 | 2.3436 |
| 0.7915 | 128.0 | 1536 | 2.7810 |
| 0.8227 | 129.0 | 1548 | 2.9488 |
| 1.0162 | 130.0 | 1560 | 1.9826 |
| 1.038 | 131.0 | 1572 | 2.3104 |
| 0.7145 | 132.0 | 1584 | 3.1713 |
| 0.9299 | 133.0 | 1596 | 2.4383 |
| 1.1 | 134.0 | 1608 | 2.7588 |
| 0.7346 | 135.0 | 1620 | 2.4870 |
| 0.898 | 136.0 | 1632 | 2.3211 |
| 1.0406 | 137.0 | 1644 | 2.1006 |
| 0.7669 | 138.0 | 1656 | 2.6216 |
| 0.8182 | 139.0 | 1668 | 2.6548 |
| 0.9577 | 140.0 | 1680 | 3.0709 |
| 0.843 | 141.0 | 1692 | 2.0712 |
| 0.8871 | 142.0 | 1704 | 2.0269 |
| 0.8183 | 143.0 | 1716 | 2.1832 |
| 0.9048 | 144.0 | 1728 | 2.3581 |
| 0.8197 | 145.0 | 1740 | 2.5645 |
| 0.7477 | 146.0 | 1752 | 3.4650 |
| 0.8257 | 147.0 | 1764 | 3.0643 |
| 0.801 | 148.0 | 1776 | 2.6476 |
| 0.8802 | 149.0 | 1788 | 2.5711 |
| 0.7332 | 150.0 | 1800 | 2.7936 |
| 0.825 | 151.0 | 1812 | 2.9548 |
| 0.7226 | 152.0 | 1824 | 2.2194 |
| 0.6707 | 153.0 | 1836 | 2.0006 |
| 0.6401 | 154.0 | 1848 | 2.7826 |
| 0.9888 | 155.0 | 1860 | 2.1371 |
| 0.6399 | 156.0 | 1872 | 2.1082 |
| 0.7128 | 157.0 | 1884 | 2.7275 |
| 0.684 | 158.0 | 1896 | 2.0162 |
| 0.7906 | 159.0 | 1908 | 1.9985 |
| 0.8381 | 160.0 | 1920 | 2.6745 |
| 0.7233 | 161.0 | 1932 | 2.7703 |
| 0.6977 | 162.0 | 1944 | 2.2407 |
| 0.7948 | 163.0 | 1956 | 2.5955 |
| 0.7616 | 164.0 | 1968 | 2.3938 |
| 0.8808 | 165.0 | 1980 | 2.5147 |
| 0.8188 | 166.0 | 1992 | 1.6625 |
| 0.6083 | 167.0 | 2004 | 3.1102 |
| 0.7814 | 168.0 | 2016 | 2.7221 |
| 0.6402 | 169.0 | 2028 | 2.4840 |
| 0.7722 | 170.0 | 2040 | 2.2021 |
| 0.7887 | 171.0 | 2052 | 3.1279 |
| 0.7313 | 172.0 | 2064 | 2.1820 |
| 0.7924 | 173.0 | 2076 | 1.7631 |
| 0.6142 | 174.0 | 2088 | 2.7580 |
| 0.7562 | 175.0 | 2100 | 2.0954 |
| 0.5619 | 176.0 | 2112 | 2.3388 |
| 0.9217 | 177.0 | 2124 | 3.4578 |
| 0.6253 | 178.0 | 2136 | 1.9490 |
| 0.6385 | 179.0 | 2148 | 1.9926 |
| 0.7452 | 180.0 | 2160 | 3.1260 |
| 0.5797 | 181.0 | 2172 | 2.7739 |
| 0.6138 | 182.0 | 2184 | 2.8513 |
| 0.5669 | 183.0 | 2196 | 2.4326 |
| 0.6944 | 184.0 | 2208 | 2.7487 |
| 0.7057 | 185.0 | 2220 | 2.4420 |
| 0.8157 | 186.0 | 2232 | 2.8531 |
| 0.5743 | 187.0 | 2244 | 3.0470 |
| 0.595 | 188.0 | 2256 | 2.8035 |
| 0.7408 | 189.0 | 2268 | 2.7126 |
| 0.5912 | 190.0 | 2280 | 3.7428 |
| 0.5725 | 191.0 | 2292 | 2.3815 |
| 0.6521 | 192.0 | 2304 | 2.7721 |
| 0.7074 | 193.0 | 2316 | 2.5499 |
| 0.5764 | 194.0 | 2328 | 2.6066 |
| 0.5298 | 195.0 | 2340 | 2.2085 |
| 0.6197 | 196.0 | 2352 | 2.4815 |
| 0.4731 | 197.0 | 2364 | 2.8488 |
| 0.619 | 198.0 | 2376 | 3.2678 |
| 0.5954 | 199.0 | 2388 | 2.1428 |
| 0.5277 | 200.0 | 2400 | 2.7153 |
| 0.7886 | 201.0 | 2412 | 2.2156 |
| 0.512 | 202.0 | 2424 | 2.2840 |
| 0.55 | 203.0 | 2436 | 2.7672 |
| 0.4958 | 204.0 | 2448 | 1.6703 |
| 0.7151 | 205.0 | 2460 | 2.1373 |
| 0.5112 | 206.0 | 2472 | 2.7734 |
| 0.6594 | 207.0 | 2484 | 2.5554 |
| 0.4422 | 208.0 | 2496 | 1.8383 |
| 0.5405 | 209.0 | 2508 | 2.9803 |
| 0.555 | 210.0 | 2520 | 2.4756 |
| 0.605 | 211.0 | 2532 | 2.6883 |
| 0.5143 | 212.0 | 2544 | 3.2208 |
| 0.5458 | 213.0 | 2556 | 2.6816 |
| 0.5469 | 214.0 | 2568 | 3.0502 |
| 0.5425 | 215.0 | 2580 | 2.8781 |
| 0.4458 | 216.0 | 2592 | 2.8725 |
| 0.4986 | 217.0 | 2604 | 2.6287 |
| 0.8714 | 218.0 | 2616 | 3.2690 |
| 0.4996 | 219.0 | 2628 | 3.1879 |
| 0.4841 | 220.0 | 2640 | 3.0364 |
| 0.4745 | 221.0 | 2652 | 2.5914 |
| 0.4609 | 222.0 | 2664 | 2.6385 |
| 0.4058 | 223.0 | 2676 | 2.9445 |
| 0.4653 | 224.0 | 2688 | 2.6551 |
| 0.4246 | 225.0 | 2700 | 3.2083 |
| 0.6041 | 226.0 | 2712 | 3.2518 |
| 0.6409 | 227.0 | 2724 | 2.2092 |
| 0.5091 | 228.0 | 2736 | 2.6145 |
| 0.5917 | 229.0 | 2748 | 2.6990 |
| 0.533 | 230.0 | 2760 | 2.9442 |
| 0.4637 | 231.0 | 2772 | 2.5754 |
| 0.5876 | 232.0 | 2784 | 3.3697 |
| 0.5068 | 233.0 | 2796 | 2.1599 |
| 0.5561 | 234.0 | 2808 | 2.4411 |
| 0.3852 | 235.0 | 2820 | 2.1660 |
| 0.5038 | 236.0 | 2832 | 2.5145 |
| 0.4498 | 237.0 | 2844 | 2.9055 |
| 0.3932 | 238.0 | 2856 | 2.0346 |
| 0.4701 | 239.0 | 2868 | 2.4029 |
| 0.554 | 240.0 | 2880 | 3.2398 |
| 0.4836 | 241.0 | 2892 | 2.6803 |
| 0.4752 | 242.0 | 2904 | 2.5135 |
| 0.4507 | 243.0 | 2916 | 1.9342 |
| 0.316 | 244.0 | 2928 | 3.2635 |
| 0.4807 | 245.0 | 2940 | 2.6797 |
| 0.5369 | 246.0 | 2952 | 3.3722 |
| 0.4434 | 247.0 | 2964 | 2.9754 |
| 0.5113 | 248.0 | 2976 | 2.7636 |
| 0.4765 | 249.0 | 2988 | 2.5710 |
| 0.517 | 250.0 | 3000 | 2.6230 |
| 0.4156 | 251.0 | 3012 | 2.7318 |
| 0.4041 | 252.0 | 3024 | 2.9123 |
| 0.4076 | 253.0 | 3036 | 2.5130 |
| 0.4224 | 254.0 | 3048 | 2.4242 |
| 0.464 | 255.0 | 3060 | 2.4092 |
| 0.4631 | 256.0 | 3072 | 2.8105 |
| 0.3792 | 257.0 | 3084 | 2.4955 |
| 0.4282 | 258.0 | 3096 | 2.6907 |
| 0.5803 | 259.0 | 3108 | 2.8609 |
| 0.5043 | 260.0 | 3120 | 3.0090 |
| 0.4026 | 261.0 | 3132 | 3.1805 |
| 0.5926 | 262.0 | 3144 | 2.6541 |
| 0.4021 | 263.0 | 3156 | 2.2630 |
| 0.462 | 264.0 | 3168 | 3.3067 |
| 0.4701 | 265.0 | 3180 | 2.9675 |
| 0.4706 | 266.0 | 3192 | 3.2344 |
| 0.5196 | 267.0 | 3204 | 2.7747 |
| 0.491 | 268.0 | 3216 | 2.5085 |
| 0.4152 | 269.0 | 3228 | 2.5357 |
| 0.4402 | 270.0 | 3240 | 2.6906 |
| 0.4152 | 271.0 | 3252 | 3.1434 |
| 0.4487 | 272.0 | 3264 | 3.2802 |
| 0.3956 | 273.0 | 3276 | 3.3766 |
| 0.3623 | 274.0 | 3288 | 2.8253 |
| 0.3994 | 275.0 | 3300 | 2.2845 |
| 0.4035 | 276.0 | 3312 | 2.5307 |
| 0.3815 | 277.0 | 3324 | 3.3093 |
| 0.4519 | 278.0 | 3336 | 2.2202 |
| 0.3118 | 279.0 | 3348 | 2.7818 |
| 0.5191 | 280.0 | 3360 | 2.3814 |
| 0.3194 | 281.0 | 3372 | 2.3144 |
| 0.5671 | 282.0 | 3384 | 3.4033 |
| 0.4217 | 283.0 | 3396 | 1.9681 |
| 0.3587 | 284.0 | 3408 | 2.9843 |
| 0.3914 | 285.0 | 3420 | 3.1635 |
| 0.3667 | 286.0 | 3432 | 2.7571 |
| 0.3781 | 287.0 | 3444 | 2.5881 |
| 0.3868 | 288.0 | 3456 | 1.8389 |
| 0.4172 | 289.0 | 3468 | 2.6809 |
| 0.5089 | 290.0 | 3480 | 2.4618 |
| 0.3181 | 291.0 | 3492 | 2.1054 |
| 0.3276 | 292.0 | 3504 | 2.9944 |
| 0.4051 | 293.0 | 3516 | 2.8520 |
| 0.3435 | 294.0 | 3528 | 3.0985 |
| 0.3241 | 295.0 | 3540 | 2.6323 |
| 0.2532 | 296.0 | 3552 | 2.9059 |
| 0.2732 | 297.0 | 3564 | 2.5619 |
| 0.4181 | 298.0 | 3576 | 2.5687 |
| 0.3725 | 299.0 | 3588 | 3.3169 |
| 0.3949 | 300.0 | 3600 | 2.0620 |
| 0.4684 | 301.0 | 3612 | 2.3878 |
| 0.4122 | 302.0 | 3624 | 3.4867 |
| 0.3338 | 303.0 | 3636 | 3.0578 |
| 0.3546 | 304.0 | 3648 | 3.3269 |
| 0.3833 | 305.0 | 3660 | 2.2698 |
| 0.2897 | 306.0 | 3672 | 2.9015 |
| 0.3912 | 307.0 | 3684 | 3.4569 |
| 0.3951 | 308.0 | 3696 | 2.5743 |
| 0.3086 | 309.0 | 3708 | 2.2319 |
| 0.481 | 310.0 | 3720 | 1.7550 |
| 0.3579 | 311.0 | 3732 | 2.4885 |
| 0.4271 | 312.0 | 3744 | 3.2511 |
| 0.3864 | 313.0 | 3756 | 2.4219 |
| 0.3008 | 314.0 | 3768 | 3.2937 |
| 0.3279 | 315.0 | 3780 | 2.9278 |
| 0.3845 | 316.0 | 3792 | 3.7233 |
| 0.3158 | 317.0 | 3804 | 2.1792 |
| 0.3906 | 318.0 | 3816 | 2.3364 |
| 0.3159 | 319.0 | 3828 | 3.7451 |
| 0.2773 | 320.0 | 3840 | 2.6364 |
| 0.2867 | 321.0 | 3852 | 2.6699 |
| 0.3253 | 322.0 | 3864 | 2.7289 |
| 0.4208 | 323.0 | 3876 | 2.5447 |
| 0.4343 | 324.0 | 3888 | 3.1167 |
| 0.3126 | 325.0 | 3900 | 3.4110 |
| 0.2433 | 326.0 | 3912 | 2.1796 |
| 0.2964 | 327.0 | 3924 | 2.1766 |
| 0.4289 | 328.0 | 3936 | 3.5455 |
| 0.3391 | 329.0 | 3948 | 2.5795 |
| 0.3505 | 330.0 | 3960 | 2.3377 |
| 0.4084 | 331.0 | 3972 | 2.9658 |
| 0.4365 | 332.0 | 3984 | 2.5202 |
| 0.3573 | 333.0 | 3996 | 3.2768 |
| 0.2813 | 334.0 | 4008 | 2.7073 |
| 0.2531 | 335.0 | 4020 | 2.3548 |
| 0.2535 | 336.0 | 4032 | 2.8820 |
| 0.3038 | 337.0 | 4044 | 2.6777 |
| 0.2861 | 338.0 | 4056 | 2.8631 |
| 0.2717 | 339.0 | 4068 | 2.7445 |
| 0.3495 | 340.0 | 4080 | 2.9722 |
| 0.2775 | 341.0 | 4092 | 3.1350 |
| 0.3661 | 342.0 | 4104 | 2.7601 |
| 0.348 | 343.0 | 4116 | 2.6642 |
| 0.3556 | 344.0 | 4128 | 1.9807 |
| 0.3072 | 345.0 | 4140 | 2.6037 |
| 0.3114 | 346.0 | 4152 | 2.7645 |
| 0.3527 | 347.0 | 4164 | 2.8360 |
| 0.2903 | 348.0 | 4176 | 2.0667 |
| 0.2449 | 349.0 | 4188 | 2.3573 |
| 0.2089 | 350.0 | 4200 | 2.6189 |
| 0.3894 | 351.0 | 4212 | 2.5689 |
| 0.3061 | 352.0 | 4224 | 2.7638 |
| 0.3221 | 353.0 | 4236 | 2.4668 |
| 0.2434 | 354.0 | 4248 | 2.3994 |
| 0.1777 | 355.0 | 4260 | 2.6408 |
| 0.3809 | 356.0 | 4272 | 2.9841 |
| 0.3237 | 357.0 | 4284 | 2.7111 |
| 0.1947 | 358.0 | 4296 | 3.5881 |
| 0.3112 | 359.0 | 4308 | 3.6076 |
| 0.299 | 360.0 | 4320 | 2.5547 |
| 0.354 | 361.0 | 4332 | 1.9077 |
| 0.2733 | 362.0 | 4344 | 3.1406 |
| 0.4962 | 363.0 | 4356 | 2.3770 |
| 0.3272 | 364.0 | 4368 | 3.0437 |
| 0.2858 | 365.0 | 4380 | 2.7978 |
| 0.3685 | 366.0 | 4392 | 2.3725 |
| 0.2707 | 367.0 | 4404 | 2.4587 |
| 0.3137 | 368.0 | 4416 | 2.1862 |
| 0.2781 | 369.0 | 4428 | 1.8312 |
| 0.2658 | 370.0 | 4440 | 2.4720 |
| 0.3014 | 371.0 | 4452 | 2.3532 |
| 0.24 | 372.0 | 4464 | 3.4097 |
| 0.2413 | 373.0 | 4476 | 3.2338 |
| 0.3055 | 374.0 | 4488 | 3.4269 |
| 0.3781 | 375.0 | 4500 | 2.8758 |
| 0.2224 | 376.0 | 4512 | 2.2171 |
| 0.2463 | 377.0 | 4524 | 3.2768 |
| 0.4141 | 378.0 | 4536 | 2.9136 |
| 0.2102 | 379.0 | 4548 | 2.8798 |
| 0.2164 | 380.0 | 4560 | 2.5821 |
| 0.2742 | 381.0 | 4572 | 2.0458 |
| 0.2007 | 382.0 | 4584 | 3.8119 |
| 0.2494 | 383.0 | 4596 | 3.0835 |
| 0.2533 | 384.0 | 4608 | 2.5633 |
| 0.3137 | 385.0 | 4620 | 2.2415 |
| 0.2686 | 386.0 | 4632 | 2.2489 |
| 0.2425 | 387.0 | 4644 | 2.1750 |
| 0.2561 | 388.0 | 4656 | 2.8167 |
| 0.3485 | 389.0 | 4668 | 3.4358 |
| 0.2746 | 390.0 | 4680 | 2.3380 |
| 0.3538 | 391.0 | 4692 | 2.9940 |
| 0.3989 | 392.0 | 4704 | 2.7560 |
| 0.2414 | 393.0 | 4716 | 3.4802 |
| 0.2888 | 394.0 | 4728 | 2.5955 |
| 0.3162 | 395.0 | 4740 | 2.3060 |
| 0.2435 | 396.0 | 4752 | 3.8333 |
| 0.2796 | 397.0 | 4764 | 2.1767 |
| 0.2588 | 398.0 | 4776 | 2.6988 |
| 0.209 | 399.0 | 4788 | 2.4999 |
| 0.2602 | 400.0 | 4800 | 2.6636 |
| 0.2114 | 401.0 | 4812 | 3.2272 |
| 0.2226 | 402.0 | 4824 | 2.5983 |
| 0.1681 | 403.0 | 4836 | 2.3867 |
| 0.2025 | 404.0 | 4848 | 3.0062 |
| 0.2769 | 405.0 | 4860 | 2.9767 |
| 0.3267 | 406.0 | 4872 | 2.6960 |
| 0.252 | 407.0 | 4884 | 2.6078 |
| 0.257 | 408.0 | 4896 | 2.1594 |
| 0.306 | 409.0 | 4908 | 3.3544 |
| 0.2329 | 410.0 | 4920 | 2.6371 |
| 0.3732 | 411.0 | 4932 | 2.8729 |
| 0.3233 | 412.0 | 4944 | 3.6352 |
| 0.2822 | 413.0 | 4956 | 3.0374 |
| 0.2796 | 414.0 | 4968 | 2.8686 |
| 0.2606 | 415.0 | 4980 | 2.8761 |
| 0.2048 | 416.0 | 4992 | 2.5680 |
| 0.2088 | 417.0 | 5004 | 2.4540 |
| 0.2301 | 418.0 | 5016 | 2.4787 |
| 0.1594 | 419.0 | 5028 | 2.9355 |
| 0.3399 | 420.0 | 5040 | 2.8312 |
| 0.2322 | 421.0 | 5052 | 1.9368 |
| 0.2066 | 422.0 | 5064 | 3.2728 |
| 0.2254 | 423.0 | 5076 | 3.0105 |
| 0.1818 | 424.0 | 5088 | 2.8390 |
| 0.3191 | 425.0 | 5100 | 2.9756 |
| 0.1961 | 426.0 | 5112 | 3.4510 |
| 0.2014 | 427.0 | 5124 | 3.4363 |
| 0.184 | 428.0 | 5136 | 3.1381 |
| 0.2722 | 429.0 | 5148 | 3.4780 |
| 0.2607 | 430.0 | 5160 | 2.9650 |
| 0.3515 | 431.0 | 5172 | 2.8692 |
| 0.2011 | 432.0 | 5184 | 2.7564 |
| 0.2555 | 433.0 | 5196 | 3.5317 |
| 0.2802 | 434.0 | 5208 | 1.9900 |
| 0.227 | 435.0 | 5220 | 3.3691 |
| 0.2833 | 436.0 | 5232 | 3.0117 |
| 0.2368 | 437.0 | 5244 | 2.6631 |
| 0.2159 | 438.0 | 5256 | 2.3868 |
| 0.2139 | 439.0 | 5268 | 2.8382 |
| 0.2739 | 440.0 | 5280 | 2.9267 |
| 0.234 | 441.0 | 5292 | 2.9501 |
| 0.2315 | 442.0 | 5304 | 3.3317 |
| 0.2538 | 443.0 | 5316 | 3.1168 |
| 0.2535 | 444.0 | 5328 | 2.8070 |
| 0.2711 | 445.0 | 5340 | 2.0824 |
| 0.2963 | 446.0 | 5352 | 1.7310 |
| 0.2559 | 447.0 | 5364 | 3.3832 |
| 0.3184 | 448.0 | 5376 | 2.6107 |
| 0.2383 | 449.0 | 5388 | 2.3923 |
| 0.4352 | 450.0 | 5400 | 3.1145 |
| 0.1892 | 451.0 | 5412 | 3.0184 |
| 0.1899 | 452.0 | 5424 | 2.9772 |
| 0.3766 | 453.0 | 5436 | 3.3416 |
| 0.211 | 454.0 | 5448 | 2.9356 |
| 0.2387 | 455.0 | 5460 | 2.5284 |
| 0.2322 | 456.0 | 5472 | 2.8084 |
| 0.2003 | 457.0 | 5484 | 3.0678 |
| 0.2604 | 458.0 | 5496 | 2.4424 |
| 0.2614 | 459.0 | 5508 | 2.6966 |
| 0.2026 | 460.0 | 5520 | 2.7806 |
| 0.4175 | 461.0 | 5532 | 2.9597 |
| 0.1676 | 462.0 | 5544 | 2.8175 |
| 0.2646 | 463.0 | 5556 | 3.1038 |
| 0.2514 | 464.0 | 5568 | 2.2243 |
| 0.1483 | 465.0 | 5580 | 2.6416 |
| 0.233 | 466.0 | 5592 | 3.0405 |
| 0.2788 | 467.0 | 5604 | 2.1676 |
| 0.2339 | 468.0 | 5616 | 3.1575 |
| 0.2735 | 469.0 | 5628 | 1.7335 |
| 0.1639 | 470.0 | 5640 | 2.7019 |
| 0.24 | 471.0 | 5652 | 2.2920 |
| 0.2341 | 472.0 | 5664 | 2.8358 |
| 0.1978 | 473.0 | 5676 | 2.9339 |
| 0.2517 | 474.0 | 5688 | 2.4914 |
| 0.188 | 475.0 | 5700 | 2.2767 |
| 0.1138 | 476.0 | 5712 | 2.3833 |
| 0.1809 | 477.0 | 5724 | 2.6821 |
| 0.3134 | 478.0 | 5736 | 2.1710 |
| 0.1848 | 479.0 | 5748 | 3.3586 |
| 0.252 | 480.0 | 5760 | 2.7309 |
| 0.193 | 481.0 | 5772 | 2.8318 |
| 0.2284 | 482.0 | 5784 | 3.4643 |
| 0.2058 | 483.0 | 5796 | 4.2388 |
| 0.2319 | 484.0 | 5808 | 2.1872 |
| 0.1566 | 485.0 | 5820 | 2.3735 |
| 0.29 | 486.0 | 5832 | 3.4093 |
| 0.125 | 487.0 | 5844 | 3.3786 |
| 0.2628 | 488.0 | 5856 | 2.4406 |
| 0.2609 | 489.0 | 5868 | 3.3617 |
| 0.2055 | 490.0 | 5880 | 3.1843 |
| 0.1713 | 491.0 | 5892 | 2.1698 |
| 0.2562 | 492.0 | 5904 | 3.0665 |
| 0.3366 | 493.0 | 5916 | 3.2277 |
| 0.2359 | 494.0 | 5928 | 2.7013 |
| 0.191 | 495.0 | 5940 | 3.4616 |
| 0.175 | 496.0 | 5952 | 2.5117 |
| 0.1695 | 497.0 | 5964 | 2.3203 |
| 0.218 | 498.0 | 5976 | 2.4493 |
| 0.1953 | 499.0 | 5988 | 2.6769 |
| 0.2478 | 500.0 | 6000 | 3.1759 |
| 0.1548 | 501.0 | 6012 | 2.8604 |
| 0.123 | 502.0 | 6024 | 2.7744 |
| 0.2271 | 503.0 | 6036 | 2.9987 |
| 0.2384 | 504.0 | 6048 | 2.7653 |
| 0.2473 | 505.0 | 6060 | 3.1049 |
| 0.1937 | 506.0 | 6072 | 2.6676 |
| 0.138 | 507.0 | 6084 | 2.2486 |
| 0.2681 | 508.0 | 6096 | 3.1809 |
| 0.2182 | 509.0 | 6108 | 2.5258 |
| 0.1736 | 510.0 | 6120 | 2.2174 |
| 0.2238 | 511.0 | 6132 | 2.9662 |
| 0.189 | 512.0 | 6144 | 2.3124 |
| 0.175 | 513.0 | 6156 | 3.6426 |
| 0.2189 | 514.0 | 6168 | 2.4628 |
| 0.1918 | 515.0 | 6180 | 3.3473 |
| 0.1303 | 516.0 | 6192 | 2.9400 |
| 0.1624 | 517.0 | 6204 | 3.1941 |
| 0.134 | 518.0 | 6216 | 2.9962 |
| 0.2447 | 519.0 | 6228 | 3.0082 |
| 0.1872 | 520.0 | 6240 | 3.9689 |
| 0.1787 | 521.0 | 6252 | 3.1461 |
| 0.3039 | 522.0 | 6264 | 3.2696 |
| 0.1757 | 523.0 | 6276 | 3.0340 |
| 0.3539 | 524.0 | 6288 | 3.3542 |
| 0.2109 | 525.0 | 6300 | 2.7986 |
| 0.1743 | 526.0 | 6312 | 3.1874 |
| 0.1065 | 527.0 | 6324 | 2.9643 |
| 0.2941 | 528.0 | 6336 | 2.6260 |
| 0.2231 | 529.0 | 6348 | 2.8250 |
| 0.1307 | 530.0 | 6360 | 3.2949 |
| 0.1979 | 531.0 | 6372 | 1.8269 |
| 0.2293 | 532.0 | 6384 | 2.2357 |
| 0.2171 | 533.0 | 6396 | 2.5498 |
| 0.1975 | 534.0 | 6408 | 2.7011 |
| 0.1556 | 535.0 | 6420 | 3.5648 |
| 0.1234 | 536.0 | 6432 | 2.7632 |
| 0.2156 | 537.0 | 6444 | 2.3060 |
| 0.1402 | 538.0 | 6456 | 3.1421 |
| 0.1921 | 539.0 | 6468 | 2.3200 |
| 0.1237 | 540.0 | 6480 | 2.7612 |
| 0.1942 | 541.0 | 6492 | 2.5866 |
| 0.1648 | 542.0 | 6504 | 2.4930 |
| 0.1369 | 543.0 | 6516 | 2.9427 |
| 0.1811 | 544.0 | 6528 | 2.9692 |
| 0.2382 | 545.0 | 6540 | 3.4092 |
| 0.2001 | 546.0 | 6552 | 3.2784 |
| 0.2195 | 547.0 | 6564 | 2.8198 |
| 0.1785 | 548.0 | 6576 | 2.5721 |
| 0.2214 | 549.0 | 6588 | 3.1468 |
| 0.1685 | 550.0 | 6600 | 2.8141 |
| 0.1596 | 551.0 | 6612 | 3.1457 |
| 0.0945 | 552.0 | 6624 | 2.6508 |
| 0.1595 | 553.0 | 6636 | 2.8443 |
| 0.1805 | 554.0 | 6648 | 2.4984 |
| 0.1588 | 555.0 | 6660 | 2.9758 |
| 0.2026 | 556.0 | 6672 | 3.3614 |
| 0.1351 | 557.0 | 6684 | 2.5065 |
| 0.2395 | 558.0 | 6696 | 2.5261 |
| 0.2089 | 559.0 | 6708 | 3.3972 |
| 0.2265 | 560.0 | 6720 | 3.0095 |
| 0.2027 | 561.0 | 6732 | 3.2904 |
| 0.2691 | 562.0 | 6744 | 2.5727 |
| 0.1563 | 563.0 | 6756 | 2.0994 |
| 0.2537 | 564.0 | 6768 | 3.2397 |
| 0.1094 | 565.0 | 6780 | 2.9758 |
| 0.1523 | 566.0 | 6792 | 2.3577 |
| 0.2535 | 567.0 | 6804 | 2.6197 |
| 0.1444 | 568.0 | 6816 | 1.9130 |
| 0.1933 | 569.0 | 6828 | 2.3576 |
| 0.1368 | 570.0 | 6840 | 3.3412 |
| 0.1723 | 571.0 | 6852 | 3.5156 |
| 0.1384 | 572.0 | 6864 | 2.9785 |
| 0.1905 | 573.0 | 6876 | 3.2326 |
| 0.1495 | 574.0 | 6888 | 2.9111 |
| 0.1512 | 575.0 | 6900 | 2.1727 |
| 0.227 | 576.0 | 6912 | 2.5159 |
| 0.2271 | 577.0 | 6924 | 2.7866 |
| 0.2457 | 578.0 | 6936 | 3.2068 |
| 0.236 | 579.0 | 6948 | 2.8856 |
| 0.1579 | 580.0 | 6960 | 2.3365 |
| 0.1203 | 581.0 | 6972 | 2.3652 |
| 0.1422 | 582.0 | 6984 | 2.8213 |
| 0.1673 | 583.0 | 6996 | 2.5507 |
| 0.204 | 584.0 | 7008 | 4.0226 |
| 0.1796 | 585.0 | 7020 | 3.1953 |
| 0.163 | 586.0 | 7032 | 2.5787 |
| 0.2166 | 587.0 | 7044 | 3.8404 |
| 0.1299 | 588.0 | 7056 | 2.3668 |
| 0.2301 | 589.0 | 7068 | 2.7562 |
| 0.1506 | 590.0 | 7080 | 2.9342 |
| 0.1372 | 591.0 | 7092 | 2.8316 |
| 0.1959 | 592.0 | 7104 | 2.2761 |
| 0.1925 | 593.0 | 7116 | 2.9083 |
| 0.1885 | 594.0 | 7128 | 2.9052 |
| 0.2052 | 595.0 | 7140 | 2.9409 |
| 0.1368 | 596.0 | 7152 | 3.2571 |
| 0.1455 | 597.0 | 7164 | 2.8765 |
| 0.1398 | 598.0 | 7176 | 2.2425 |
| 0.1764 | 599.0 | 7188 | 2.6299 |
| 0.1791 | 600.0 | 7200 | 3.4030 |
| 0.1057 | 601.0 | 7212 | 3.2505 |
| 0.1947 | 602.0 | 7224 | 2.6440 |
| 0.1678 | 603.0 | 7236 | 3.3419 |
| 0.1629 | 604.0 | 7248 | 3.1957 |
| 0.1348 | 605.0 | 7260 | 3.1234 |
| 0.2332 | 606.0 | 7272 | 2.9425 |
| 0.1367 | 607.0 | 7284 | 3.8721 |
| 0.1434 | 608.0 | 7296 | 3.0653 |
| 0.2092 | 609.0 | 7308 | 3.1552 |
| 0.1765 | 610.0 | 7320 | 2.6715 |
| 0.1773 | 611.0 | 7332 | 2.8437 |
| 0.1427 | 612.0 | 7344 | 3.1257 |
| 0.2383 | 613.0 | 7356 | 3.5687 |
| 0.1376 | 614.0 | 7368 | 3.0010 |
| 0.1388 | 615.0 | 7380 | 2.7436 |
| 0.2484 | 616.0 | 7392 | 3.2465 |
| 0.146 | 617.0 | 7404 | 3.4019 |
| 0.1313 | 618.0 | 7416 | 2.5044 |
| 0.2028 | 619.0 | 7428 | 3.2449 |
| 0.1471 | 620.0 | 7440 | 3.1716 |
| 0.1755 | 621.0 | 7452 | 2.4465 |
| 0.16 | 622.0 | 7464 | 2.8572 |
| 0.108 | 623.0 | 7476 | 3.4424 |
| 0.0824 | 624.0 | 7488 | 2.6112 |
| 0.1133 | 625.0 | 7500 | 2.5730 |
| 0.1809 | 626.0 | 7512 | 1.9670 |
| 0.2606 | 627.0 | 7524 | 2.7736 |
| 0.2001 | 628.0 | 7536 | 3.1865 |
| 0.1912 | 629.0 | 7548 | 2.9717 |
| 0.1525 | 630.0 | 7560 | 2.8429 |
| 0.306 | 631.0 | 7572 | 2.6320 |
| 0.1322 | 632.0 | 7584 | 2.8373 |
| 0.1782 | 633.0 | 7596 | 2.7157 |
| 0.095 | 634.0 | 7608 | 3.2528 |
| 0.1463 | 635.0 | 7620 | 2.6568 |
| 0.184 | 636.0 | 7632 | 2.2466 |
| 0.2132 | 637.0 | 7644 | 3.4883 |
| 0.1007 | 638.0 | 7656 | 3.1021 |
| 0.1686 | 639.0 | 7668 | 2.4326 |
| 0.1359 | 640.0 | 7680 | 2.2554 |
| 0.1535 | 641.0 | 7692 | 2.8495 |
| 0.2158 | 642.0 | 7704 | 3.0866 |
| 0.1403 | 643.0 | 7716 | 2.8983 |
| 0.1092 | 644.0 | 7728 | 3.5183 |
| 0.2218 | 645.0 | 7740 | 2.9190 |
| 0.1468 | 646.0 | 7752 | 3.7689 |
| 0.2291 | 647.0 | 7764 | 3.4550 |
| 0.1616 | 648.0 | 7776 | 2.3301 |
| 0.2146 | 649.0 | 7788 | 4.2045 |
| 0.1113 | 650.0 | 7800 | 3.0168 |
| 0.1785 | 651.0 | 7812 | 2.9931 |
| 0.1535 | 652.0 | 7824 | 3.4046 |
| 0.149 | 653.0 | 7836 | 2.5526 |
| 0.1351 | 654.0 | 7848 | 2.1684 |
| 0.2564 | 655.0 | 7860 | 3.0749 |
| 0.0749 | 656.0 | 7872 | 2.8874 |
| 0.1719 | 657.0 | 7884 | 3.1585 |
| 0.1783 | 658.0 | 7896 | 4.2177 |
| 0.1632 | 659.0 | 7908 | 2.5370 |
| 0.1635 | 660.0 | 7920 | 2.7765 |
| 0.1414 | 661.0 | 7932 | 4.3148 |
| 0.2072 | 662.0 | 7944 | 3.1080 |
| 0.3758 | 663.0 | 7956 | 2.7835 |
| 0.1474 | 664.0 | 7968 | 2.7685 |
| 0.2225 | 665.0 | 7980 | 2.2965 |
| 0.2438 | 666.0 | 7992 | 2.8599 |
| 0.1872 | 667.0 | 8004 | 2.7234 |
| 0.2879 | 668.0 | 8016 | 3.1187 |
| 0.1117 | 669.0 | 8028 | 3.8094 |
| 0.0942 | 670.0 | 8040 | 4.4307 |
| 0.1219 | 671.0 | 8052 | 2.6304 |
| 0.1234 | 672.0 | 8064 | 3.0443 |
| 0.1221 | 673.0 | 8076 | 3.3849 |
| 0.1317 | 674.0 | 8088 | 2.5523 |
| 0.1091 | 675.0 | 8100 | 2.6704 |
| 0.1677 | 676.0 | 8112 | 3.3960 |
| 0.124 | 677.0 | 8124 | 2.1910 |
| 0.1508 | 678.0 | 8136 | 2.5585 |
| 0.1277 | 679.0 | 8148 | 3.2449 |
| 0.1208 | 680.0 | 8160 | 3.0315 |
| 0.1796 | 681.0 | 8172 | 2.3906 |
| 0.2055 | 682.0 | 8184 | 2.8063 |
| 0.1042 | 683.0 | 8196 | 2.7491 |
| 0.1897 | 684.0 | 8208 | 2.9381 |
| 0.138 | 685.0 | 8220 | 2.8710 |
| 0.1562 | 686.0 | 8232 | 1.9945 |
| 0.1091 | 687.0 | 8244 | 2.7079 |
| 0.1616 | 688.0 | 8256 | 3.3086 |
| 0.1699 | 689.0 | 8268 | 3.0746 |
| 0.2412 | 690.0 | 8280 | 2.2330 |
| 0.157 | 691.0 | 8292 | 3.0135 |
| 0.1263 | 692.0 | 8304 | 3.1212 |
| 0.1375 | 693.0 | 8316 | 1.8782 |
| 0.1204 | 694.0 | 8328 | 2.9291 |
| 0.1829 | 695.0 | 8340 | 2.5690 |
| 0.1539 | 696.0 | 8352 | 2.5749 |
| 0.1339 | 697.0 | 8364 | 3.0899 |
| 0.1463 | 698.0 | 8376 | 2.5024 |
| 0.1767 | 699.0 | 8388 | 2.5890 |
| 0.1392 | 700.0 | 8400 | 1.6672 |
| 0.1354 | 701.0 | 8412 | 3.1415 |
| 0.1467 | 702.0 | 8424 | 3.1370 |
| 0.2547 | 703.0 | 8436 | 2.5094 |
| 0.1116 | 704.0 | 8448 | 2.2467 |
| 0.0987 | 705.0 | 8460 | 3.2307 |
| 0.1811 | 706.0 | 8472 | 2.7363 |
| 0.1252 | 707.0 | 8484 | 2.4490 |
| 0.1613 | 708.0 | 8496 | 2.3867 |
| 0.2282 | 709.0 | 8508 | 3.0490 |
| 0.1651 | 710.0 | 8520 | 3.1520 |
| 0.1016 | 711.0 | 8532 | 2.7703 |
| 0.2515 | 712.0 | 8544 | 2.4811 |
| 0.1014 | 713.0 | 8556 | 3.7300 |
| 0.103 | 714.0 | 8568 | 2.8680 |
| 0.1714 | 715.0 | 8580 | 3.8285 |
| 0.1638 | 716.0 | 8592 | 2.5344 |
| 0.14 | 717.0 | 8604 | 3.8581 |
| 0.1202 | 718.0 | 8616 | 2.4095 |
| 0.0691 | 719.0 | 8628 | 2.9710 |
| 0.1176 | 720.0 | 8640 | 3.0506 |
| 0.2005 | 721.0 | 8652 | 2.7418 |
| 0.1719 | 722.0 | 8664 | 2.7388 |
| 0.1509 | 723.0 | 8676 | 2.5713 |
| 0.1113 | 724.0 | 8688 | 2.9053 |
| 0.2501 | 725.0 | 8700 | 2.7703 |
| 0.1192 | 726.0 | 8712 | 3.5875 |
| 0.1619 | 727.0 | 8724 | 3.0704 |
| 0.1421 | 728.0 | 8736 | 2.5629 |
| 0.164 | 729.0 | 8748 | 2.4980 |
| 0.1753 | 730.0 | 8760 | 2.7749 |
| 0.159 | 731.0 | 8772 | 3.8322 |
| 0.1929 | 732.0 | 8784 | 3.1355 |
| 0.088 | 733.0 | 8796 | 2.3649 |
| 0.1349 | 734.0 | 8808 | 2.2229 |
| 0.1093 | 735.0 | 8820 | 2.4979 |
| 0.1338 | 736.0 | 8832 | 3.2253 |
| 0.1794 | 737.0 | 8844 | 2.9326 |
| 0.0948 | 738.0 | 8856 | 2.9917 |
| 0.1341 | 739.0 | 8868 | 3.6675 |
| 0.1019 | 740.0 | 8880 | 3.4145 |
| 0.1265 | 741.0 | 8892 | 2.4996 |
| 0.1688 | 742.0 | 8904 | 2.9395 |
| 0.0829 | 743.0 | 8916 | 3.5850 |
| 0.0993 | 744.0 | 8928 | 3.2900 |
| 0.2241 | 745.0 | 8940 | 3.2025 |
| 0.1235 | 746.0 | 8952 | 2.2814 |
| 0.0937 | 747.0 | 8964 | 3.3185 |
| 0.0936 | 748.0 | 8976 | 3.4046 |
| 0.1633 | 749.0 | 8988 | 2.9694 |
| 0.1328 | 750.0 | 9000 | 3.2772 |
| 0.1168 | 751.0 | 9012 | 2.7732 |
| 0.2409 | 752.0 | 9024 | 3.3763 |
| 0.1145 | 753.0 | 9036 | 2.7232 |
| 0.1384 | 754.0 | 9048 | 3.5289 |
| 0.1326 | 755.0 | 9060 | 3.1250 |
| 0.1124 | 756.0 | 9072 | 3.2928 |
| 0.1197 | 757.0 | 9084 | 2.7365 |
| 0.1359 | 758.0 | 9096 | 2.3043 |
| 0.1031 | 759.0 | 9108 | 2.6293 |
| 0.1434 | 760.0 | 9120 | 2.7771 |
| 0.1009 | 761.0 | 9132 | 2.9574 |
| 0.1217 | 762.0 | 9144 | 3.5124 |
| 0.1017 | 763.0 | 9156 | 3.5922 |
| 0.1236 | 764.0 | 9168 | 2.2188 |
| 0.1174 | 765.0 | 9180 | 2.9054 |
| 0.1797 | 766.0 | 9192 | 2.5098 |
| 0.0971 | 767.0 | 9204 | 2.2203 |
| 0.1043 | 768.0 | 9216 | 2.8536 |
| 0.1464 | 769.0 | 9228 | 2.6191 |
| 0.195 | 770.0 | 9240 | 2.2198 |
| 0.1603 | 771.0 | 9252 | 2.8702 |
| 0.1514 | 772.0 | 9264 | 2.6832 |
| 0.1363 | 773.0 | 9276 | 3.0211 |
| 0.1263 | 774.0 | 9288 | 2.4905 |
| 0.1048 | 775.0 | 9300 | 3.0469 |
| 0.1175 | 776.0 | 9312 | 3.0265 |
| 0.1595 | 777.0 | 9324 | 2.1823 |
| 0.1243 | 778.0 | 9336 | 2.5649 |
| 0.1825 | 779.0 | 9348 | 2.8523 |
| 0.1697 | 780.0 | 9360 | 3.3646 |
| 0.1228 | 781.0 | 9372 | 2.2108 |
| 0.0893 | 782.0 | 9384 | 3.4784 |
| 0.1361 | 783.0 | 9396 | 3.4523 |
| 0.0953 | 784.0 | 9408 | 2.5469 |
| 0.1732 | 785.0 | 9420 | 3.2701 |
| 0.113 | 786.0 | 9432 | 3.4206 |
| 0.1303 | 787.0 | 9444 | 2.7898 |
| 0.2207 | 788.0 | 9456 | 3.4153 |
| 0.1762 | 789.0 | 9468 | 3.4267 |
| 0.1293 | 790.0 | 9480 | 3.6637 |
| 0.0805 | 791.0 | 9492 | 3.1007 |
| 0.2172 | 792.0 | 9504 | 2.6548 |
| 0.0886 | 793.0 | 9516 | 2.5632 |
| 0.2214 | 794.0 | 9528 | 2.8648 |
| 0.1454 | 795.0 | 9540 | 2.2529 |
| 0.1623 | 796.0 | 9552 | 2.5046 |
| 0.1443 | 797.0 | 9564 | 3.6918 |
| 0.0777 | 798.0 | 9576 | 2.4575 |
| 0.1109 | 799.0 | 9588 | 2.5164 |
| 0.1228 | 800.0 | 9600 | 3.0721 |
| 0.0774 | 801.0 | 9612 | 3.3021 |
| 0.1239 | 802.0 | 9624 | 2.8039 |
| 0.1633 | 803.0 | 9636 | 3.9218 |
| 0.1562 | 804.0 | 9648 | 2.2741 |
| 0.1398 | 805.0 | 9660 | 2.3857 |
| 0.0827 | 806.0 | 9672 | 3.8789 |
| 0.1041 | 807.0 | 9684 | 3.1660 |
| 0.1345 | 808.0 | 9696 | 2.6615 |
| 0.0964 | 809.0 | 9708 | 3.8610 |
| 0.0705 | 810.0 | 9720 | 2.6085 |
| 0.1286 | 811.0 | 9732 | 2.8976 |
| 0.1319 | 812.0 | 9744 | 3.0883 |
| 0.2169 | 813.0 | 9756 | 3.1248 |
| 0.1585 | 814.0 | 9768 | 3.5880 |
| 0.1412 | 815.0 | 9780 | 4.2307 |
| 0.1665 | 816.0 | 9792 | 2.5049 |
| 0.1138 | 817.0 | 9804 | 3.0581 |
| 0.1329 | 818.0 | 9816 | 2.6806 |
| 0.1029 | 819.0 | 9828 | 2.6299 |
| 0.0967 | 820.0 | 9840 | 3.4191 |
| 0.1269 | 821.0 | 9852 | 3.8664 |
| 0.1122 | 822.0 | 9864 | 2.9701 |
| 0.108 | 823.0 | 9876 | 3.2608 |
| 0.1038 | 824.0 | 9888 | 2.9620 |
| 0.1599 | 825.0 | 9900 | 2.8607 |
| 0.2117 | 826.0 | 9912 | 3.1970 |
| 0.1121 | 827.0 | 9924 | 3.7504 |
| 0.131 | 828.0 | 9936 | 3.8170 |
| 0.1627 | 829.0 | 9948 | 3.9556 |
| 0.1504 | 830.0 | 9960 | 3.0378 |
| 0.1334 | 831.0 | 9972 | 2.9688 |
| 0.148 | 832.0 | 9984 | 3.6264 |
| 0.0931 | 833.0 | 9996 | 3.1000 |
| 0.1124 | 834.0 | 10008 | 2.2768 |
| 0.0716 | 835.0 | 10020 | 2.5006 |
| 0.1948 | 836.0 | 10032 | 3.6966 |
| 0.1199 | 837.0 | 10044 | 2.8248 |
| 0.1664 | 838.0 | 10056 | 3.4134 |
| 0.1269 | 839.0 | 10068 | 2.6959 |
| 0.1033 | 840.0 | 10080 | 3.1595 |
| 0.1494 | 841.0 | 10092 | 3.2611 |
| 0.1642 | 842.0 | 10104 | 2.7121 |
| 0.145 | 843.0 | 10116 | 2.8543 |
| 0.0995 | 844.0 | 10128 | 3.2522 |
| 0.098 | 845.0 | 10140 | 2.1804 |
| 0.1257 | 846.0 | 10152 | 2.6450 |
| 0.0715 | 847.0 | 10164 | 2.6534 |
| 0.1559 | 848.0 | 10176 | 2.1307 |
| 0.1551 | 849.0 | 10188 | 2.5103 |
| 0.1052 | 850.0 | 10200 | 3.7062 |
| 0.0932 | 851.0 | 10212 | 3.3476 |
| 0.0832 | 852.0 | 10224 | 2.4707 |
| 0.1666 | 853.0 | 10236 | 3.2024 |
| 0.1273 | 854.0 | 10248 | 2.5906 |
| 0.163 | 855.0 | 10260 | 3.0574 |
| 0.1309 | 856.0 | 10272 | 2.5865 |
| 0.2476 | 857.0 | 10284 | 3.3188 |
| 0.1191 | 858.0 | 10296 | 2.5695 |
| 0.1548 | 859.0 | 10308 | 3.6313 |
| 0.1599 | 860.0 | 10320 | 2.8832 |
| 0.128 | 861.0 | 10332 | 2.4891 |
| 0.1391 | 862.0 | 10344 | 3.1289 |
| 0.138 | 863.0 | 10356 | 2.6089 |
| 0.0706 | 864.0 | 10368 | 3.0440 |
| 0.1128 | 865.0 | 10380 | 3.6210 |
| 0.2152 | 866.0 | 10392 | 3.2759 |
| 0.2337 | 867.0 | 10404 | 3.1451 |
| 0.1473 | 868.0 | 10416 | 3.5721 |
| 0.1346 | 869.0 | 10428 | 3.0452 |
| 0.1074 | 870.0 | 10440 | 2.7138 |
| 0.095 | 871.0 | 10452 | 2.6684 |
| 0.0699 | 872.0 | 10464 | 3.2899 |
| 0.1326 | 873.0 | 10476 | 3.5183 |
| 0.1523 | 874.0 | 10488 | 2.1549 |
| 0.1067 | 875.0 | 10500 | 2.3682 |
| 0.125 | 876.0 | 10512 | 2.7431 |
| 0.1797 | 877.0 | 10524 | 2.5871 |
| 0.1442 | 878.0 | 10536 | 3.8328 |
| 0.136 | 879.0 | 10548 | 2.3259 |
| 0.1459 | 880.0 | 10560 | 2.7320 |
| 0.0617 | 881.0 | 10572 | 3.1303 |
| 0.1419 | 882.0 | 10584 | 3.2222 |
| 0.0673 | 883.0 | 10596 | 2.7638 |
| 0.0978 | 884.0 | 10608 | 3.5383 |
| 0.0737 | 885.0 | 10620 | 3.8811 |
| 0.0948 | 886.0 | 10632 | 3.8811 |
| 0.1158 | 887.0 | 10644 | 3.2247 |
| 0.1497 | 888.0 | 10656 | 2.5282 |
| 0.1488 | 889.0 | 10668 | 3.2183 |
| 0.1361 | 890.0 | 10680 | 3.0011 |
| 0.1536 | 891.0 | 10692 | 2.8193 |
| 0.1509 | 892.0 | 10704 | 3.2418 |
| 0.0663 | 893.0 | 10716 | 2.6955 |
| 0.0954 | 894.0 | 10728 | 3.6407 |
| 0.1257 | 895.0 | 10740 | 3.0466 |
| 0.1293 | 896.0 | 10752 | 3.4879 |
| 0.1682 | 897.0 | 10764 | 3.0975 |
| 0.1427 | 898.0 | 10776 | 2.7423 |
| 0.1332 | 899.0 | 10788 | 3.3520 |
| 0.1368 | 900.0 | 10800 | 3.1909 |
| 0.1633 | 901.0 | 10812 | 3.5312 |
| 0.193 | 902.0 | 10824 | 2.9027 |
| 0.1169 | 903.0 | 10836 | 3.2119 |
| 0.0856 | 904.0 | 10848 | 2.6224 |
| 0.1507 | 905.0 | 10860 | 3.4485 |
| 0.1663 | 906.0 | 10872 | 3.7079 |
| 0.1162 | 907.0 | 10884 | 2.4238 |
| 0.1162 | 908.0 | 10896 | 2.7136 |
| 0.1181 | 909.0 | 10908 | 3.2237 |
| 0.1468 | 910.0 | 10920 | 2.9780 |
| 0.0959 | 911.0 | 10932 | 3.1877 |
| 0.1162 | 912.0 | 10944 | 2.1530 |
| 0.1245 | 913.0 | 10956 | 3.4275 |
| 0.1524 | 914.0 | 10968 | 2.9887 |
| 0.1487 | 915.0 | 10980 | 3.5492 |
| 0.1189 | 916.0 | 10992 | 3.7000 |
| 0.1104 | 917.0 | 11004 | 3.1991 |
| 0.1339 | 918.0 | 11016 | 3.3229 |
| 0.1239 | 919.0 | 11028 | 3.5813 |
| 0.1234 | 920.0 | 11040 | 2.6298 |
| 0.1115 | 921.0 | 11052 | 3.1678 |
| 0.097 | 922.0 | 11064 | 3.5488 |
| 0.1599 | 923.0 | 11076 | 2.1364 |
| 0.0864 | 924.0 | 11088 | 3.0174 |
| 0.2064 | 925.0 | 11100 | 3.3537 |
| 0.1389 | 926.0 | 11112 | 3.1944 |
| 0.1285 | 927.0 | 11124 | 2.5938 |
| 0.099 | 928.0 | 11136 | 2.9489 |
| 0.1544 | 929.0 | 11148 | 3.1323 |
| 0.0943 | 930.0 | 11160 | 3.0074 |
| 0.1343 | 931.0 | 11172 | 3.0724 |
| 0.0937 | 932.0 | 11184 | 2.5755 |
| 0.0631 | 933.0 | 11196 | 2.4738 |
| 0.1373 | 934.0 | 11208 | 2.8831 |
| 0.1043 | 935.0 | 11220 | 1.9059 |
| 0.0825 | 936.0 | 11232 | 2.8366 |
| 0.1619 | 937.0 | 11244 | 2.5491 |
| 0.0906 | 938.0 | 11256 | 2.5668 |
| 0.0479 | 939.0 | 11268 | 3.0457 |
| 0.1427 | 940.0 | 11280 | 4.0130 |
| 0.1058 | 941.0 | 11292 | 3.5801 |
| 0.1359 | 942.0 | 11304 | 2.2584 |
| 0.1117 | 943.0 | 11316 | 2.6767 |
| 0.1341 | 944.0 | 11328 | 3.2212 |
| 0.1866 | 945.0 | 11340 | 2.9726 |
| 0.1355 | 946.0 | 11352 | 3.1199 |
| 0.143 | 947.0 | 11364 | 2.7948 |
| 0.237 | 948.0 | 11376 | 3.2464 |
| 0.1206 | 949.0 | 11388 | 3.4582 |
| 0.2615 | 950.0 | 11400 | 2.1646 |
| 0.1631 | 951.0 | 11412 | 2.5108 |
| 0.158 | 952.0 | 11424 | 3.4831 |
| 0.1103 | 953.0 | 11436 | 2.3143 |
| 0.1942 | 954.0 | 11448 | 2.8638 |
| 0.1049 | 955.0 | 11460 | 3.3910 |
| 0.1635 | 956.0 | 11472 | 3.4069 |
| 0.0989 | 957.0 | 11484 | 2.7670 |
| 0.071 | 958.0 | 11496 | 3.6908 |
| 0.1326 | 959.0 | 11508 | 3.0617 |
| 0.1352 | 960.0 | 11520 | 2.4996 |
| 0.1155 | 961.0 | 11532 | 2.3456 |
| 0.1407 | 962.0 | 11544 | 3.1657 |
| 0.1622 | 963.0 | 11556 | 3.2390 |
| 0.0628 | 964.0 | 11568 | 2.4668 |
| 0.1201 | 965.0 | 11580 | 2.8448 |
| 0.1387 | 966.0 | 11592 | 2.9089 |
| 0.1103 | 967.0 | 11604 | 2.8493 |
| 0.0735 | 968.0 | 11616 | 2.5433 |
| 0.093 | 969.0 | 11628 | 3.0329 |
| 0.3551 | 970.0 | 11640 | 3.3447 |
| 0.1849 | 971.0 | 11652 | 4.2088 |
| 0.1257 | 972.0 | 11664 | 3.1439 |
| 0.0764 | 973.0 | 11676 | 3.4356 |
| 0.1678 | 974.0 | 11688 | 3.1160 |
| 0.1093 | 975.0 | 11700 | 2.7974 |
| 0.0811 | 976.0 | 11712 | 2.6031 |
| 0.0878 | 977.0 | 11724 | 2.6731 |
| 0.1478 | 978.0 | 11736 | 2.5262 |
| 0.0933 | 979.0 | 11748 | 2.9120 |
| 0.0846 | 980.0 | 11760 | 3.2794 |
| 0.1063 | 981.0 | 11772 | 2.9906 |
| 0.0907 | 982.0 | 11784 | 2.6891 |
| 0.1747 | 983.0 | 11796 | 3.6264 |
| 0.1611 | 984.0 | 11808 | 3.2517 |
| 0.1171 | 985.0 | 11820 | 2.6785 |
| 0.1323 | 986.0 | 11832 | 3.4850 |
| 0.0758 | 987.0 | 11844 | 3.6252 |
| 0.0713 | 988.0 | 11856 | 3.2538 |
| 0.0594 | 989.0 | 11868 | 2.5900 |
| 0.1958 | 990.0 | 11880 | 2.4104 |
| 0.1328 | 991.0 | 11892 | 3.8045 |
| 0.1006 | 992.0 | 11904 | 3.5627 |
| 0.0969 | 993.0 | 11916 | 2.5848 |
| 0.1363 | 994.0 | 11928 | 2.8333 |
| 0.1455 | 995.0 | 11940 | 2.3381 |
| 0.0774 | 996.0 | 11952 | 2.6104 |
| 0.1001 | 997.0 | 11964 | 3.5031 |
| 0.0956 | 998.0 | 11976 | 2.7140 |
| 0.1094 | 999.0 | 11988 | 3.1090 |
| 0.1129 | 1000.0 | 12000 | 2.6911 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
skrl/IsaacGymEnvs-Anymal-PPO | skrl | 2023-07-08T20:48:53Z | 0 | 0 | skrl | [
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-24T20:41:14Z | ---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 61.68 +/- 2.18
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-Anymal
type: IsaacGymEnvs-Anymal
---
<!-- ---
torch: 61.68 +/- 2.18
jax: 61.31 +/- 1.39
numpy: 59.62 +/- 1.85
--- -->
# IsaacGymEnvs-Anymal-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** Anymal
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Anymal-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Anymal-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Undefined parameters keep their values by default.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
from skrl.agents.torch.ppo import PPO_DEFAULT_CONFIG
from skrl.resources.preprocessors.torch import RunningStandardScaler
from skrl.resources.schedulers.torch import KLAdaptiveRL

# `env` and `device` are assumed to be defined by the surrounding training script
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 24 # memory_size
cfg["learning_epochs"] = 5
cfg["mini_batches"] = 3 # 24 * 4096 / 32768
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 3e-4
cfg["learning_rate_scheduler"] = KLAdaptiveRL
cfg["learning_rate_scheduler_kwargs"] = {"kl_threshold": 0.008}
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 1.0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 1.0
cfg["kl_threshold"] = 0
cfg["rewards_shaper"] = None
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
Huggingfly/ppo-PyramidsTraining | Huggingfly | 2023-07-08T20:45:21Z | 1 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | 2023-07-08T20:45:16Z | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: Huggingfly/ppo-PyramidsTraining
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
Word2vec/wikipedia2vec_enwiki_20180420_win10_500d | Word2vec | 2023-07-08T20:43:53Z | 0 | 2 | null | [
"word2vec",
"en",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T19:37:21Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- en
---
## Information
Pretrained Word2vec in English. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_enwiki_20180420_win10_500d", filename="enwiki_20180420_win10_500d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
earentilt/LunarLander-v2 | earentilt | 2023-07-08T20:36:04Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T19:51:05Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 286.97 +/- 14.09
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
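A minimal sketch of loading and evaluating the agent (the checkpoint filename below is an assumption — check the repository's file list):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Download the checkpoint from the Hub; the filename is assumed, not confirmed by this card.
checkpoint = load_from_hub(repo_id="earentilt/LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate over a few episodes (LunarLander requires gymnasium[box2d]).
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```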
|
mlabonne/gpt2-GPTQ-4bit | mlabonne | 2023-07-08T20:09:26Z | 18 | 0 | transformers | [
"transformers",
"gpt2",
"text-generation",
"AutoGPTQ",
"4bit",
"GPTQ",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-06-11T17:30:12Z | ---
license: apache-2.0
language:
- en
pipeline_tag: text-generation
tags:
- AutoGPTQ
- 4bit
- GPTQ
---
Model created using [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ) on a [GPT-2](https://huggingface.co/gpt2) model with 4-bit quantization.
You can load this model with the AutoGPTQ library, installed with the following command:
```
pip install auto-gptq
```
You can then download the model from the hub using the following code:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig
model_name = "mlabonne/gpt2-GPTQ-4bit"
tokenizer = AutoTokenizer.from_pretrained(model_name)
quantize_config = BaseQuantizeConfig.from_pretrained(model_name)
model = AutoGPTQForCausalLM.from_quantized(model_name,
model_basename="gptq_model-4bit-128g",
device="cuda:0",
use_triton=True,
use_safetensors=True,
quantize_config=quantize_config)
```
This model works with the traditional [Text Generation pipeline](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.TextGenerationPipeline).
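For instance, here is a minimal generation sketch using the `model` and `tokenizer` loaded above (calling `generate()` directly rather than the pipeline; the sampling parameters are illustrative, not taken from the original setup):
```python
import torch

# Tokenize the prompt and move it to the GPU the quantized model was loaded on.
inputs = tokenizer("I have a dream", return_tensors="pt").to("cuda:0")

# Generate a short continuation; auto-gptq exposes the usual generate() interface.
with torch.no_grad():
    output_ids = model.generate(**inputs, do_sample=True, max_new_tokens=50)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```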
Example of generation with the input text "I have a dream":
```
I have a dream. I want someone with my face, and what I have. I want to go home. I want to be alive. I want to see my children. I dream if I have the spirit, my body, my voice,
``` |
tyavika/LR1E5-BS8-Distil-CNN512LSTM256NoBi | tyavika | 2023-07-08T20:04:23Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-08T16:29:04Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: LR1E5-BS8-Distil-CNN512LSTM256NoBi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LR1E5-BS8-Distil-CNN512LSTM256NoBi
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3047
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.7781 | 1.0 | 6580 | 1.6331 |
| 1.235 | 2.0 | 13160 | 1.2036 |
| 0.951 | 3.0 | 19740 | 1.1857 |
| 0.7847 | 4.0 | 26320 | 1.2156 |
| 0.6643 | 5.0 | 32900 | 1.3047 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
fobt/speecht5_finetuned_voxpopuli_nl | fobt | 2023-07-08T19:59:00Z | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"generated_from_trainer",
"dataset:voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-07-08T17:41:08Z | ---
license: mit
tags:
- generated_from_trainer
datasets:
- voxpopuli
model-index:
- name: speecht5_finetuned_voxpopuli_nl
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# speecht5_finetuned_voxpopuli_nl
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4585
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5237 | 4.3 | 1000 | 0.4782 |
| 0.4946 | 8.61 | 2000 | 0.4639 |
| 0.493 | 12.91 | 3000 | 0.4608 |
| 0.4903 | 17.21 | 4000 | 0.4585 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
camus-ng/dreambooth_lora_cory_v15_ten | camus-ng | 2023-07-08T19:43:42Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-08T16:25:04Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of <ntvc> man
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - camus-ng/dreambooth_lora_cory_v15_ten
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on the instance prompt "a photo of <ntvc> man" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: True.
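A minimal inference sketch with 🧨 Diffusers (assuming a diffusers release recent enough to provide `load_lora_weights`; the prompt reuses the instance prompt above):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Attach the LoRA weights (UNet and text encoder) from this repository.
pipe.load_lora_weights("camus-ng/dreambooth_lora_cory_v15_ten")

image = pipe("a photo of <ntvc> man", num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("example.png")
```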
|
snousias/bert-base-greek-uncased-v1-finetuned-imdb | snousias | 2023-07-08T19:38:31Z | 114 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-08T18:56:06Z | ---
tags:
- generated_from_trainer
model-index:
- name: bert-base-greek-uncased-v1-finetuned-imdb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-greek-uncased-v1-finetuned-imdb
This model is a fine-tuned version of [nlpaueb/bert-base-greek-uncased-v1](https://huggingface.co/nlpaueb/bert-base-greek-uncased-v1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3617
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.0877 | 1.0 | 45 | 2.9871 |
| 1.2665 | 2.0 | 90 | 2.9228 |
| 1.9122 | 3.0 | 135 | 3.1228 |
| 2.2564 | 4.0 | 180 | 1.6066 |
| 1.9132 | 5.0 | 225 | 2.6351 |
| 1.9952 | 6.0 | 270 | 2.2649 |
| 1.7895 | 7.0 | 315 | 2.3376 |
| 2.0415 | 8.0 | 360 | 1.9894 |
| 1.8113 | 9.0 | 405 | 2.2998 |
| 1.6944 | 10.0 | 450 | 2.1420 |
| 1.7862 | 11.0 | 495 | 2.7167 |
| 1.5657 | 12.0 | 540 | 2.5103 |
| 1.4576 | 13.0 | 585 | 2.0238 |
| 1.3369 | 14.0 | 630 | 2.5880 |
| 1.3598 | 15.0 | 675 | 1.8161 |
| 1.3407 | 16.0 | 720 | 2.4031 |
| 1.3805 | 17.0 | 765 | 2.2539 |
| 1.176 | 18.0 | 810 | 3.2901 |
| 1.1152 | 19.0 | 855 | 2.3024 |
| 1.0629 | 20.0 | 900 | 2.0823 |
| 1.1972 | 21.0 | 945 | 2.9957 |
| 1.1317 | 22.0 | 990 | 2.5360 |
| 1.0396 | 23.0 | 1035 | 1.6268 |
| 0.8686 | 24.0 | 1080 | 3.2657 |
| 1.0526 | 25.0 | 1125 | 3.0398 |
| 0.9023 | 26.0 | 1170 | 2.8197 |
| 0.9539 | 27.0 | 1215 | 3.1922 |
| 0.8699 | 28.0 | 1260 | 1.6943 |
| 0.8669 | 29.0 | 1305 | 2.7801 |
| 0.7893 | 30.0 | 1350 | 2.1385 |
| 0.7462 | 31.0 | 1395 | 2.2881 |
| 0.7627 | 32.0 | 1440 | 3.0789 |
| 0.7536 | 33.0 | 1485 | 2.9320 |
| 0.8317 | 34.0 | 1530 | 3.4081 |
| 0.6749 | 35.0 | 1575 | 2.7531 |
| 0.789 | 36.0 | 1620 | 2.9154 |
| 0.6609 | 37.0 | 1665 | 2.1821 |
| 0.6795 | 38.0 | 1710 | 2.5330 |
| 0.6408 | 39.0 | 1755 | 3.4374 |
| 0.6827 | 40.0 | 1800 | 2.3127 |
| 0.6188 | 41.0 | 1845 | 2.0818 |
| 0.6085 | 42.0 | 1890 | 2.2737 |
| 0.6978 | 43.0 | 1935 | 2.9629 |
| 0.6164 | 44.0 | 1980 | 2.5250 |
| 0.6273 | 45.0 | 2025 | 2.3866 |
| 0.7064 | 46.0 | 2070 | 2.0937 |
| 0.6561 | 47.0 | 2115 | 2.4984 |
| 0.7341 | 48.0 | 2160 | 3.1911 |
| 0.6271 | 49.0 | 2205 | 2.2692 |
| 0.6757 | 50.0 | 2250 | 2.2642 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Word2vec/wikipedia2vec_enwiki_20180420_win10_100d | Word2vec | 2023-07-08T19:10:30Z | 0 | 0 | null | [
"word2vec",
"en",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-16T17:02:16Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- en
---
## Information
Pretrained Word2vec in English. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_enwiki_20180420_win10_100d", filename="enwiki_20180420_win10_100d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
visual-openllm/visual-openllm-chatglm-6b-rola | visual-openllm | 2023-07-08T19:07:58Z | 0 | 8 | null | [
"dataset:tatsu-lab/alpaca",
"dataset:shibing624/alpaca-zh",
"license:apache-2.0",
"region:us"
] | null | 2023-03-26T07:49:58Z | ---
license: apache-2.0
datasets:
- tatsu-lab/alpaca
- shibing624/alpaca-zh
---
- Load LLM
```python
from modeling_chatglm import ChatGLMForConditionalGeneration
import torch
torch.set_default_tensor_type(torch.cuda.HalfTensor)
model = ChatGLMForConditionalGeneration.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, device_map='auto')
```
- Load LoRA
```python
from peft import PeftModel
model = PeftModel.from_pretrained(model, "visual-openllm/visual-openllm-chatglm-6b-rola")
torch.set_default_tensor_type(torch.cuda.FloatTensor)
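# A hedged inference sketch (not part of the original card): chat() follows the
# THUDM/chatglm-6b remote modeling code, so adjust if your local copy differs.
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model.eval()
response, history = model.chat(tokenizer, "你好", history=[])
print(response)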
``` |
wizofavalon/bert-large-uncased-finetuned-wikitext2 | wizofavalon | 2023-07-08T19:07:01Z | 70 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"bert",
"fill-mask",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-05T19:20:17Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: wizofavalon/bert-large-uncased-finetuned-wikitext2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# wizofavalon/bert-large-uncased-finetuned-wikitext2
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.7861
- Validation Loss: 1.5868
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 1.7861 | 1.5868 | 0 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
c72599/a2c-PandaReachDense-v2 | c72599 | 2023-07-08T18:55:51Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"PandaReachDense-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T18:53:05Z | ---
library_name: stable-baselines3
tags:
- PandaReachDense-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: A2C
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: PandaReachDense-v2
type: PandaReachDense-v2
metrics:
- type: mean_reward
value: -1.94 +/- 0.71
name: mean_reward
verified: false
---
# **A2C** Agent playing **PandaReachDense-v2**
This is a trained model of an **A2C** agent playing **PandaReachDense-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
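A minimal sketch, assuming panda-gym is installed and the checkpoint follows the usual naming convention (both are assumptions, not confirmed by this card):
```python
import gym
import panda_gym  # noqa: F401 -- registers the PandaReachDense-v2 task
from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# The filename is assumed; check the repository's file list.
checkpoint = load_from_hub(repo_id="c72599/a2c-PandaReachDense-v2",
                           filename="a2c-PandaReachDense-v2.zip")
model = A2C.load(checkpoint)

env = gym.make("PandaReachDense-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```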
|
tyavika/Distil-CNN512LSTM256NoBi | tyavika | 2023-07-08T18:48:27Z | 84 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-02T11:03:22Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Distil-CNN512LSTM256NoBi
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Distil-CNN512LSTM256NoBi
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3388
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.6009 | 1.0 | 3290 | 1.2927 |
| 1.0288 | 2.0 | 6580 | 1.1467 |
| 0.7497 | 3.0 | 9870 | 1.1902 |
| 0.5288 | 4.0 | 13160 | 1.3388 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
cagarraz/Reinforce-1234 | cagarraz | 2023-07-08T18:41:26Z | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | 2023-06-28T16:38:24Z | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-1234
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 34.70 +/- 15.01
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
spitfire4794/photo | spitfire4794 | 2023-07-08T18:40:04Z | 287 | 8 | diffusers | [
"diffusers",
"safetensors",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"photorealistic",
"photoreal",
"en",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-06-04T18:28:38Z | ---
language:
- en
license: other
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- photorealistic
- photoreal
- diffusers
inference: true
pipeline_tag: text-to-image
library_name: diffusers
---
# the original but with inference api enabled because why not
# Dreamlike Photoreal 2.0 is a photorealistic model based on Stable Diffusion 1.5, made by [dreamlike.art](https://dreamlike.art/).
# If you want to use dreamlike models on your website/app/etc., check the license at the bottom first!
Warning: This model is horny! Add "nude, naked" to the negative prompt if you want to avoid NSFW content.
You can add **photo** to your prompt to make your gens look more photorealistic.
Non-square aspect ratios work better for some prompts. If you want a portrait photo, try using a vertical aspect ratio. If you want a landscape photo, try using a horizontal aspect ratio.
This model was trained on 768x768px images, so use 768x768px, 640x896px, 896x640px, etc. It also works pretty well at higher resolutions such as 768x1024px or 1024x768px.
### Examples
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/preview1.jpg" style="max-width: 800px;" width="100%"/>
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/preview2.jpg" style="max-width: 800px;" width="100%"/>
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/preview3.jpg" style="max-width: 800px;" width="100%"/>
### dreamlike.art
You can use this model for free on [dreamlike.art](https://dreamlike.art/)!
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/dreamlike.jpg" style="max-width: 1000px;" width="100%"/>
### CKPT
[Download dreamlike-photoreal-2.0.ckpt (2.13GB)](https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/dreamlike-photoreal-2.0.ckpt)
### Safetensors
[Download dreamlike-photoreal-2.0.safetensors (2.13GB)](https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/dreamlike-photoreal-2.0.safetensors)
### 🧨 Diffusers
This model can be used just like any other Stable Diffusion model. For more information,
please have a look at the [Stable Diffusion Pipeline](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion).
```python
from diffusers import StableDiffusionPipeline
import torch
model_id = "dreamlike-art/dreamlike-photoreal-2.0"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")
prompt = "photo, a church in the middle of a field of crops, bright cinematic lighting, gopro, fisheye lens"
image = pipe(prompt).images[0]
image.save("./result.jpg")
```
<img src="https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/resolve/main/church.jpg" style="max-width: 640px;" width="100%"/>
# License
This model is licensed under a **modified** CreativeML OpenRAIL-M license.
- **You are not allowed to host, finetune, or do inference with the model or its derivatives on websites/apps/etc. If you want to, please email us at [email protected]**
- **You are free to host the model card and files (Without any actual inference or finetuning) on both commercial and non-commercial websites/apps/etc. Please state the full model name (Dreamlike Photoreal 2.0) and include the license as well as a link to the model card (https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0)**
- **You are free to use the outputs (images) of the model for commercial purposes in teams of 10 or less**
- You can't use the model to deliberately produce or share illegal or harmful outputs or content
- The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
- You may re-distribute the weights. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the **modified** CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully) Please read the full license here: https://huggingface.co/dreamlike-art/dreamlike-photoreal-2.0/blob/main/LICENSE.md |
NERO500/q-FrozenLake-v1-4x4-noSlippery | NERO500 | 2023-07-08T18:39:12Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T18:39:09Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="NERO500/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
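The `load_from_hub` helper above comes from the Deep RL Course notebooks; a minimal sketch of it, assuming the Q-table was pushed as a pickled dictionary (the course convention):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    """Download and unpickle the Q-learning dictionary (qtable, env_id, ...) from the Hub."""
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```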
|
Word2vec/wikipedia2vec_arwiki_20180420_300d | Word2vec | 2023-07-08T18:34:15Z | 0 | 0 | null | [
"word2vec",
"ar",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T09:33:09Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- ar
---
## Information
Pretrained Word2vec in Arabic. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_arwiki_20180420_300d", filename="arwiki_20180420_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
Word2vec/wikipedia2vec_zhwiki_20180420_300d | Word2vec | 2023-07-08T18:32:34Z | 0 | 1 | null | [
"word2vec",
"zh",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T09:42:06Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- zh
---
## Information
Pretrained Word2vec in Chinese. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_zhwiki_20180420_300d", filename="zhwiki_20180420_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
Word2vec/wikipedia2vec_jawiki_20180420_300d | Word2vec | 2023-07-08T18:31:54Z | 0 | 1 | null | [
"word2vec",
"ja",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T08:53:12Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- ja
---
## Information
Pretrained Word2vec in Japanese. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_jawiki_20180420_300d", filename="jawiki_20180420_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
Word2vec/wikipedia2vec_arwiki_20180420_100d | Word2vec | 2023-07-08T18:29:53Z | 0 | 0 | null | [
"word2vec",
"ar",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-16T16:51:26Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- ar
---
## Information
Pretrained Word2vec in Arabic. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_arwiki_20180420_100d", filename="arwiki_20180420_100d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
Word2vec/wikipedia2vec_jawiki_20180420_100d | Word2vec | 2023-07-08T18:26:42Z | 0 | 0 | null | [
"word2vec",
"ja",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-16T17:01:32Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- ja
---
## Information
Pretrained Word2vec in Japanese. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_jawiki_20180420_100d", filename="jawiki_20180420_100d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
Word2vec/wikipedia2vec_itwiki_20180420_300d | Word2vec | 2023-07-08T18:25:14Z | 0 | 0 | null | [
"word2vec",
"it",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T08:53:36Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- it
---
## Information
Pretrained Word2vec in Italian. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_itwiki_20180420_300d", filename="itwiki_20180420_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
cagarraz/dqn-SpaceInvadersNoFrameskip-v4 | cagarraz | 2023-07-08T18:23:01Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T18:22:29Z | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 268.50 +/- 78.17
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga cagarraz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga cagarraz -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga cagarraz
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 100000),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
Word2vec/wikipedia2vec_itwiki_20180420_100d | Word2vec | 2023-07-08T18:22:37Z | 0 | 0 | null | [
"word2vec",
"it",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-16T17:01:45Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- it
---
## Information
Pretrained Word2vec in Italian. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_itwiki_20180420_100d", filename="itwiki_20180420_100d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
Mozart-coder/BERT_Dec-6_tokenized | Mozart-coder | 2023-07-08T18:20:58Z | 129 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2022-10-25T05:03:42Z | ---
tags:
- generated_from_trainer
model-index:
- name: BERT_Dec-6_tokenized
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_Dec-6_tokenized
This model is a fine-tuned version of [armheb/DNA_bert_6](https://huggingface.co/armheb/DNA_bert_6) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0372
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.0625 | 1.0 | 273 | 0.0376 |
| 0.039 | 2.0 | 546 | 0.0375 |
| 0.0385 | 3.0 | 819 | 0.0358 |
| 0.0375 | 4.0 | 1092 | 0.0380 |
| 0.0374 | 5.0 | 1365 | 0.0387 |
| 0.0358 | 6.0 | 1638 | 0.0378 |
| 0.0363 | 7.0 | 1911 | 0.0381 |
| 0.0373 | 8.0 | 2184 | 0.0377 |
| 0.0362 | 9.0 | 2457 | 0.0373 |
| 0.037 | 10.0 | 2730 | 0.0380 |
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
jason1i/whisper-small-zh-HK | jason1i | 2023-07-08T18:15:56Z | 87 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"hk",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-08T17:19:53Z | ---
language:
- hk
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small hk
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: zh-HK
split: test
args: zh-HK
metrics:
- name: Wer
type: wer
value: 64.88393977415308
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small hk
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2883
- Wer Ortho: 66.1207
- Wer: 64.8839
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.3393 | 0.57 | 500 | 0.2883 | 66.1207 | 64.8839 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Word2vec/wikipedia2vec_eswiki_20180420_300d | Word2vec | 2023-07-08T17:58:18Z | 0 | 1 | null | [
"word2vec",
"es",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-17T08:53:59Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- es
---
## Information
Pretrained Word2vec in Spanish. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_eswiki_20180420_300d", filename="eswiki_20180420_300d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
Word2vec/wikipedia2vec_ptwiki_20180420_100d | Word2vec | 2023-07-08T17:57:30Z | 0 | 0 | null | [
"word2vec",
"pt",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-16T17:00:56Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- pt
---
## Information
Pretrained Word2vec in Portuguese. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_ptwiki_20180420_100d", filename="ptwiki_20180420_100d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
```
|
Word2vec/wikipedia2vec_ruwiki_20180420_100d | Word2vec | 2023-07-08T17:51:41Z | 0 | 0 | null | [
"word2vec",
"ru",
"dataset:wikipedia",
"license:apache-2.0",
"region:us"
] | null | 2023-05-16T17:00:45Z | ---
license: apache-2.0
tags:
- word2vec
datasets:
- wikipedia
language:
- ru
---
## Information
Pretrained Word2vec in Russian. For more information, see [https://wikipedia2vec.github.io/wikipedia2vec/pretrained/](https://wikipedia2vec.github.io/wikipedia2vec/pretrained/).
## How to use?
```
from gensim.models import KeyedVectors
from huggingface_hub import hf_hub_download
model = KeyedVectors.load_word2vec_format(hf_hub_download(repo_id="Word2vec/wikipedia2vec_ruwiki_20180420_100d", filename="ruwiki_20180420_100d.txt"))
model.most_similar("your_word")
```
## Citation
```
@inproceedings{yamada2020wikipedia2vec,
title = "{W}ikipedia2{V}ec: An Efficient Toolkit for Learning and Visualizing the Embeddings of Words and Entities from {W}ikipedia",
author={Yamada, Ikuya and Asai, Akari and Sakuma, Jin and Shindo, Hiroyuki and Takeda, Hideaki and Takefuji, Yoshiyasu and Matsumoto, Yuji},
booktitle = {Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
year = {2020},
publisher = {Association for Computational Linguistics},
pages = {23--30}
}
``` |
IDEA-CCNL/Erlangshen-ZEN2-345M-Chinese | IDEA-CCNL | 2023-07-08T17:47:20Z | 53 | 0 | transformers | [
"transformers",
"pytorch",
"ZEN",
"chinese",
"zh",
"arxiv:2105.01279",
"arxiv:2209.02970",
"license:apache-2.0",
"region:us"
] | null | 2022-07-27T06:13:11Z | ---
language:
- zh
license: apache-2.0
tags:
- ZEN
- chinese
inference: false
---
# Erlangshen-ZEN2-345M-Chinese
- Main Page:[Fengshenbang](https://fengshenbang-lm.com/)
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
## 简介 Brief Introduction
善于处理NLU任务,使用了N-gram编码增强文本语义,3.45亿参数量的ZEN2
ZEN2 model, which uses N-gram to enhance text semantic and has 345M parameters, is adept at NLU tasks.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | ZEN2 | 345M | 中文-Chinese |
## 模型信息 Model Information
我们与[ZEN团队](https://github.com/sinovation/ZEN2)合作,使用我们的封神框架,开源发布了ZEN2模型。具体而言,通过引入无监督学习中提取的知识,ZEN通过N-gram方法学习不同的文本粒度信息。ZEN2使用大规模数据集和特殊的预训练策略对N-gram增强编码器进行预训练。下一步,我们将继续与ZEN团队一起探索PLM的优化,并提高下游任务的性能。
We open source and publicly release ZEN2 using our Fengshen Framework in collaboration with the [ZEN team](https://github.com/sinovation/ZEN2). More precisely, by bringing in knowledge extracted through unsupervised learning, ZEN learns information at different textual granularities via N-gram methods. ZEN2 pre-trains the N-gram-enhanced encoders with large-scale datasets and dedicated pre-training strategies. Next, we will continue working with the ZEN team to explore the optimization of PLMs and to improve performance on downstream tasks.
### 下游效果 Performance
**分类任务 Classification**
| Model(Acc) | afqmc | tnews | iflytek | ocnli | cmnli |
| :--------: | :-----: | :----: | :-----: | :----: | :----: |
| Erlangshen-ZEN2-345M-Chinese | 0.741 | 0.584 | 0.599 | 0.788 | 0.80 |
| Erlangshen-ZEN2-668M-Chinese | 0.75 | 0.60 | 0.589 | 0.81 | 0.82 |
**抽取任务 Extraction**
| Model(F1) | WEIBO(test) | Resume(test) | MSRA(test) | OntoNote4.0(test) | CMeEE(dev) | CLUENER(dev) |
| :--------: | :-----: | :----: | :-----: | :----: | :----: | :----: |
| Erlangshen-ZEN2-345M-Chinese | 65.26 | 96.03 | 95.15 | 78.93 | 62.81 | 79.27 |
| Erlangshen-ZEN2-668M-Chinese | 70.02 | 96.08 | 95.13 | 80.89 | 63.37 | 79.22 |
## 使用 Usage
因为[transformers](https://github.com/huggingface/transformers)库中是没有ZEN2相关的模型结构的,所以你可以在我们的[Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)中找到并且运行代码。
Since the ZEN2 model architecture is not available in the [transformers library](https://github.com/huggingface/transformers), you can find it and run the code in [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
```python
from fengshen.models.zen2.ngram_utils import ZenNgramDict
from fengshen.models.zen2.tokenization import BertTokenizer
from fengshen.models.zen2.modeling import ZenForSequenceClassification, ZenForTokenClassification
pretrain_path = 'IDEA-CCNL/Erlangshen-ZEN2-345M-Chinese'
tokenizer = BertTokenizer.from_pretrained(pretrain_path)
model_classification = ZenForSequenceClassification.from_pretrained(pretrain_path)
model_extraction = ZenForTokenClassification.from_pretrained(pretrain_path)
ngram_dict = ZenNgramDict.from_pretrained(pretrain_path, tokenizer=tokenizer)
```
你可以从下方的链接获得我们做分类和抽取的详细示例。
You can get classification and extraction examples below.
[分类 classification example on fengshen](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/zen2_finetune/fs_zen2_base_tnews.sh)
[抽取 extraction example on fengshen](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/zen2_finetune/ner_zen2_base_ontonotes4.sh)
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的对该模型的论文:
If you are using this resource for your work, please cite our paper for this model:
```text
@article{Sinovation2021ZEN2,
title="{ZEN 2.0: Continue Training and Adaption for N-gram Enhanced Text Encoders}",
author={Yan Song, Tong Zhang, Yonggang Wang, Kai-Fu Lee},
journal={arXiv preprint arXiv:2105.01279},
year={2021},
}
```
如果您在您的工作中使用了我们的模型,也可以引用我们的[总论文](https://arxiv.org/abs/2209.02970):
If you are using this resource for your work, please cite our [overview paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` |
Teunis89/q-FrozenLake-v1-4x4-noSlippery | Teunis89 | 2023-07-08T17:45:45Z | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T17:45:43Z | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
model = load_from_hub(repo_id="Teunis89/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")
# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
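Note that `load_from_hub` here is the helper from the Deep RL Course notebooks rather than a library function; a minimal sketch of it (assuming the model was pushed as a pickled dictionary):
```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id, filename):
    # Download the pickled Q-learning dictionary from the Hub and load it.
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```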
|
dashan1992/dsl1 | dashan1992 | 2023-07-08T17:42:12Z | 0 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-08T17:41:42Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
devan666dewa/test | devan666dewa | 2023-07-08T17:36:45Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-08T16:43:39Z | ---
license: creativeml-openrail-m
---
|
skrl/IsaacGymEnvs-BallBalance-PPO | skrl | 2023-07-08T17:29:56Z | 0 | 0 | skrl | [
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-24T20:42:20Z | ---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 298.89 +/- 27.4
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-BallBalance
type: IsaacGymEnvs-BallBalance
---
<!-- ---
torch: 298.89 +/- 27.4
jax: 256.32 +/- 12.84
numpy: 240.59 +/- 19.15
--- -->
# IsaacGymEnvs-BallBalance-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** BallBalance
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-BallBalance-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-BallBalance-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Undefined parameters keep their values by default.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
from skrl.agents.torch.ppo import PPO_DEFAULT_CONFIG
from skrl.resources.preprocessors.torch import RunningStandardScaler
from skrl.resources.schedulers.torch import KLAdaptiveRL

# `env` and `device` are assumed to be defined by the surrounding training script
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 16 # memory_size
cfg["learning_epochs"] = 8
cfg["mini_batches"] = 8 # 16 * 4096 / 8192
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 3e-4
cfg["learning_rate_scheduler"] = KLAdaptiveRL
cfg["learning_rate_scheduler_kwargs"] = {"kl_threshold": 0.008}
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 1.0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 2.0
cfg["kl_threshold"] = 0
cfg["rewards_shaper"] = lambda rewards, timestep, timesteps: rewards * 0.1
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
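The snippet above uses several names without showing their imports; assuming skrl's PyTorch backend, they would typically come from the modules below (`env` and `device` refer to the wrapped Isaac Gym environment and its device):
```python
# Imports assumed by the configuration snippet above (skrl, PyTorch backend).
from skrl.agents.torch.ppo import PPO, PPO_DEFAULT_CONFIG
from skrl.resources.preprocessors.torch import RunningStandardScaler
from skrl.resources.schedulers.torch import KLAdaptiveRL
```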
|
nevernotsean/realisticVisionV30_v30VAE_COOLKIDS_MERGE | nevernotsean | 2023-07-08T17:17:25Z | 4 | 0 | diffusers | [
"diffusers",
"safetensors",
"en",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-08T16:44:19Z | ---
license: creativeml-openrail-m
language:
- en
library_name: diffusers
--- |
mystt/llama-tr | mystt | 2023-07-08T17:10:28Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-08T17:10:24Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0.dev0
|
mark-oppenheim/rl-course-unit1-lunar-lander | mark-oppenheim | 2023-07-08T17:09:16Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T17:08:59Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 268.14 +/- 23.29
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
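As a hedged sketch of what that code could look like (not from the original card; the checkpoint filename is an assumption about how the model was saved):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# The filename is assumed; check the repository's file list for the actual archive name.
checkpoint = load_from_hub(
    repo_id="mark-oppenheim/rl-course-unit1-lunar-lander",
    filename="ppo-LunarLander-v2.zip",
)
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _states = model.predict(obs, deterministic=True)
```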
|
skrl/IsaacGymEnvs-Cartpole-PPO | skrl | 2023-07-08T17:03:59Z | 0 | 0 | skrl | [
"skrl",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-01-24T20:42:39Z | ---
library_name: skrl
tags:
- deep-reinforcement-learning
- reinforcement-learning
- skrl
model-index:
- name: PPO
results:
- metrics:
- type: mean_reward
value: 493.73 +/- 0.58
name: Total reward (mean)
task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: IsaacGymEnvs-Cartpole
type: IsaacGymEnvs-Cartpole
---
<!-- ---
torch: 493.73 +/- 0.58
jax: 492.06 +/- 3.58
numpy: 491.92 +/- 0.57
--- -->
# IsaacGymEnvs-Cartpole-PPO
Trained agent for [NVIDIA Isaac Gym Preview](https://github.com/NVIDIA-Omniverse/IsaacGymEnvs) environments.
- **Task:** Cartpole
- **Agent:** [PPO](https://skrl.readthedocs.io/en/latest/api/agents/ppo.html)
# Usage (with skrl)
Note: Visit the skrl [Examples](https://skrl.readthedocs.io/en/latest/intro/examples.html) section to access the scripts.
* PyTorch
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Cartpole-PPO", filename="agent.pt")
agent.load(path)
```
* JAX
```python
from skrl.utils.huggingface import download_model_from_huggingface
# assuming that there is an agent named `agent`
path = download_model_from_huggingface("skrl/IsaacGymEnvs-Cartpole-PPO", filename="agent.pickle")
agent.load(path)
```
# Hyperparameters
Note: Undefined parameters keep their values by default.
```python
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html#configuration-and-hyperparameters
cfg = PPO_DEFAULT_CONFIG.copy()
cfg["rollouts"] = 16 # memory_size
cfg["learning_epochs"] = 8
cfg["mini_batches"] = 1 # 16 * 512 / 8192
cfg["discount_factor"] = 0.99
cfg["lambda"] = 0.95
cfg["learning_rate"] = 3e-4
cfg["learning_rate_scheduler"] = KLAdaptiveRL
cfg["learning_rate_scheduler_kwargs"] = {"kl_threshold": 0.008}
cfg["random_timesteps"] = 0
cfg["learning_starts"] = 0
cfg["grad_norm_clip"] = 1.0
cfg["ratio_clip"] = 0.2
cfg["value_clip"] = 0.2
cfg["clip_predicted_values"] = True
cfg["entropy_loss_scale"] = 0.0
cfg["value_loss_scale"] = 2.0
cfg["kl_threshold"] = 0
cfg["rewards_shaper"] = lambda rewards, timestep, timesteps: rewards * 0.1
cfg["state_preprocessor"] = RunningStandardScaler
cfg["state_preprocessor_kwargs"] = {"size": env.observation_space, "device": device}
cfg["value_preprocessor"] = RunningStandardScaler
cfg["value_preprocessor_kwargs"] = {"size": 1, "device": device}
```
|
jason1i/whisper-small-dv | jason1i | 2023-07-08T16:31:01Z | 77 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"dv",
"dataset:mozilla-foundation/common_voice_13_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-08T15:56:40Z | ---
language:
- dv
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_13_0
metrics:
- wer
model-index:
- name: Whisper Small Dv
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: Common Voice 13
type: mozilla-foundation/common_voice_13_0
config: dv
split: test
args: dv
metrics:
- name: Wer
type: wer
value: 13.198525576381403
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Dv
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 13 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1712
- Wer Ortho: 62.2885
- Wer: 13.1985
## Model description
More information needed
## Intended uses & limitations
More information needed
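As a minimal usage sketch (inference is not documented in this card), the checkpoint should load with the standard 🤗 Transformers ASR pipeline:
```python
from transformers import pipeline

# Hypothetical example; pass a path to a local audio file.
asr = pipeline("automatic-speech-recognition", model="jason1i/whisper-small-dv")
print(asr("sample.wav")["text"])
```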
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer Ortho | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:-------:|
| 0.1227 | 1.63 | 500 | 0.1712 | 62.2885 | 13.1985 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
OumaElha/Speech12 | OumaElha | 2023-07-08T16:25:52Z | 81 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | 2023-07-03T23:53:49Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Speech12
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Speech12
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0053
- Wer: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 2.8827 | 3.96 | 1000 | 2.8766 | 1 |
| 2.8369 | 7.92 | 2000 | 2.8362 | 1 |
| 1.6725 | 11.88 | 3000 | 1.4849 | 1 |
| 1.2083 | 15.84 | 4000 | 1.0574 | 1 |
| 1.1507 | 19.8 | 5000 | 1.0053 | 1 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1
- Datasets 2.13.1
- Tokenizers 0.13.3
|
camus-ng/dreambooth_lora_cory_v15 | camus-ng | 2023-07-08T16:24:49Z | 1 | 0 | diffusers | [
"diffusers",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-08T12:58:54Z |
---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: a photo of <ntvc> man
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - camus-ng/dreambooth_lora_cory_v15
These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on a photo of <ntvc> man using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.




LoRA for the text encoder was enabled: False.
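A hedged usage sketch (not part of the original card), assuming the weights are stored in the standard diffusers LoRA format:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("camus-ng/dreambooth_lora_cory_v15")  # attach the DreamBooth LoRA

image = pipe("a photo of <ntvc> man", num_inference_steps=30).images[0]
image.save("example.png")
```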
|
lizhuang144/flan-t5-small-factual-sg | lizhuang144 | 2023-07-08T16:16:21Z | 106 | 0 | transformers | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text2text-generation | 2023-07-08T08:23:01Z | See details from https://github.com/zhuang-li/FACTUAL . |
LanzerPotaz/distilbert-base-uncased-finetuned-cola | LanzerPotaz | 2023-07-08T16:10:47Z | 61 | 0 | transformers | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-08T16:05:33Z | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: LanzerPotaz/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# LanzerPotaz/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2051
- Validation Loss: 0.5625
- Train Matthews Correlation: 0.5132
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5286 | 0.4656 | 0.4613 | 0 |
| 0.3364 | 0.4611 | 0.4982 | 1 |
| 0.2051 | 0.5625 | 0.5132 | 2 |
### Framework versions
- Transformers 4.30.2
- TensorFlow 2.12.0
- Datasets 2.13.1
- Tokenizers 0.13.3
|
HeinrichWirth/ppo-LunarLander-v2 | HeinrichWirth | 2023-07-08T16:09:18Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T16:08:59Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 266.99 +/- 18.71
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
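A hedged sketch of loading and evaluating the agent (not from the original card; the filename is an assumption, and an SB3 version that supports Gymnasium is assumed):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

checkpoint = load_from_hub("HeinrichWirth/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```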
|
MAT-NUS/roberta-base-rotten-tomatoes | MAT-NUS | 2023-07-08T16:01:45Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-08T16:01:03Z | This `roberta-base` model was fine-tuned for sequence classification using TextAttack
and the rotten_tomatoes dataset loaded using the `nlp` library. The model was fine-tuned
for 10 epochs with a batch size of 64, a learning
rate of 2e-05, and a maximum sequence length of 128.
Since this was a classification task, the model was trained with a cross-entropy loss function.
The best score the model achieved on this task was 0.9033771106941839, as measured by the
eval set accuracy, found after 2 epochs.
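As a hedged usage sketch (assuming the checkpoint is stored in the standard Transformers format), predictions could be obtained with a text-classification pipeline:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="MAT-NUS/roberta-base-rotten-tomatoes")
print(classifier("A gripping, beautifully shot film."))  # label names depend on how the head was exported
```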
For more information, check out [TextAttack on Github](https://github.com/QData/TextAttack). |
agercas/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan | agercas | 2023-07-08T15:42:55Z | 161 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"audio-spectrogram-transformer",
"audio-classification",
"generated_from_trainer",
"dataset:marsyas/gtzan",
"license:bsd-3-clause",
"endpoints_compatible",
"region:us"
] | audio-classification | 2023-07-08T14:53:43Z | ---
license: bsd-3-clause
tags:
- generated_from_trainer
datasets:
- marsyas/gtzan
metrics:
- accuracy
model-index:
- name: ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan
This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on the GTZAN dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3847
- Accuracy: 0.91
## Model description
More information needed
## Intended uses & limitations
More information needed
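A minimal inference sketch (not documented in the card), using the audio-classification pipeline on a local clip:
```python
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="agercas/ast-finetuned-audioset-10-10-0.4593-finetuned-gtzan",
)
print(classifier("song.wav"))  # hypothetical file; returns genre labels with scores
```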
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.1646 | 1.0 | 225 | 0.7426 | 0.74 |
| 0.6839 | 2.0 | 450 | 0.7655 | 0.78 |
| 0.4179 | 3.0 | 675 | 0.8210 | 0.81 |
| 0.1836 | 4.0 | 900 | 0.3845 | 0.86 |
| 0.0018 | 5.0 | 1125 | 0.4368 | 0.87 |
| 0.0032 | 6.0 | 1350 | 0.4066 | 0.9 |
| 0.0001 | 7.0 | 1575 | 0.4524 | 0.89 |
| 0.0 | 8.0 | 1800 | 0.3708 | 0.9 |
| 0.0 | 9.0 | 2025 | 0.3975 | 0.9 |
| 0.0001 | 10.0 | 2250 | 0.3847 | 0.91 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
RobertoFont/falcon-7b-chat-oasst1 | RobertoFont | 2023-07-08T15:40:27Z | 1 | 0 | peft | [
"peft",
"region:us"
] | null | 2023-07-08T15:40:22Z | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
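For reference (not part of the original card), these settings map to the `BitsAndBytesConfig` sketched below. The base model is assumed from the repository name to be `tiiuae/falcon-7b`; the card itself does not state it:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model_name = "tiiuae/falcon-7b"  # assumed from the repo name
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_name,
    quantization_config=bnb_config,
    device_map="auto",
    trust_remote_code=True,  # Falcon relied on custom modelling code at the time
)
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = PeftModel.from_pretrained(base_model, "RobertoFont/falcon-7b-chat-oasst1")
```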
### Framework versions
- PEFT 0.4.0.dev0
|
choozmo/choozmomic | choozmo | 2023-07-08T15:38:56Z | 2 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"lora",
"base_model:runwayml/stable-diffusion-v1-5",
"base_model:adapter:runwayml/stable-diffusion-v1-5",
"license:creativeml-openrail-m",
"region:us"
] | text-to-image | 2023-07-08T15:38:52Z | ---
license: creativeml-openrail-m
base_model: runwayml/stable-diffusion-v1-5
instance_prompt: choozmomic
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- lora
inference: true
---
# LoRA DreamBooth - choozmomic
These are LoRA adaptation weights for [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5). The weights were trained on the instance prompt "choozmomic" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below.
|
CeroShrijver/chinese-lert-large-ling-cls | CeroShrijver | 2023-07-08T15:30:51Z | 112 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | 2023-07-08T14:20:50Z | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: chinese-lert-large-ling-cls
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# chinese-lert-large-ling-cls
This model is a fine-tuned version of [hfl/chinese-lert-large](https://huggingface.co/hfl/chinese-lert-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4531
- Accuracy: 0.7822
- Test Accuracy: 0.8102
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5082 | 1.0 | 1008 | 0.5476 | 0.7601 |
| 0.3669 | 2.0 | 2017 | 0.5202 | 0.7978 |
| 0.2006 | 3.0 | 3025 | 0.8294 | 0.7748 |
| 0.0954 | 4.0 | 4034 | 1.2630 | 0.7931 |
| 0.0447 | 5.0 | 5040 | 1.4531 | 0.7822 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.11.6
|
Ocelotr/speecht5tts | Ocelotr | 2023-07-08T15:13:15Z | 94 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"ara",
"generated_from_trainer",
"ar",
"dataset:SDA_CLEAN_NAJDI",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | 2023-07-08T13:55:16Z | ---
language:
- ar
license: mit
tags:
- ara
- generated_from_trainer
datasets:
- SDA_CLEAN_NAJDI
model-index:
- name: SpeechT5 TTS
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the SDA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5062
## Model description
More information needed
## Intended uses & limitations
More information needed
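A hedged inference sketch (not in the original card); the zero speaker embedding is only a placeholder, and a real x-vector should be used for natural-sounding speech:
```python
import soundfile as sf
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("Ocelotr/speecht5tts")
model = SpeechT5ForTextToSpeech.from_pretrained("Ocelotr/speecht5tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="مرحبا بالعالم", return_tensors="pt")
speaker_embeddings = torch.zeros((1, 512))  # placeholder speaker embedding (x-vector)
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```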
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5789 | 1.49 | 1000 | 0.5299 |
| 0.5448 | 2.97 | 2000 | 0.5150 |
| 0.5422 | 4.46 | 3000 | 0.5090 |
| 0.5417 | 5.95 | 4000 | 0.5062 |
### Framework versions
- Transformers 4.31.0.dev0
- Pytorch 2.0.1+cu117
- Datasets 2.13.0
- Tokenizers 0.13.3
|
AACEE/textual_inversion_airship | AACEE | 2023-07-08T15:03:05Z | 42 | 0 | diffusers | [
"diffusers",
"tensorboard",
"stable-diffusion",
"stable-diffusion-diffusers",
"text-to-image",
"textual_inversion",
"base_model:stabilityai/stable-diffusion-2-depth",
"base_model:adapter:stabilityai/stable-diffusion-2-depth",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | 2023-07-08T11:45:59Z |
---
license: creativeml-openrail-m
base_model: stabilityai/stable-diffusion-2-depth
tags:
- stable-diffusion
- stable-diffusion-diffusers
- text-to-image
- diffusers
- textual_inversion
inference: true
---
# Textual inversion text2image fine-tuning - AACEE/textual_inversion_airship
These are textual inversion adaptation weights for stabilityai/stable-diffusion-2-depth. You can find some example images below.
|
manueltonneau/clinicalcovid-bert-nli | manueltonneau | 2023-07-08T14:59:52Z | 8 | 0 | transformers | [
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"en",
"doi:10.57967/hf/0868",
"endpoints_compatible",
"region:us"
] | feature-extraction | 2022-03-02T23:29:05Z | ---
language:
- en
---
All information can be found here: https://github.com/manueltonneau/covid-berts
If you find this model useful, please cite:
```
@misc {manuel_tonneau_2023,
author = { {Manuel Tonneau} },
title = { clinicalcovid-bert-nli (Revision 9a0bad1) },
year = 2023,
url = { https://huggingface.co/manueltonneau/clinicalcovid-bert-nli },
doi = { 10.57967/hf/0868 },
publisher = { Hugging Face }
}
```
|
Quacktab/ppo-LunarLander-v2 | Quacktab | 2023-07-08T14:52:36Z | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | 2023-07-08T06:22:12Z | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 273.28 +/- 19.40
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
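A hedged sketch of watching the agent play (not from the original card; the checkpoint filename is assumed):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

checkpoint = load_from_hub("Quacktab/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip")  # filename assumed
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2", render_mode="human")
obs, _ = env.reset()
done = False
while not done:
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
```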
|
RogerB/afriberta_small-finetuned-kintweetsC | RogerB | 2023-07-08T14:50:21Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-08T14:41:18Z | ---
tags:
- generated_from_trainer
model-index:
- name: afriberta_small-finetuned-kintweetsC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta_small-finetuned-kintweetsC
This model is a fine-tuned version of [castorini/afriberta_small](https://huggingface.co/castorini/afriberta_small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1630
## Model description
More information needed
## Intended uses & limitations
More information needed
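A minimal usage sketch (not in the card); AfriBERTa checkpoints follow the XLM-R architecture, so `<mask>` is presumably the mask token, and the Kinyarwanda example sentence is arbitrary:
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="RogerB/afriberta_small-finetuned-kintweetsC")
print(fill_mask("Umunsi mwiza <mask>."))  # arbitrary Kinyarwanda example
```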
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.5819 | 1.0 | 900 | 4.2983 |
| 4.3316 | 2.0 | 1800 | 4.1280 |
| 4.2441 | 3.0 | 2700 | 4.2305 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
tyavika/LR1E5-BS8-Distilbert-QA-Pytorch-FULL | tyavika | 2023-07-08T14:39:57Z | 105 | 0 | transformers | [
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | 2023-07-08T12:07:30Z | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: LR1E5-BS8-Distilbert-QA-Pytorch-FULL.pt
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# LR1E5-BS8-Distilbert-QA-Pytorch-FULL.pt
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2312
## Model description
More information needed
## Intended uses & limitations
More information needed
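A minimal usage sketch (inference is not documented in the card):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="tyavika/LR1E5-BS8-Distilbert-QA-Pytorch-FULL")
result = qa(
    question="What was the model fine-tuned from?",
    context="This model is a fine-tuned version of distilbert-base-uncased.",
)
print(result["answer"], result["score"])
```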
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3783 | 1.0 | 6580 | 1.2680 |
| 1.1465 | 2.0 | 13160 | 1.1625 |
| 0.8655 | 3.0 | 19740 | 1.1681 |
| 0.7235 | 4.0 | 26320 | 1.2312 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
RogerB/afriberta_large-finetuned-kintweetsC | RogerB | 2023-07-08T14:26:56Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-08T14:13:13Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: afriberta_large-finetuned-kintweetsC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afriberta_large-finetuned-kintweetsC
This model is a fine-tuned version of [castorini/afriberta_large](https://huggingface.co/castorini/afriberta_large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9358
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.3534 | 1.0 | 900 | 4.0667 |
| 4.0818 | 2.0 | 1800 | 3.9280 |
| 3.9884 | 3.0 | 2700 | 3.9982 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
lvob/relextract_old | lvob | 2023-07-08T14:10:57Z | 0 | 0 | null | [
"region:us"
] | null | 2023-07-08T07:40:49Z | # Relation Extraction Model
1. Copy your .spacy files into the data folder
2. Run the "generate_mappings.py" file inside the datasets folder
3. Run the "rebel_train.py" file from the src folder
|
bpw1621/ppo-Huggy | bpw1621 | 2023-07-08T14:10:30Z | 0 | 0 | ml-agents | [
"ml-agents",
"tensorboard",
"onnx",
"Huggy",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Huggy",
"region:us"
] | reinforcement-learning | 2023-07-08T14:10:20Z | ---
library_name: ml-agents
tags:
- Huggy
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Huggy
---
# **ppo** Agent playing **Huggy**
This is a trained model of a **ppo** agent playing **Huggy**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Step 1: Find your model_id: bpw1621/ppo-Huggy
3. Step 2: Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
|
sled-umich/OctoBERT | sled-umich | 2023-07-08T13:51:05Z | 0 | 0 | null | [
"arxiv:2306.08685",
"region:us"
] | null | 2023-07-07T08:25:33Z | Weights for the pretrained OctoBERT model.
[Model Demo](https://huggingface.co/spaces/sled-umich/OctoBERT-flickr-demo) • [Paper](https://arxiv.org/abs/2306.08685)
[Ziqiao Ma](https://mars-tin.github.io/)\*, [Jiayi Pan](https://www.jiayipan.me/)\*, [Joyce Chai](https://web.eecs.umich.edu/~chaijy/) (\* denotes equal contribution) |
openchat/openchat_v2 | openchat | 2023-07-08T13:51:04Z | 1,485 | 12 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"en",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | 2023-07-07T15:30:54Z | ---
language:
- en
tags:
- llama
license: other
---
# OpenChat: Advancing Open-source Language Models with Imperfect Data
The OpenChat v2 family is inspired by offline reinforcement learning, including conditional behavior cloning (OpenChat-v2) and weighted behavior cloning (OpenChat-v2-w).
- **[OpenChat-v2-w](https://huggingface.co/openchat/openchat_v2_w)**: ~80k cleaned ShareGPT data with conditioning and weighted loss, based on LLaMA-13B with a context length of 2048.
- Achieves **50.9%** win-rate over ChatGPT on MT-bench.
- Achieves **79.4%** win-rate over ChatGPT on Vicuna-bench.
- Achieves **87.1%** win-rate over text-davinci-003 on AlpacaEval.
- **[OpenChat-v2](https://huggingface.co/openchat/openchat_v2)**: ~80k cleaned ShareGPT data with only conditioning, based on LLaMA-13B with a context length of 2048.
- Achieves **48.1%** win-rate over ChatGPT on MT-bench.
- Achieves **80.6%** win-rate over ChatGPT on Vicuna-bench.
- Achieves **85.0%** win-rate over text-davinci-003 on AlpacaEval.
## Code and Inference Server
We provide the full source code, including an inference server compatible with the "ChatCompletions" API, in the [OpenChat](https://github.com/imoneoi/openchat) GitHub repository.
## Web UI
OpenChat also includes a web UI for a better user experience. See the GitHub repository for instructions.
## Conversation Template
The conversation template **involves concatenating tokens**, and cannot be expressed in plain text.
Besides the base model vocabulary, an end-of-turn token `<|end_of_turn|>` is added.
Here is an example of the single-round conversation template:
```python
def tokenize_single_input(tokenizer, prompt):
    # OpenChat V2
    human_prefix = "User:"
    prefix = "Assistant GPT4:"
    eot_token = "<|end_of_turn|>"
    bos_token = "<s>"

    def _tokenize(text):
        return tokenizer.convert_tokens_to_ids(tokenizer._tokenize(text))

    def _tokenize_special(special_name):
        return tokenizer.convert_tokens_to_ids(special_name)

    return [_tokenize_special(bos_token)] + _tokenize(human_prefix) + _tokenize(prompt) + [_tokenize_special(eot_token)] + \
        _tokenize(prefix)
```
To explore conditional language models, you can also set `prefix = "Assistant GPT3:"` to mimic ChatGPT behavior (this may cause performance degradation).
*Hint: In BPE, `tokenize(A) + tokenize(B)` does not always equal `tokenize(A + B)`*
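A hedged end-to-end sketch (not from the original card) showing how the helper above could drive generation with 🤗 Transformers; the generation settings are illustrative only:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("openchat/openchat_v2")
model = AutoModelForCausalLM.from_pretrained("openchat/openchat_v2", torch_dtype=torch.bfloat16, device_map="auto")

input_ids = tokenize_single_input(tokenizer, "Hello, how are you?")  # helper defined above
input_tensor = torch.tensor([input_ids], device=model.device)

outputs = model.generate(
    input_tensor,
    max_new_tokens=256,
    eos_token_id=tokenizer.convert_tokens_to_ids("<|end_of_turn|>"),  # stop at end-of-turn
)
print(tokenizer.decode(outputs[0][len(input_ids):], skip_special_tokens=True))
```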
## Limitations
**Foundation Model Limitations**
Despite its advanced capabilities, OpenChat is still bound by the limitations inherent in its foundation models. These limitations may impact the model's performance in areas such as:
- Complex reasoning
- Mathematical and arithmetic tasks
- Programming and coding challenges
**Hallucination of Non-existent Information**
OpenChat may sometimes generate information that does not exist or is not accurate, also known as "hallucination". Users should be aware of this possibility and verify any critical information obtained from the model.
|
RogerB/afro-xlmr-mini-finetuned-kintweetsC | RogerB | 2023-07-08T13:49:45Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-08T13:33:09Z | ---
license: afl-3.0
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-mini-finetuned-kintweetsC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-mini-finetuned-kintweetsC
This model is a fine-tuned version of [Davlan/afro-xlmr-mini](https://huggingface.co/Davlan/afro-xlmr-mini) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.3689
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 10
- eval_batch_size: 10
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.7494 | 1.0 | 900 | 3.4395 |
| 3.5927 | 2.0 | 1800 | 3.3878 |
| 3.5147 | 3.0 | 2700 | 3.3751 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
hopkins/eng-mya-union | hopkins | 2023-07-08T13:44:58Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-08T13:24:07Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-mya-union
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-mya-union
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8049
- Bleu: 5.0257
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
Cookieszz/Xiaoz | Cookieszz | 2023-07-08T13:39:19Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-08T13:39:19Z | ---
license: bigscience-openrail-m
---
|
RogerB/afro-xlmr-large-finetuned-kintweetsC | RogerB | 2023-07-08T13:28:57Z | 3 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"fill-mask",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | 2023-07-08T12:35:48Z | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: afro-xlmr-large-finetuned-kintweetsC
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# afro-xlmr-large-finetuned-kintweetsC
This model is a fine-tuned version of [Davlan/afro-xlmr-large](https://huggingface.co/Davlan/afro-xlmr-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3370
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 3
- eval_batch_size: 3
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.6129 | 1.0 | 3000 | 2.3303 |
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.1
- Tokenizers 0.13.3
|
Macosrun/Macrown | Macosrun | 2023-07-08T13:26:14Z | 0 | 0 | null | [
"license:bigscience-openrail-m",
"region:us"
] | null | 2023-07-08T13:26:13Z | ---
license: bigscience-openrail-m
---
|
Falah/stable_diffusion_prompts_gen | Falah | 2023-07-08T13:05:30Z | 21 | 3 | diffusers | [
"diffusers",
"pytorch",
"gpt2",
"art",
"stable diffusion",
"text-generation",
"en",
"dataset:Falah/stable_diffusion_prompts_dataset",
"license:apache-2.0",
"region:us"
] | text-generation | 2023-07-08T09:30:49Z | ---
license: apache-2.0
datasets:
- Falah/stable_diffusion_prompts_dataset
language:
- en
metrics:
- accuracy
library_name: diffusers
pipeline_tag: text-generation
tags:
- art
- stable diffusion
- gpt2
---
# Stable Diffusion Prompts Generation Model
This model is designed for generating illustration art style prompts for the Stable Diffusion tool for text-to-image generation.
It utilizes the custom dataset "Falah/stable_diffusion_prompts_dataset" to generate creative and coherent text prompts.
## Examples
To load the model and run inference, you can use the following code snippet:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "Falah/stable_diffusion_prompts_gen"
dataset_name = "Falah/stable_diffusion_prompts_dataset"
prompt = r'a beautiful female' # the beginning of the prompt
temperature = 0.9 # A higher temperature will produce more diverse results, but with a higher risk of less coherent text
top_k = 8 # the number of tokens to sample from at each step
max_length = 200 # the maximum number of tokens for the output of the model
repetition_penalty = 1.2 # the penalty value for each repetition of a token
num_return_sequences = 5 # the number of results to generate
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
input_ids = tokenizer(prompt, return_tensors='pt').input_ids
output = model.generate(
    input_ids,
    do_sample=True,
    temperature=temperature,
    top_k=top_k,
    max_length=max_length,
    num_return_sequences=num_return_sequences,
    repetition_penalty=repetition_penalty,
    early_stopping=True
)

print('\033[96m' + prompt + '\033[0m')
for i in range(len(output)):
    print(tokenizer.decode(output[i], skip_special_tokens=True) + '\n')
```
Below are example prompts generated by the model and tested with the [STABLE DIFFUSION XL](https://clipdrop.co/) website, which turns the prompts into images using a Stable Diffusion model:
```
a beautiful female
a beautiful female woman, and she's got the best hair in this world. I'm not saying her look is bad (I think it has to be), but my point was that when one looks at these things like we're all looking for something different about our bodies as individuals they are completely wrong; there isn't anything inherently evil with being an animal or having two legs instead of just walking on both sides of you while holding your other leg up so tightly around yourself - no matter how
```

## Another generated prompt
```
a beautiful female and she's been in the business for over 30 years.
I've had my fair share of bad things, and I'm sure many more will befall me at some point as well… but it is one thing when you have such an incredible woman on your team that makes life so difficult to bear (aside from being very much human) while also having her back with no regard whatsoever towards any personal issues or even just trying desperately hard not too far away! And
```

--------------

Feel free to modify the parameters like `prompt`, `temperature`, `top_k`, etc., to experiment with different outputs.
## Citation
If you use this model or the associated dataset in your research or projects, please cite it as follows:
```
@sd_prompts{stable_diffusion_prompts_generating_gpt2,
author = {Falah.G.Salieh},
title = {Stable Diffusion Prompt Generation by Fine-tuning GPT-2},
year = {2023},
publisher = {Hugging Face},
url = {https://huggingface.co/Falah/stable_diffusion_prompts_gen},
}
```
## License
This project is licensed under the Apache License, Version 2.0. Please see the [LICENSE](link-to-license-file) file for more details. |
hopkins/eng-guj-union | hopkins | 2023-07-08T13:05:16Z | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | 2023-07-08T12:43:45Z | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: eng-guj-union
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# eng-guj-union
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1883
- Bleu: 3.1843
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
swl-models/AingDiffusion-v2.5 | swl-models | 2023-07-08T13:05:09Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-07-08T13:01:20Z | ---
license: creativeml-openrail-m
---
|
swl-models/NullStyle-v1.0 | swl-models | 2023-07-08T13:03:43Z | 0 | 0 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | 2023-06-21T15:34:25Z | ---
license: creativeml-openrail-m
---
|