---
tags:
- generated_from_trainer
datasets:
- city_learn
model-index:
- name: decision_transformer_1
  results: []
---

# decision_transformer_1

This model is a fine-tuned version of on the city_learn dataset.

## Model description

Normalization statistics for the 28-dimensional observation vector:

```
state_mean: [6.52472527e+00 4.00000000e+00 1.25000000e+01 1.68241415e+01 1.68242216e+01 1.68249313e+01 1.68268315e+01 7.29934753e+01 7.29969093e+01 7.29977106e+01 7.29979396e+01 2.08098329e+02 2.08098329e+02 2.07998283e+02 2.08040522e+02 2.01204785e+02 2.01204785e+02 2.00978709e+02 2.01073375e+02 1.56447270e-01 1.06496225e+00 6.98845768e-01 2.90539899e-01 4.02466726e-01 2.73094091e-01 2.73094091e-01 2.73094091e-01 2.73094091e-01]
state_std:  [3.45249551e+00 2.00000100e+00 6.92218755e+00 3.55839049e+00 3.55843321e+00 3.55972060e+00 3.56299330e+00 1.64936264e+01 1.64957718e+01 1.64978640e+01 1.65000009e+01 2.92600647e+02 2.92600647e+02 2.92543689e+02 2.92592247e+02 2.96262436e+02 2.96262436e+02 2.96151575e+02 2.96175911e+02 3.53418023e-02 8.88195655e-01 1.01691038e+00 3.23315111e-01 9.21189104e-01 1.17759695e-01 1.17759695e-01 1.17759695e-01 1.17759695e-01]
```
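Decision Transformer pipelines conventionally z-score observations with statistics like these before inference. A minimal sketch of that preprocessing step (the `normalize_state` helper is hypothetical, and the arrays below are truncated to the first four entries of the values above for brevity):

```python
import numpy as np

# First four entries of state_mean / state_std from the card above
# (truncated for brevity; a real pipeline would use all 28 values).
state_mean = np.array([6.52472527e+00, 4.00000000e+00, 1.25000000e+01, 1.68241415e+01])
state_std = np.array([3.45249551e+00, 2.00000100e+00, 6.92218755e+00, 3.55839049e+00])

def normalize_state(state):
    """Z-score a raw CityLearn observation before feeding it to the model."""
    return (np.asarray(state, dtype=np.float64) - state_mean) / state_std

# An observation equal to the mean normalizes to (approximately) zero.
print(normalize_state(state_mean))
```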

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 500
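For cards generated from the `Trainer` API, the list above maps onto a `transformers.TrainingArguments` configuration roughly as follows (a sketch, not the exact training script; `output_dir` is a placeholder, and the Adam betas/epsilon listed above match the library defaults):

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
args = TrainingArguments(
    output_dir="decision_transformer_1",  # placeholder path
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=500,
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults.
)
```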

### Training results

### Framework versions

- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.10.0
- Tokenizers 0.13.2