danj0nes committed
Commit 9844405 · 1 Parent(s): 88f1163

Update README.md

Files changed (1)
  1. README.md +7 -22
README.md CHANGED
```diff
@@ -4,30 +4,19 @@ base_model: gpt2
 tags:
 - generated_from_trainer
 model-index:
-- name: dropout_c-gpt2_s-0
+- name: dropout_gpt2
   results: []
+language:
+- en
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
-# dropout_c-gpt2_s-0
+# dropout_gpt2
 
-This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
+This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an English Wikipedia dataset.
 
 ## Model description
 
-More information needed
-
-## Intended uses & limitations
-
-More information needed
-
-## Training and evaluation data
-
-More information needed
-
-## Training procedure
+Dropout debiased GPT-2 using the hyperparameters specified in [Measuring and Reducing Gendered Correlations in Pre-trained Models (Webster et al. 2021)](https://arxiv.org/abs/2010.06032).
 
 ### Training hyperparameters
@@ -42,13 +31,9 @@ The following hyperparameters were used during training:
 - lr_scheduler_type: linear
 - training_steps: 4000
 
-### Training results
-
-
-
 ### Framework versions
 
 - Transformers 4.35.2
 - Pytorch 2.1.0+cu121
 - Datasets 2.16.1
-- Tokenizers 0.15.0
+- Tokenizers 0.15.0
```
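The updated model description says the checkpoint is a dropout-debiased GPT-2 following Webster et al. (2021), where bias is reduced by continuing pre-training with increased dropout rates. A minimal sketch of that setup with the `transformers` library is shown below; the 0.15 rates are illustrative assumptions for this kind of recipe, not values confirmed by the commit.

```python
from transformers import GPT2Config

# GPT-2's stock config applies 0.1 dropout on the attention, residual, and
# embedding paths; the debiasing recipe raises these before continued
# pre-training. The 0.15 values here are assumed, not from this checkpoint.
config = GPT2Config()       # attn_pdrop, resid_pdrop, embd_pdrop default to 0.1
config.attn_pdrop = 0.15    # attention dropout (assumed rate)
config.resid_pdrop = 0.15   # residual dropout (assumed rate)
config.embd_pdrop = 0.15    # embedding dropout (assumed rate)

# The debiased model would then be initialized from the pre-trained weights
# and trained further on the target corpus, e.g.:
# model = GPT2LMHeadModel.from_pretrained("gpt2", config=config)
```

Per the hyperparameters in the diff, the continued pre-training here ran for 4000 steps with a linear learning-rate schedule.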