ahmedheakl committed on
Commit 4e514b8 · verified · 1 Parent(s): 91d7321

ahmedheakl/asm2asm-qwen2.5coder-0.5b-500k-2ep

Files changed (2):
  1. README.md +37 -37
  2. generation_config.json +1 -1
README.md CHANGED
@@ -1,57 +1,57 @@
  ---
- base_model: Qwen/Qwen2.5-Coder-0.5B-Instruct
  library_name: transformers
- model_name: asm2asm-qwen2.5coder-0.5b-500k-2ep
  tags:
- - generated_from_trainer
  - trl
  - sft
- licence: license
  ---

- # Model Card for asm2asm-qwen2.5coder-0.5b-500k-2ep

- This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct).
- It has been trained using [TRL](https://github.com/huggingface/trl).

- ## Quick start

- ```python
- from transformers import pipeline
-
- question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
- generator = pipeline("text-generation", model="ahmedheakl/asm2asm-qwen2.5coder-0.5b-500k-2ep", device="cuda")
- output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
- print(output["generated_text"])
- ```

- ## Training procedure

- [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/ahmed-heakl/huggingface/runs/6vagatxa)

- This model was trained with SFT.

- ### Framework versions

- - TRL: 0.12.1
- - Transformers: 4.46.3
- - Pytorch: 2.5.1+cu124
- - Datasets: 3.1.0
- - Tokenizers: 0.20.3

- ## Citations

- Cite TRL as:
-
- ```bibtex
- @misc{vonwerra2022trl,
- title = {{TRL: Transformer Reinforcement Learning}},
- author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
- year = 2020,
- journal = {GitHub repository},
- publisher = {GitHub},
- howpublished = {\url{https://github.com/huggingface/trl}}
- }
- ```
 
  ---
  library_name: transformers
+ license: apache-2.0
+ base_model: Qwen/Qwen2.5-Coder-0.5B-Instruct
  tags:
  - trl
  - sft
+ - generated_from_trainer
+ model-index:
+ - name: asm2asm-qwen2.5coder-0.5b-500k-2ep
+   results: []
  ---

+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->

+ # asm2asm-qwen2.5coder-0.5b-500k-2ep

+ This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-0.5B-Instruct) on an unknown dataset.

+ ## Model description

+ More information needed

+ ## Intended uses & limitations

+ More information needed

+ ## Training and evaluation data

+ More information needed

+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 1
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 8
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 2

+ ### Training results

+ ### Framework versions
+
+ - Transformers 4.44.2
+ - Pytorch 2.4.1+cu118
+ - Datasets 3.0.0
+ - Tokenizers 0.19.1
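For context, a minimal sketch of how the hyperparameters listed above might map onto TRL's `SFTTrainer` (the card's tags name TRL and SFT, and the previous card pinned TRL 0.12.1). The data file and output directory are placeholders, not values recorded in this commit, since the card calls the dataset unknown:

```python
# Hypothetical reconstruction of the training setup implied by the
# hyperparameters in the card; "train.jsonl" is a placeholder path.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# SFTTrainer expects a "text" column by default (dataset_text_field).
dataset = load_dataset("json", data_files="train.jsonl", split="train")

config = SFTConfig(
    output_dir="asm2asm-qwen2.5coder-0.5b-500k-2ep",
    learning_rate=2e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=8,  # effective train batch size: 1 x 8 = 8
    num_train_epochs=2,
    lr_scheduler_type="linear",
    seed=42,
    # Adam betas (0.9, 0.999) and epsilon 1e-08 are the TrainingArguments
    # defaults, so they need no explicit setting here.
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-Coder-0.5B-Instruct",
    args=config,
    train_dataset=dataset,
)
trainer.train()
```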
generation_config.json CHANGED
@@ -10,5 +10,5 @@
  "temperature": 0.7,
  "top_k": 20,
  "top_p": 0.8,
- "transformers_version": "4.46.3"
  }

  "temperature": 0.7,
  "top_k": 20,
  "top_p": 0.8,
+ "transformers_version": "4.44.2"
  }
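The only substantive change in generation_config.json above is the `transformers_version` stamp, which records the library version that serialized the file; the decoding defaults (`temperature`, `top_k`, `top_p`) are unchanged. As a minimal sketch, those defaults could be passed explicitly at inference time like this (the prompt is illustrative only):

```python
# Sampling with the decoding defaults recorded in generation_config.json.
# These values load automatically with the model; passing them explicitly
# just makes the stored defaults visible.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ahmedheakl/asm2asm-qwen2.5coder-0.5b-500k-2ep"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("int main() {", return_tensors="pt")  # illustrative prompt
outputs = model.generate(
    **inputs,
    do_sample=True,   # temperature/top_k/top_p only take effect when sampling
    temperature=0.7,
    top_k=20,
    top_p=0.8,
    max_new_tokens=128,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```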