Model save
    	
README.md CHANGED
@@ -27,7 +27,7 @@ print(output["generated_text"])
 
 ## Training procedure
 
-[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/saiswaroopk/huggingface/runs/
+[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/saiswaroopk/huggingface/runs/hg17j6dw)
 
 
 This model was trained with SFT.
@@ -36,7 +36,7 @@ This model was trained with SFT.
 
 - TRL: 0.16.1
 - Transformers: 4.51.3
-- Pytorch: 2.
+- Pytorch: 2.6.0+cu124
 - Datasets: 3.5.0
 - Tokenizers: 0.21.1
 
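For context on what the completed card documents: the README records an SFT run with TRL that reports to Weights & Biases (hence the run badge added in this diff). Below is a minimal sketch of how such a run is typically set up under the versions listed above (trl 0.16.1, transformers 4.51.3, torch 2.6.0+cu124, datasets 3.5.0, tokenizers 0.21.1); the base model and dataset names are placeholders, since this commit does not name them.

```python
# Minimal TRL SFT sketch matching the framework versions in the diff.
# The model id, dataset, and output_dir below are placeholders (assumptions),
# not values taken from this commit.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder dataset

training_args = SFTConfig(
    output_dir="sft-model",   # placeholder output / Hub repo directory
    report_to="wandb",        # logs the run to Weights & Biases, yielding a run URL
    push_to_hub=True,         # pushes the model card (the README being diffed here)
)

trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",  # placeholder base model
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```

Presumably `report_to="wandb"` is what produced the run link (`runs/hg17j6dw`) that this commit fills into the badge, and the framework-versions list is auto-generated by TRL's model-card template.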