johnrachwanpruna committed (verified)
Commit 113f2ec · 1 Parent(s): d5c14ea

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -47,7 +47,7 @@ tags:
 
 You can run the smashed model with these steps:
 
-0. Check requirements from the original repo google/gemma-7b-it installed. In particular, check python, cuda, and transformers versions.
+0. Check that the requirements from the original repo google/codegemma-7b-it are installed. In particular, check the python, cuda, and transformers versions.
 1. Make sure that you have installed quantization related packages.
 ```bash
 pip install transformers accelerate bitsandbytes>0.37.0
@@ -75,7 +75,7 @@ The configuration info are in `smash_config.json`.
 
 ## Credits & License
 
-The license of the smashed model follows the license of the original model. Please check the license of the original model google/gemma-7b-it before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
+The license of the smashed model follows the license of the original model. Please check the license of the original model google/codegemma-7b-it, which provided the base model, before using this model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
 
 ## Want to compress other models?
 
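For context on the install step shown in the diff above, the snippet below is a minimal sketch of loading and running a bitsandbytes-quantized smashed model with `transformers` and `accelerate`. The repo id is a hypothetical placeholder and the 4-bit configuration is an assumption; the smashed repository's own model card has the authoritative loading instructions.

```python
# Minimal sketch: load a bitsandbytes-quantized "smashed" model with transformers.
# The repo id below is a placeholder; substitute the actual smashed model id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "PrunaAI/REPLACE-WITH-SMASHED-MODEL-ID"  # hypothetical placeholder

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # assumption: 4-bit bitsandbytes quantization
    bnb_4bit_compute_dtype=torch.float16,  # compute in fp16 for speed
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",                     # requires accelerate; places layers on GPU/CPU
)

prompt = "Write a Python function that reverses a string."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```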