Feat(readme): Add instructions for Google GPU VM instances (#1410)
README.md
@@ -249,6 +249,21 @@ For cloud GPU providers that support docker images, use [`winglian/axolotl-cloud
 ```
 </details>
 
+##### GCP
+
+<details>
+
+<summary>Click to Expand</summary>
+
+Use a Deep Learning Linux OS image with CUDA and PyTorch installed, then follow the quickstart instructions.
+
+Make sure to run the command below to uninstall torch_xla:
+```bash
+pip uninstall -y torch_xla[tpu]
+```
+
+</details>
+
 #### Windows
 Please use WSL or Docker!
 
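For provisioning the VM itself, a minimal sketch using the gcloud CLI, assuming placeholder values for the instance name, zone, machine type, accelerator, and Deep Learning image family (adjust them to your project and quota):

```bash
# Sketch: create a GCP Deep Learning VM with a single NVIDIA T4 GPU.
# The name, zone, machine type, accelerator, and image family below are
# illustrative assumptions, not values from this PR.
gcloud compute instances create axolotl-gpu-vm \
  --zone=us-central1-a \
  --machine-type=n1-standard-8 \
  --accelerator=type=nvidia-tesla-t4,count=1 \
  --image-family=pytorch-latest-gpu \
  --image-project=deeplearning-platform-release \
  --boot-disk-size=200GB \
  --maintenance-policy=TERMINATE \
  --metadata="install-nvidia-driver=True"
```

Once the instance is up, SSH in, run the `pip uninstall` command from the diff above, and continue with the quickstart.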