metascroy committed · verified
Commit 3be6f2e · Parent(s): 8ad2519

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED

````diff
@@ -21,11 +21,11 @@ pipeline_tag: text-generation
 [Qwen3-4B](https://huggingface.co/Qwen/Qwen3-4B) is quantized by the PyTorch team using [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao) with 8-bit embeddings and 8-bit dynamic activations with 4-bit weight linears (8da4w).
 The model is suitable for mobile deployment with [ExecuTorch](https://github.com/pytorch/executorch).
 
-We provide the [quantized pte](https://huggingface.co/pytorch/Qwen3-4B-8da4w/blob/main/qwen3-4b-1024-ctx.pte) for direct use in ExecuTorch.
+We provide the [quantized pte](https://huggingface.co/pytorch/Qwen3-4B-8da4w/blob/main/qwen3-4B-8da4w-1024-cxt.pte) for direct use in ExecuTorch.
 (The provided pte file is exported with a max_seq_length/max_context_length of 1024; if you wish to change this, re-export the quantized model following the instructions in [Exporting to ExecuTorch](#exporting-to-executorch).)
 
 # Running in a mobile app
-The [pte file](https://huggingface.co/pytorch/Qwen3-4B-8da4w/blob/main/qwen3-4b-1024-ctx.pte) can be run with ExecuTorch on a mobile phone. See the [instructions](https://pytorch.org/executorch/main/llm/llama-demo-ios.html) for doing this in iOS.
+The [pte file](https://huggingface.co/pytorch/Qwen3-4B-8da4w/blob/main/qwen3-4B-8da4w-1024-cxt.pte) can be run with ExecuTorch on a mobile phone. See the [instructions](https://pytorch.org/executorch/main/llm/llama-demo-ios.html) for doing this in iOS.
 On iPhone 15 Pro, the model runs at [TODO: ADD] tokens/sec and uses [TODO: ADD] Mb of memory.
 
 [TODO: ADD SCREENSHOT]
@@ -227,7 +227,7 @@ python -m executorch.examples.models.llama.export_llama \
   --metadata '{"get_bos_id":199999, "get_eos_ids":[200020,199999]}' \
   --max_seq_length 1024 \
   --max_context_length 1024 \
-  --output_name="qwen3-4B-8da4w-1024-cxt.pte"
+  --output_name="qwen3-4b-8da4w-1024-cxt.pte"
 ```
 
 After that you can run the model in a mobile app (see [Running in a mobile app](#running-in-a-mobile-app)).
````
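For context on the 8da4w scheme the README describes (8-bit embeddings, 8-bit dynamic activations, 4-bit weight linears), here is a minimal sketch of how such a quantization could be applied with torchao. The config names (`IntxWeightOnlyConfig`, `Int8DynamicActivationIntxWeightConfig`), the granularities, and the group size of 32 are assumptions about a recent torchao release, not details taken from this commit.

```python
# Sketch only, not the commit's recipe: 8da4w-style quantization with torchao.
# Config names, granularities, and group size are assumptions about a recent
# torchao release; check the torchao docs if they have moved.
import torch
from torch import nn
from transformers import AutoModelForCausalLM
from torchao.quantization.granularity import PerAxis, PerGroup
from torchao.quantization.quant_api import (
    Int8DynamicActivationIntxWeightConfig,
    IntxWeightOnlyConfig,
    quantize_,
)

model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-4B", torch_dtype=torch.float32
)

# 8-bit weight-only quantization for the embedding tables.
quantize_(
    model,
    IntxWeightOnlyConfig(weight_dtype=torch.int8, granularity=PerAxis(0)),
    filter_fn=lambda m, fqn: isinstance(m, nn.Embedding),
)

# 8-bit dynamic activations with 4-bit grouped weights for linear layers (8da4w).
quantize_(
    model,
    Int8DynamicActivationIntxWeightConfig(
        weight_dtype=torch.int4,
        weight_granularity=PerGroup(32),  # group size is an assumption
    ),
)
```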
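And a minimal sketch of sanity-checking the renamed .pte from Python before wiring it into the iOS demo app. The `executorch.runtime` module and the `forward(tokens, input_pos)` calling convention are assumptions about a recent ExecuTorch release and about how export_llama lays out the method; the token id is a placeholder.

```python
# Sketch only: load the exported .pte with ExecuTorch's Python runtime to
# confirm it executes before deploying to a phone. The API names and the
# (tokens, input_pos) input layout are assumptions about a recent release.
import torch
from executorch.runtime import Runtime

runtime = Runtime.get()
program = runtime.load_program("qwen3-4b-8da4w-1024-cxt.pte")
method = program.load_method("forward")

tokens = torch.tensor([[151644]], dtype=torch.long)  # placeholder token id
input_pos = torch.tensor([0], dtype=torch.long)      # start of the KV cache
outputs = method.execute([tokens, input_pos])        # assumes a single logits output
print(outputs[0].shape)
```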