---
license: gemma
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
  To access Gemma on Hugging Face, you’re required to review and agree to
  Google’s usage license. To do this, please ensure you’re logged in to Hugging
  Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model:
- google/gemma-2-2b-it
---
# litert-community/Gemma2-2B-IT
This model provides a few variants of [google/gemma-2-2b-it](https://huggingface.co/google/gemma-2-2b-it) that are ready for deployment on Android using the [LiteRT (formerly TensorFlow Lite) stack](https://ai.google.dev/edge/litert) and the [MediaPipe LLM Inference API](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference).
# Use the models
## Colab
_Disclaimer: The target deployment surfaces for the LiteRT models are Android, iOS, and Web, and the stack has been optimized for performance on those targets. Trying the models in Colab is an easier way to familiarize yourself with the LiteRT stack, with the caveat that performance (memory and latency) in Colab can be much worse than on a local device._
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/#fileId=https://huggingface.co/litert-community/Gemma2-2B-IT/blob/main/gemma2_tflite.ipynb)
## Android
To build the demo app from source, please follow the [instructions](https://github.com/google-ai-edge/mediapipe-samples/blob/main/examples/llm_inference/android/README.md) from the GitHub repository.
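For reference, below is a minimal Kotlin sketch of running one of these variants through the MediaPipe LLM Inference API. It assumes the MediaPipe Tasks GenAI dependency (`com.google.mediapipe:tasks-genai`) is added to the app and that a model file has already been placed on the device; the file path and prompt are illustrative.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Minimal sketch: load a Gemma2-2B-IT LiteRT bundle and generate a single response.
fun runGemma(context: Context): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/gemma2-2b-it.task") // illustrative path; point at your downloaded file
        .setMaxTokens(1280) // matches the KV cache size used in the benchmarks below
        .build()

    val llmInference = LlmInference.createFromOptions(context, options)
    return llmInference.generateResponse("Summarize what LiteRT is in one sentence.")
}
```

The linked sample app shows a complete integration, including streaming generation and UI handling.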
# Performance
## Android
Note that all benchmark stats were measured on a Samsung S24 Ultra with a KV cache size of 1280, 512 prefill tokens, and 128 decode tokens.
<table>
<tr>
<th>Quantization</th>
<th>Backend</th>
<th>Prefill (tokens/sec)</th>
<th>Decode (tokens/sec)</th>
<th>Time-to-first-token (sec)</th>
<th>Memory (RSS in MB)</th>
<th>Model size (MB)</th>
</tr>
<tr>
<td>dynamic_int8</td>
<td>CPU</td>
<td><p style="text-align: right">146</p></td>
<td><p style="text-align: right">11</p></td>
<td><p style="text-align: right">3.9</p></td>
<td><p style="text-align: right">4086</p></td>
<td><p style="text-align: right">2703</p></td>
</tr>
<tr>
<td>dynamic_int8</td>
<td>GPU</td>
<td><p style="text-align: right">1052</p></td>
<td><p style="text-align: right">15.6</p></td>
<td><p style="text-align: right">7.6</p></td>
<td><p style="text-align: right">5322</p></td>
<td><p style="text-align: right">2702</p></td>
</tr>
</table>
* Model size: measured as the size of the .tflite flatbuffer (the serialization format for LiteRT models).
* Memory: indicator of peak RAM usage.
* CPU inference is accelerated via the LiteRT [XNNPACK](https://github.com/google/XNNPACK) delegate with 4 threads.
* CPU benchmarks assume the XNNPACK cache is enabled.
* dynamic_int8: quantized model with int8 weights and float activations.