litert-community/Hammer2.1-1.5b
This model provides a few variants of MadeAgents/Hammer2.1-1.5b that are ready for deployment on Android using the LiteRT (formerly TFLite) stack, the MediaPipe LLM Inference API, and LiteRT-LM.
Use the models
Colab
Disclaimer: The target deployment surfaces for the LiteRT models are Android, iOS, and Web, and the stack has been optimized for performance on those targets. Trying the models out in Colab is an easy way to familiarize yourself with the LiteRT stack, with the caveat that performance (memory and latency) in Colab can be much worse than on a local device.
Android
Edge Gallery App
- Download or build the app from GitHub.
- Install the app from Google Play.
- Follow the instructions in the app.
LLM Inference API
- Download and install the APK.
- Follow the instructions in the app.
To build the demo app from source, follow the instructions in the GitHub repository.
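If you prefer to integrate the model directly rather than using the demo app, the following is a minimal Kotlin sketch of calling the MediaPipe LLM Inference API with one of these bundles. It assumes the `com.google.mediapipe:tasks-genai` dependency has been added to the app and that the bundle has already been downloaded to the device; the file name and path shown are placeholders, and option names may vary slightly between MediaPipe releases.

```kotlin
// build.gradle.kts (app module), assuming the tasks-genai artifact:
//   implementation("com.google.mediapipe:tasks-genai:<latest>")
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Minimal sketch: load a downloaded Hammer2.1-1.5b bundle and run one blocking
// generation. Call this off the main thread; the model path is a placeholder.
fun generateOnce(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/hammer2.1_1.5b.task") // hypothetical path
        .setMaxTokens(1280) // stay within the context length of the chosen variant
        .build()

    val llm = LlmInference.createFromOptions(context, options)
    val response = llm.generateResponse(prompt)
    llm.close() // release the model when finished
    return response
}
```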
iOS
- Clone the MediaPipe samples repository and follow the instructions to build the LLM Inference iOS Sample App using Xcode.
- Run the app via the iOS simulator or deploy to an iOS device.
Performance
Android
Note that all benchmark stats are from a Samsung S24 Ultra with multiple prefill signatures enabled.
| Backend | Quantization scheme | Context length | Prefill (tokens/sec) | Decode (tokens/sec) | Time-to-first-token (sec) | Model size (MB) | Peak RSS Memory (MB) | GPU Memory (MB) |
|---|---|---|---|---|---|---|---|---|
| CPU | fp32 (baseline) | 1280 | 51.50 | 9.99 | 20.30 | 6,180 | 6,252 | N/A |
| CPU | dynamic_int8 | 1280 | 290.00 | 34.47 | 3.79 | 1,598 | 1,998 | N/A |
| CPU | dynamic_int8 | 4096 | 162.90 | 23.66 | 6.54 | 1,598 | 2,215 | N/A |
| GPU | dynamic_int8 | 1280 | 1,648.95 | 30.20 | 3.21 | 1,598 | 1,814 | 1,505 |
| GPU | dynamic_int8 | 4096 | 920.04 | 27.00 | 4.17 | 1,598 | 1,866 | 1,659 |
- For the list of supported quantization schemes, see supported-schemes. These models use prefill signature lengths of 32, 128, 512, and 1280.
- Model Size: measured by the size of the .tflite flatbuffer (the serialization format for LiteRT models).
- Memory: indicator of peak RAM usage.
- CPU inference is accelerated via the LiteRT XNNPACK delegate with 4 threads.
- Benchmarks are run with the cache enabled and initialized; during the first run, time-to-first-token may differ. A rough sketch for timing generations on your own device follows below.
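For a rough idea of how the time-to-first-token and decode numbers above translate to your own device, the streaming API can be timed as in the Kotlin sketch below. This is only an approximation (the listener receives text chunks, not exact token counts), it reuses the hypothetical model path from the earlier sketch, and the option and listener names follow the MediaPipe tasks-genai API, so verify them against the version you ship.

```kotlin
import android.content.Context
import android.os.SystemClock
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Rough on-device timing sketch: streams a response and logs time-to-first-chunk
// plus an approximate decode rate. Chunks only approximate tokens, so treat the
// numbers as ballpark figures rather than exact benchmark results.
fun timeOneGeneration(context: Context, prompt: String) {
    val start = SystemClock.elapsedRealtime()
    var firstChunkMs = -1L
    var chunks = 0

    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/hammer2.1_1.5b.task") // hypothetical path
        .setMaxTokens(1280)
        .setResultListener { _, done ->
            if (firstChunkMs < 0) firstChunkMs = SystemClock.elapsedRealtime() - start
            chunks++
            if (done) {
                val totalMs = SystemClock.elapsedRealtime() - start
                val decodeSecs = ((totalMs - firstChunkMs) / 1000.0).coerceAtLeast(0.001)
                println("time-to-first-chunk: $firstChunkMs ms, ~${(chunks - 1) / decodeSecs} chunks/sec")
            }
        }
        .build()

    // Asynchronous call: partial results arrive on the listener above.
    // Remember to close() the LlmInference instance once generation is done.
    val llm = LlmInference.createFromOptions(context, options)
    llm.generateResponseAsync(prompt)
}
```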
Model tree for litert-community/Hammer2.1-1.5b
- Base model: Qwen/Qwen2.5-1.5B