---
license: apache-2.0
datasets:
- togethercomputer/RedPajama-Data-1T
tags:
- llama
---
# Model Summary
MobileLLaMA-2.7B-Base is a Transformer with 2.7 billion parameters. We downscale LLaMA to facilitate off-the-shelf deployment. To make our work reproducible, all models are trained on 1.3T tokens from the [RedPajama v1](https://www.together.ai/blog/redpajama) dataset only, which benefits further research by enabling controlled experiments.

We extensively assess our models on two standard natural language benchmarks, covering language understanding and common-sense reasoning respectively. Experimental results show that MobileLLaMA is on par with the most recent open-source models. MobileLLaMA 2.7B also demonstrates performance competitive with INCITE 3B (V1) and OpenLLaMA 3B (V1), while being about 40% faster than OpenLLaMA 3B on a Snapdragon 888 CPU, as shown in Table 5 of our [paper](https://arxiv.org/abs/2312.16886).

# Model Sources
- Repository: https://github.com/Meituan-AutoML/MobileVLM
- Paper: https://arxiv.org/abs/2312.16886

# How to Get Started with the Model
Model weights can be loaded with Hugging Face Transformers; a minimal loading sketch is shown below, and full examples can be found on [GitHub](https://github.com/Meituan-AutoML/MobileVLM).
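
The following is a minimal sketch of loading the model and running greedy generation with Transformers. The repository id `mtgv/MobileLLaMA-2.7B-Base` and the `float16` / `device_map` settings are assumptions for illustration; check the model page or the GitHub README for the exact identifier and recommended settings.

```python
# Minimal sketch: load MobileLLaMA-2.7B-Base with Hugging Face Transformers.
# The repo id below is an assumption; verify it against the model page.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mtgv/MobileLLaMA-2.7B-Base"  # assumed repository id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to reduce memory footprint
    device_map="auto",          # place weights on available GPU(s) or CPU
)

# Simple greedy generation from a text prompt.
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```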

# Datasets and Training
For training details, please refer to Section 4.1 of our paper: [MobileVLM: A Fast, Strong and Open Vision Language Assistant for Mobile Devices](https://arxiv.org/abs/2312.16886).