# Reka Flash 3.1

Reka Flash 3.1 is a 21B general-purpose reasoning model trained from scratch. It was trained on synthetic and public datasets for supervised finetuning, followed by large-scale RLOO with rule-based rewards. Reka Flash 3.1 improves substantially over Reka Flash 3 thanks to significant advances in our reinforcement learning stack and curated high-quality RL data. It is particularly strong at coding and as a base model to be finetuned on agentic tasks.

Reka Flash 3.1 improves by 10 points over Reka Flash 3 on LiveCodeBench v5 (Full set). On coding-related tasks, it is competitive with models such as Qwen3-32B, o3-mini, and Gemini 2.5 Flash Thinking. To learn more about the reinforcement learning behind these improvements, please check out this post.


Try it out at Reka Space.

Strong reasoning and coding skills are important capabilities for multimodal agentic use cases, and near-lossless quantization allows us to deploy our models anywhere. A multimodal version of Reka Flash 3.1 serves as a base model for our core products, Reka Research and Reka Vision. Please contact us for more information about how you can use them in your organization.

Model efficiency is critical for local deployment. We also release a quantized version of Reka Flash 3.1 at this link, and we open-source the corresponding quantization library at this link.

## Quickstart

For ease of deployment, the model is released in a Llama-compatible format, so you may use any Llama-compatible library to run it.

### Via Hugging Face

```python
import transformers

# Load the tokenizer and model; device_map="auto" places weights on available GPUs.
tokenizer = transformers.AutoTokenizer.from_pretrained("RekaAI/reka-flash-3.1")
model = transformers.AutoModelForCausalLM.from_pretrained(
    "RekaAI/reka-flash-3.1", torch_dtype="auto", device_map="auto"
)

# The chat template uses the "human" role from Reka's prompt format.
prompt = {"role": "human", "content": "Write a poem about large language models."}
text = tokenizer.apply_chat_template([prompt], tokenize=False, add_generation_prompt=True)

model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
outputs = model.generate(**model_inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### Via vLLM

```shell
# The cache mount path below is illustrative; adjust it for your setup.
docker run --rm -it --network=host --gpus '"device=0"' \
  -v ~/.cache/huggingface:/root/.cache/huggingface --shm-size=10.24gb \
  vllm/vllm-openai:latest serve RekaAI/reka-flash-3.1 --dtype auto -tp 1
```
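
Once the server is running, you can query it through vLLM's OpenAI-compatible API. A minimal sketch using the `openai` Python client, assuming the vLLM default port 8000:

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint; port 8000 is the vLLM default.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="RekaAI/reka-flash-3.1",
    # The standard OpenAI "user" role is used here; the server's chat template
    # maps messages into the model's native prompt format.
    messages=[{"role": "user", "content": "Write a poem about large language models."}],
    max_tokens=512,
)
print(response.choices[0].message.content)
```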

## Model Details

### Prompt Format

Reka Flash 3.1 uses the cl100k_base tokenizer and adds no additional special tokens. Its prompt format is as follows:

```
human: this is round 1 prompt <sep> assistant: this is round 1 response <sep> ...
```
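
For libraries without chat-template support, the prompt can be assembled by hand. A minimal sketch (the `format_prompt` helper below is illustrative, not part of the release; the exact whitespace handling is an assumption, and the built-in chat template is authoritative):

```python
def format_prompt(messages):
    """Serialize a list of {"role", "content"} dicts into Reka's prompt format.

    Roles alternate between "human" and "assistant"; the trailing
    "assistant:" cues the model to generate the next response.
    """
    parts = [f'{m["role"]}: {m["content"]}' for m in messages]
    return " <sep> ".join(parts) + " <sep> assistant:"

print(format_prompt([{"role": "human", "content": "Hello!"}]))
# human: Hello! <sep> assistant:
```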

Generation should stop upon seeing the string `<sep>` or the special token `<|endoftext|>`. A system prompt can be added by prepending it to the first user round:

```
human: You are a friendly assistant blah ... this is round 1 user prompt <sep> assistant: this is round 1 response <sep> ...
```
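
Because `<sep>` is an ordinary string rather than a single special token, string-based stopping is needed when generating outside the chat template. A sketch using the `stop_strings` argument available in recent transformers releases, continuing from the Hugging Face example above:

```python
# Stop on the "<sep>" string in addition to the tokenizer's end-of-text token.
# `stop_strings` requires passing the tokenizer to generate().
outputs = model.generate(
    **model_inputs,
    max_new_tokens=512,
    stop_strings=["<sep>"],
    tokenizer=tokenizer,
)
```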

For multi-round conversations, it is recommended to drop the reasoning traces from previous assistant rounds to leave the model more tokens to think with. If you are using HF or vLLM, the built-in chat_template will handle prompt formatting automatically. A sketch of dropping earlier traces is shown below.
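
The sketch below assumes, as in Reka Flash 3, that the reasoning trace is wrapped in `<reasoning> ... </reasoning>` tags; verify the delimiters against this model's actual outputs before relying on it:

```python
import re

def strip_reasoning(assistant_text):
    # Remove the reasoning trace, keeping only the final answer. The
    # <reasoning>...</reasoning> delimiters are an assumption carried over
    # from Reka Flash 3; verify against this model's outputs.
    return re.sub(r"<reasoning>.*?</reasoning>", "", assistant_text, flags=re.DOTALL).strip()

previous_response = "<reasoning>2 and 2 are both integers ...</reasoning> The answer is 4."

history = [
    {"role": "human", "content": "What is 2 + 2?"},
    {"role": "assistant", "content": strip_reasoning(previous_response)},
    {"role": "human", "content": "And multiplied by 3?"},
]
```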

### Language Support

This model is primarily built for English and should be considered an English-only model. However, it can understand and converse in other languages to some degree.