Overview
Be.FM 70B is an open foundation model for human behavior modeling, built on Llama 3.1 70B and fine-tuned on diverse behavioral datasets. It is designed to enhance the understanding and prediction of human decision-making.
Paper: Be.FM: Open Foundation Models for Human Behavior
Usage
Be.FM 70B is fine-tuned with behavioral data using an Alpaca-style instruction format. For best performance, prompts should include structured instructions with relevant behavioral context (e.g., demographics, survey/experiment setup).
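As a minimal sketch of such a prompt, the helper below composes a system context and an instruction in the Instruction/Response layout; the demographic and experiment details here are illustrative examples, not an official Be.FM schema.

```python
# Hypothetical prompt builder illustrating the Alpaca-style layout.
# The demographic/setup wording is an example, not a required template.
def build_prompt(system_prompt: str, instruction: str) -> str:
    """Compose a structured prompt: behavioral context, then instruction."""
    return f"{system_prompt}\n\nInstruction: {instruction}\n\nResponse:"

prompt = build_prompt(
    "You are a participant in a behavioral economics experiment. "
    "Demographics: age 34, college-educated.",
    "You are the proposer in a one-shot ultimatum game with a $10 stake. "
    "How much do you offer the responder?",
)
print(prompt)
```

The resulting string can be passed directly to the tokenizer in the inference code below.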
You can use the model with Hugging Face Transformers and PEFT, with 8-bit quantization, on at least four GPUs with 40 GB+ memory each. For optimal performance, we recommend four A100-class GPUs with bf16 or fp16 support.
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model_id = "meta-llama/Llama-3.1-70B-Instruct"
peft_model_id = "befm/Be.FM-70B"

# Load the base model in 8-bit, sharding it across all available GPUs
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(base_model_id, load_in_8bit=True, device_map="auto")

# Attach the Be.FM adapter weights on top of the base model
model = PeftModel.from_pretrained(model, peft_model_id)
Inference
For inference, you may use the following demo function:
import torch

def generate_response(model, tokenizer, system_prompt, user_prompt):
    # Compose the Alpaca-style prompt: system context, then instruction
    user = f"Instruction: {user_prompt}\n\nResponse:"
    full_prompt = f"{system_prompt}\n\n{user}"
    inputs = tokenizer(full_prompt, return_tensors="pt").to("cuda")
    with torch.no_grad():
        outputs = model.generate(**inputs, max_new_tokens=256)
    # Note: the decoded string contains the prompt as well as the response
    res = tokenizer.decode(outputs[0], skip_special_tokens=True)
    return res
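Because the decoded string echoes the full prompt, you may want to keep only the model's answer. A small helper can do this (a sketch; the decoded text below is an illustrative stand-in, not an actual Be.FM generation):

```python
def extract_response(decoded: str, full_prompt: str) -> str:
    """Return only the text generated after the echoed prompt."""
    if decoded.startswith(full_prompt):
        return decoded[len(full_prompt):].strip()
    # Fallback: split on the final "Response:" marker
    return decoded.rsplit("Response:", 1)[-1].strip()

# Illustrative decoded output (not an actual Be.FM generation):
full_prompt = "You are a study participant.\n\nInstruction: Offer a split.\n\nResponse:"
decoded = full_prompt + " I offer $4 and keep $6."
print(extract_response(decoded, full_prompt))
```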
More examples can be found in the appendix of our paper.
Citation, Terms of Use, and Feedback
@article{xie2025befm,
  title={Be.FM: Open Foundation Models for Human Behavior},
  author={Xie, Yutong and Li, Zhuoheng and Wang, Xiyuan and Pan, Yijun and Liu, Qijia and Cui, Xingzhi and Lo, Kuang-Yu and Gao, Ruoyi and Zhang, Xingjian and Huang, Jin and others},
  journal={arXiv preprint arXiv:2505.23058},
  year={2025}
}
By using this model, you agree to Be.FM Terms of Use.
We welcome your feedback on model performance as you apply Be.FM to your work. Please share your feedback via the form.