---
license: cc-by-4.0
tags:
- health
- physical-activity
- behavior-change
- stage-of-change
- llm
- transformers
datasets:
- SriyaM/MHC_LLM_Preference_Data
language:
- en
model-index:
- name: MHC-Coach
results:
- task:
name: Health Coaching
type: health-coaching
dataset:
name: MyHeartCounts Coaching Data
type: SriyaM/MHC_LLM_Preference_Data
metrics:
- name: Human Preference Rate
type: preference
value: 68
- name: Expert Effectiveness Score
type: likert
value: 4.4
- name: Stage Match Score
type: likert
value: 4.1
base_model:
- meta-llama/Meta-Llama-3-70B
pipeline_tag: text-generation
---
# MHC-Coach: A Behaviorally Informed Health Coaching Language Model
**MHC-Coach** is a large language model fine-tuned on behavioral science principles to deliver stage-specific health coaching messages aimed at increasing physical activity. Built on Llama 3 70B, it integrates the Transtheoretical Model of behavior change with cardiovascular health content to provide motivational, personalized coaching aligned with each user's behavioral readiness.
## Highlights
- Fine-tuned on 3,000+ human-expert health coaching messages
- Embeds psychological theory (Transtheoretical Model) for stage-matched messaging (see the sketch after this list)
- Evaluated on 632 users in the MyHeartCounts app
- Preferred over expert-written messages in 68% of matched-stage comparisons
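
The model expects prompts that name the user's Transtheoretical Model stage and briefly describe it, as in the Usage example below. Here is a minimal sketch of how such prompts can be composed; the `build_prompt` helper is illustrative (not code from the released pipeline), and the stage summaries paraphrase standard TTM definitions:

```python
# Illustrative helper for composing stage-matched prompts.
# Stage summaries paraphrase standard Transtheoretical Model definitions.
STAGE_DESCRIPTIONS = {
    "Precontemplation": "people do not intend to take action within the next 6 months",
    "Contemplation": "people intend to start the healthy behavior within the next 6 months",
    "Preparation": "people are ready to take action within the next 30 days",
    "Action": "people have changed their behavior within the last 6 months",
    "Maintenance": "people have sustained their behavior change for more than 6 months",
}

def build_prompt(stage: str, word_limit: int = 20) -> str:
    """Compose a coaching prompt matched to a user's stage of change."""
    return (
        f"Write a {word_limit}-word notification to motivate someone in the "
        f"{stage} stage of change to increase their exercise levels. "
        f"In this stage, {STAGE_DESCRIPTIONS[stage]}."
    )

print(build_prompt("Contemplation"))
```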
## Reference
For more details, see the preprint:
> [Fine-tuning Large Language Models in Behavioral Psychology for Scalable Physical Activity Coaching](https://doi.org/10.1101/2025.02.19.25322559)
## Usage
You can load the model using Hugging Face's `transformers` library (loading in bfloat16 with `device_map="auto"` also requires `accelerate`):
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SriyaM/MHC-Coach")
# 70B parameters: load the weights in bfloat16 and shard across available GPUs.
model = AutoModelForCausalLM.from_pretrained(
    "SriyaM/MHC-Coach", torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "Write a 20-word notification to motivate someone in the Precontemplation stage of change to increase their exercise levels. In this stage, people do not intend to take action in the foreseeable future (defined as within the next 6 months). People are often unaware that their behavior is problematic or produces negative consequences."
# Move the inputs to the same device as the model before generating.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
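Greedy decoding (the default above) returns the same message for a given prompt. For more varied wording you can enable sampling; the settings below are illustrative defaults, not the ones used in the paper's evaluation:

```python
# Illustrative nucleus-sampling settings for more varied coaching messages.
outputs = model.generate(
    **inputs,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```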
## Citation
If you use this model, please cite:
```bibtex
@article{mantena2025mhccoach,
  title={Fine-tuning Large Language Models in Behavioral Psychology for Scalable Physical Activity Coaching},
  author={Mantena, Sriya and Johnson, Anders and Oppezzo, Marily and Schuetz, Narayan and Tolas, Alexander and others},
  journal={medRxiv},
  year={2025},
  doi={10.1101/2025.02.19.25322559}
}
```
## License
[CC-BY 4.0](http://creativecommons.org/licenses/by/4.0/)