---
license: mit
language:
- en
base_model:
- microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract-fulltext
pipeline_tag: text-classification
tags:
- medical
---

<div align="center">
<h1>
    Disentangling Reasoning and Knowledge in Medical Large Language Models
</h1>
</div>

We provide our reasoning vs. knowledge classifier, which can be loaded and run as shown below:
```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "zou-lab/BioMedBERT-Knowledge-vs-Reasoning"
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model.eval()  # inference mode: disables dropout

question = "What is the full form of RBC?"
threshold = 0.75  # decision threshold on the positive-class probability

# Tokenize, truncating to BERT's 512-token context limit
inputs = tokenizer(question, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    outputs = model(**inputs)

# Convert logits to class probabilities and threshold the positive class
logits = outputs.logits
probs = torch.nn.functional.softmax(logits, dim=1).cpu().numpy()
positive_prob = probs[:, 1]
prediction = (positive_prob >= threshold).astype(int)
```
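For reference, here is a minimal batched-inference sketch built on the same objects. The example questions and the assumption that class index 1 corresponds to the "reasoning" label are illustrative only; consult `model.config.id2label` for the model's actual label mapping.

```python
# Minimal batched sketch (assumes model, tokenizer, and threshold from above).
# The questions and the index-1 == "reasoning" mapping are assumptions;
# check model.config.id2label for the authoritative labels.
questions = [
    "What is the full form of RBC?",
    "A patient on warfarin starts a new antibiotic; why might their INR rise?",
]

batch = tokenizer(questions, return_tensors="pt", padding=True,
                  truncation=True, max_length=512)
with torch.no_grad():
    batch_probs = torch.softmax(model(**batch).logits, dim=1)

for q, p in zip(questions, batch_probs[:, 1].tolist()):
    label = "reasoning" if p >= threshold else "knowledge"  # assumed order
    print(f"{p:.3f}  {label}  {q}")
```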

## 📖 Citation

```bibtex
@article{thapa2025disentangling,
  title={Disentangling Reasoning and Knowledge in Medical Large Language Models},
  author={Thapa, Rahul and Wu, Qingyang and Wu, Kevin and Zhang, Harrison and Zhang, Angela and Wu, Eric and Ye, Haotian and Bedi, Suhana and Aresh, Nevin and Boen, Joseph and Reddy, Shriya and Athiwaratkun, Ben and Song, Shuaiwen Leon and Zou, James},
  journal={arXiv preprint arXiv:2505.11462},
  year={2025},
  url={https://arxiv.org/abs/2505.11462}
}
```