[RESOLVED] Model is not outputting the <think> token at the beginning.

#37
by bsvaz - opened

Neither this model nor the distill 1.5B model outputs the opening thinking token <think> before starting to think, but they do output the closing token </think>.

Edit 1: I found the solution in the model card; here it is:
"Additionally, we have observed that the DeepSeek-R1 series models tend to bypass thinking pattern (i.e., outputting <think>\n\n</think>) when responding to certain queries, which can adversely affect the model's performance. To ensure that the model engages in thorough reasoning, we recommend enforcing the model to initiate its response with <think>\n at the beginning of every output."

But I still have a concern about this: Is there a way to enforce this while using the HF Inference API?

Edit 2: I found the solution:  

If you want to interact with the model the same way you would with model.generate(), you should make a direct HTTP request to the Inference API like this:

```
import requests

# hf_api_key (your HF token) and prompt are assumed to be defined already
API_URL = "https://api-inference.huggingface.co/models/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B"
headers = {"Authorization": f"Bearer {hf_api_key}"}

# Construct the EXACT input string, ending with '<think>\n' so the model is
# forced to start its response inside the thinking block
formatted_input = '<|begin▁of▁sentence|>' + '<|User|>' + prompt + '<|Assistant|>' + '<think>\n'

payload = {
    "inputs": formatted_input,
    "parameters": {"do_sample": False, "temperature": 0.6},
}
response = requests.post(API_URL, headers=headers, json=payload)

print(response.json())
```
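
To read the completion itself, here is a minimal sketch. It assumes the endpoint returns the usual text-generation shape (a list with one `generated_text` entry); the exact payload can differ depending on how the model is served, so treat this as illustrative:

```
result = response.json()

# Typical text-generation responses look like [{"generated_text": "..."}];
# anything else (e.g. {"error": ...}) is printed as-is for debugging
if isinstance(result, list) and result and "generated_text" in result[0]:
    print(result[0]["generated_text"])
else:
    print(result)
```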

Hi. Thanks for the message. Can you please show how you pass the token to the model?

Is this way correct?
```
chat_completion = client.chat.completions.create(
    messages=[
        {"role": "assistant", "content": "<think>\n"},
        {"role": "user", "content": "what is happiness?"},
    ],
    model="default",
    temperature=0.35,
    top_p=0.9,
)
print(chat_completion.choices[0].message.content)
```

With this, you are using the Inference API's chat completions endpoint. The model itself works by completing a single string, so your messages list is processed into one string (which also includes the special tokens).
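
To see exactly what string your messages turn into, you can render the chat template locally. A minimal sketch, assuming you have the tokenizer of the 1.5B distill downloaded:

```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B")
messages = [{"role": "user", "content": "what is happiness?"}]

# Render the messages into the single prompt string the model actually sees
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)  # one string containing the special tokens, e.g. <|User|> ... <|Assistant|>
```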
What I did was download the model and run inference locally; here is the code:

```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to(device)

# Build the prompt by hand (prompt is assumed defined), appending '<think>\n'
# so the model starts its response inside the thinking block
inputs = tokenizer('<|User|>' + prompt + '<|Assistant|>' + '<think>\n', return_tensors="pt")

# Generate text
model.eval()
with torch.no_grad():
    outputs = model.generate(**inputs.to(device), max_new_tokens=500)

# Decode the generated text (keep special tokens so <think> stays visible)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=False)
```
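
If you only need the final answer, a simple follow-up (a sketch, assuming the closing `</think>` tag appears in the output, as it did in my tests) is to split on it:

```
# Everything before '</think>' is the reasoning; everything after is the answer
reasoning, _, answer = generated_text.partition("</think>")
print(answer.strip())
```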

I tried the way you described, but I'm not sure it took the <think> into account, because the token was not output in the message.

Has this issue been resolved?

Yes, I'll edit the post with the solution so other people with the same question can find it.

bsvaz changed discussion title from Model is not outputting the <think> token at the beginning. to [RESOLVED] Model is not outputting the <think> token at the beginning.
