---
library_name: transformers
license: mit
language:
  - en
base_model:
  - google/gemma-2-2b-it
pipeline_tag: text-generation
---

# Self-Training Elicits Concise Reasoning in Large Language Models

This model is a fine-tuned version of google/gemma-2-2b-it, trained with self-training to generate more concise reasoning paths on reasoning tasks while maintaining accuracy.

## Model Details
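
The model name suggests the fine-tuning data was produced via few-shot sampling and best-of-N selection. As a rough illustration of the core self-training idea (not this repository's actual code), one can sample several reasoning paths per question and keep the shortest correct one as a fine-tuning target; `sample_paths` and `is_correct` below are hypothetical stand-ins:

```python
# Hypothetical sketch of best-of-N shortest-correct data selection.
# `sample_paths` and `is_correct` are illustrative stand-ins, not part of this repo.
def select_concise_path(question, answer, sample_paths, is_correct, n=8):
    """Sample n reasoning paths and keep the shortest correct one."""
    candidates = sample_paths(question, n=n)            # n sampled reasoning paths
    correct = [p for p in candidates if is_correct(p, answer)]
    if not correct:
        return None                                     # no usable path for this question
    return min(correct, key=len)                        # shortest correct path becomes a training target
```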

## How to Get Started with the Model

Use the code below to get started with the model.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "tergel/gemma-2-2b-it-math-fs-gpt4o-bon"
device = "cuda" if torch.cuda.is_available() else "cpu"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map=device, torch_dtype=torch.bfloat16)

question = "If $f(x) = \\frac{3x-2}{x-2}$, what is the value of $f(-2) + f(-1) + f(0)$? Express your answer as a common fraction."

inputs = tokenizer(question, return_tensors="pt").to(device)
input_length = inputs["input_ids"].shape[1]  # number of prompt tokens

outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][input_length:], skip_special_tokens=True)
print(response)
```
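
Because the model is trained for concise reasoning, it can be informative to check how many tokens a response used. A minimal way to do this with the variables above (illustrative only, not the paper's evaluation protocol):

```python
# Rough proxy for reasoning length: count only the newly generated tokens.
num_generated = outputs[0].shape[0] - input_length
print(f"Generated {num_generated} tokens")
```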

For more detailed information about training methods, evaluation results, limitations, and technical specifications, please refer to [our paper](https://arxiv.org/abs/2502.20122).

## Citation

```bibtex
@article{munkhbat2025self,
  title={Self-Training Elicits Concise Reasoning in Large Language Models},
  author={Munkhbat, Tergel and Ho, Namgyu and Kim, Seohyun and Yang, Yongjin and Kim, Yujin and Yun, Se-Young},
  journal={arXiv preprint arXiv:2502.20122},
  year={2025}
}
```