Keep ReDistilling with more data?

#2
by MiaoCata - opened

Good job on making this re-distilled model! Compared to the original model, its reasoning capacity has improved a lot. In some cases it even does better than DeepScaleR, for example on the question below:
Xiao Ying and Xiao Dong were doing their homework together. While solving a quadratic equation whose quadratic coefficient is 1, Xiao Ying made a mistake in the constant term during simplification and obtained the roots 6 and 1; Xiao Dong made an error in the coefficient of the linear term during simplification and obtained the roots -2 and -5. What was the original equation?
Answer: x² - 7x + 10 = 0
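
A quick sanity check via Vieta's formulas (my own working, assuming the monic form x² + bx + c = 0):

```latex
% Xiao Ying got the constant term wrong, so her linear coefficient is correct:
%   roots 6 and 1   =>  6 + 1 = 7       =>  b = -7
% Xiao Dong got the linear coefficient wrong, so his constant term is correct:
%   roots -2 and -5 =>  (-2)(-5) = 10   =>  c = 10
\[
  x^2 - (6 + 1)\,x + (-2)(-5) = x^2 - 7x + 10 = 0
\]
% Check: x^2 - 7x + 10 = (x - 2)(x - 5), so the true roots are 2 and 5.
```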

Your model solved the problem correctly, while DeepScaleR got it wrong.
So, could you fine-tune the model on more data to make it even stronger at reasoning and math problems? Thanks a lot!
