arXiv:2206.14858

Solving Quantitative Reasoning Problems with Language Models

Published on Jun 29, 2022
Authors:
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, Vedant Misra

Abstract

Language models have achieved remarkable performance on a wide range of tasks that require natural language understanding. Nevertheless, state-of-the-art models have generally struggled with tasks that require quantitative reasoning, such as solving mathematics, science, and engineering problems at the college level. To help close this gap, we introduce Minerva, a large language model pretrained on general natural language data and further trained on technical content. The model achieves state-of-the-art performance on technical benchmarks without the use of external tools. We also evaluate our model on over two hundred undergraduate-level problems in physics, biology, chemistry, economics, and other sciences that require quantitative reasoning, and find that the model can correctly answer nearly a third of them.
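The evaluation the abstract describes amounts to prompting the model with worked examples, generating a step-by-step solution, and checking the extracted final answer against the target. Below is a minimal illustrative sketch of that kind of few-shot, exact-match evaluation loop. It is not the authors' code: Minerva was not publicly released, so the model name is a stand-in, and the prompt format, `Final Answer:` marker, and extraction regex are all assumptions made for the example.

```python
# Hypothetical sketch of a few-shot quantitative-reasoning evaluation:
# prompt with a worked example, generate a solution, extract the final
# answer, and score by exact match. The model name is a placeholder,
# since Minerva itself is not publicly available.
import re
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # placeholder model

# One worked example shown to the model before each new problem.
FEW_SHOT_PROMPT = (
    "Problem: What is 12 * 7?\n"
    "Solution: 12 * 7 = 84. Final Answer: 84\n\n"
)

def answer(problem: str) -> str:
    """Generate a step-by-step solution and extract the final answer."""
    prompt = FEW_SHOT_PROMPT + f"Problem: {problem}\nSolution:"
    out = generator(prompt, max_new_tokens=128, do_sample=False)
    completion = out[0]["generated_text"][len(prompt):]
    match = re.search(r"Final Answer:\s*([^\n]+)", completion)
    return match.group(1).strip() if match else ""

def exact_match_accuracy(problems, targets) -> float:
    """Fraction of problems whose extracted answer matches the target."""
    correct = sum(answer(p) == t for p, t in zip(problems, targets))
    return correct / len(problems)
```

Under this framing, the abstract's "nearly a third" corresponds to an exact-match accuracy of roughly 0.33 on the two-hundred-plus undergraduate problems.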
