OpenAI just announced progress on solving math problems using process supervision during training (see https://openai.com/research/improving-mathematical-reasoning-with-process-supervision).
The data at https://github.com/openai/prm800k/tree/main comes from the MATH dataset (https://github.com/hendrycks/math, the benchmark introduced in https://arxiv.org/pdf/2103.03874.pdf), and some examples in that data come from Art of Problem Solving problems such as https://artofproblemsolving.com/wiki/index.php/2015_AIME_II_Problems/Problem_6
AoPS describes itself as "Math texts, online classes, and more for students in grades 5-12."
The problems are constrained and feel very artificial. See, for example, https://artofproblemsolving.com/wiki/index.php/Mock_AIME_1_Pre_2005_Problems/Problem_4
The training data doesn't contain inference rules, so the output from the LLM doesn't contain them either. As a consequence, the LLM's output cannot be confirmed by a Computer Algebra System (CAS); the output text has to be validated by a human. Because LLMs hallucinate answers that sound plausible, checking each step remains vital.
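To make this concrete, here's a minimal sketch (using SymPy as the CAS) of what a machine-checkable step could look like. The step format and the rule "add a term to both sides" are my own invention for illustration; nothing like this appears in the PRM800K data.

```python
import sympy as sp

# Minimal sketch: if each step carried an explicit inference rule
# (here, "add a term to both sides"), a CAS could replay the rule
# and confirm that the claimed next step actually follows.
x = sp.symbols("x")

step1 = sp.Eq(x**2 - 1, 0)        # previous step in the derivation
step2_claimed = sp.Eq(x**2, 1)    # next step claimed by the LLM

def add_to_both_sides(eq, term):
    """Inference rule: add `term` to both sides of equation `eq`."""
    return sp.Eq(eq.lhs + term, eq.rhs + term)

derived = add_to_both_sides(step1, 1)

# The step is valid only if the derived equation matches the claim.
assert sp.simplify(derived.lhs - step2_claimed.lhs) == 0
assert sp.simplify(derived.rhs - step2_claimed.rhs) == 0
print("step 2 follows from step 1 via 'add 1 to both sides'")
```

With that structure in place, each step of a multi-step solution becomes a mechanically checkable claim rather than prose a human has to eyeball.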
The ability to resolve what distinct variables denote across all of Mathematical Physics (for example, whether E means energy or the electric field) is beyond the scope of the training data.
On a positive note, if the Physics Derivation Graph content existed as training data, I now think an LLM-based approach could be used to make progress in Mathematical Physics.
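For illustration, here's a hypothetical sketch of what a single derivation step might look like as structured training data. The field names are my own invention, not the actual Physics Derivation Graph schema; the check just replays the named rule with SymPy.

```python
import sympy as sp

# Hypothetical PDG-style record: each edge in the derivation graph
# names the inference rule connecting one expression to the next.
# The field names here are invented for illustration.
step = {
    "inference_rule": "divide both sides by",
    "rule_argument": "m",
    "input_expression": "Eq(F, m*a)",   # Newton's second law
    "output_expression": "Eq(F/m, a)",
}

inp = sp.sympify(step["input_expression"])
out = sp.sympify(step["output_expression"])
arg = sp.sympify(step["rule_argument"])

# Replay the named rule and confirm the recorded output expression.
derived = sp.Eq(inp.lhs / arg, inp.rhs / arg)
assert sp.simplify(derived.lhs - out.lhs) == 0
assert sp.simplify(derived.rhs - out.rhs) == 0
```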