
From Human Days to Machine Seconds: Automatically Answering and Generating Machine Learning Final Exams (2206.05442v7)

Published 11 Jun 2022 in cs.LG

Abstract: A final exam in machine learning at a top institution such as MIT, Harvard, or Cornell typically takes faculty days to write, and students hours to solve. We demonstrate that LLMs pass machine learning finals at a human level, on finals available online after the models were trained, and automatically generate new human-quality final exam questions in seconds. Previous work has developed program synthesis and few-shot learning methods to solve university-level problem set questions in mathematics and STEM courses. In this work, we develop and compare methods that solve final exams, which differ from problem sets in several ways: the questions are longer, have multiple parts, are more complicated, and span a broader set of topics. We curate a dataset and benchmark of questions from machine learning final exams available online and code for answering these questions and generating new questions. We show how to generate new questions from other questions and course notes. For reproducibility and future research on this final exam benchmark, we use automatic checkers for multiple-choice, numeric, and questions with expression answers. We perform ablation studies comparing zero-shot learning with few-shot learning and chain-of-thought prompting using GPT-3, OPT, Codex, and ChatGPT across machine learning topics and find that few-shot learning methods perform best. We highlight the transformative potential of LLMs to streamline the writing and solution of large-scale assessments, significantly reducing the workload from human days to mere machine seconds. Our results suggest that rather than banning LLMs such as ChatGPT in class, instructors should teach students to harness them by asking students meta-questions about correctness, completeness, and originality of the responses generated, encouraging critical thinking in academic studies.


Summary

  • The paper demonstrates that large language models can both answer and generate machine learning exam content at performance levels comparable to human experts.
  • It introduces a curated dataset from prestigious institutions and benchmarks various prompting methods, revealing few-shot learning's superior performance.
  • The findings highlight LLMs' potential to streamline exam creation, reduce faculty workload, and transform educational assessment.

Automatically Answering and Generating Machine Learning Final Exams: An Exploration with LLMs

Introduction

As the field of ML continues to evolve, so too do the methods for educating the next generation of ML professionals. This paper presents a systematic evaluation of LLMs on university-level machine learning final exams. It introduces a dataset comprising ML final exam questions from prestigious institutions and benchmarks the ability of several LLMs both to solve these exams and to generate new exam content. The findings show that these models perform at a human level on existing finals and generate new exam questions that evaluators find difficult to distinguish from those written by humans.

Dataset and Benchmarks

At the core of this work is a curated dataset of final exam questions from introductory ML courses at MIT, Harvard, and Cornell, spanning a broad range of topics in the field. The dataset comprises 646 question parts across 149 questions, reflecting the breadth of material taught in these courses.
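
The abstract notes that the benchmark uses automatic checkers for multiple-choice, numeric, and expression answers to keep evaluation reproducible. The sketch below illustrates one way such a question record and checker could be structured; the field names, function names, and numeric tolerance are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a benchmark question record and automatic answer
# checker for the three answer types mentioned in the paper. Schema and
# tolerance are assumptions for illustration only.
from dataclasses import dataclass

import sympy


@dataclass
class QuestionPart:
    course: str            # e.g. "MIT 6.036"
    semester: str          # e.g. "Spring 2022"
    topic: str             # e.g. "Neural Networks"
    prompt: str            # question text
    answer_type: str       # "multiple_choice" | "numeric" | "expression"
    reference_answer: str


def check_answer(part: QuestionPart, model_answer: str) -> bool:
    """Automatically grade a model's answer against the reference answer."""
    if part.answer_type == "multiple_choice":
        return model_answer.strip().lower() == part.reference_answer.strip().lower()
    if part.answer_type == "numeric":
        return abs(float(model_answer) - float(part.reference_answer)) < 1e-6
    if part.answer_type == "expression":
        # Symbolic equivalence, e.g. "2*x + x" matches reference "3*x".
        diff = sympy.simplify(
            sympy.sympify(model_answer) - sympy.sympify(part.reference_answer)
        )
        return diff == 0
    raise ValueError(f"Unknown answer type: {part.answer_type}")
```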

The benchmarking effort compares a variety of LLMs, including GPT-3, Codex, OPT, and ChatGPT, under different prompting schemes: zero-shot learning, few-shot learning, and chain-of-thought prompting. The results are consistent: few-shot learning methods outperform the alternatives across semesters and topics. For example, on the non-image-based questions from MIT's Spring 2022 exam, which appeared online only after the models were trained, few-shot learning with GPT-3 performed at a human level.
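
The prompting schemes compared here amount to different templates wrapped around each exam question. The sketch below shows one plausible way to construct zero-shot, few-shot, and zero-shot chain-of-thought prompts; the template wording, model name, and commented API call are assumptions for illustration, not the paper's exact setup.

```python
# Minimal sketch of the prompting schemes compared in the paper.
# Template wording and API parameters are illustrative assumptions.
from typing import List, Tuple


def zero_shot_prompt(question: str) -> str:
    """Ask the question directly, with no solved examples."""
    return f"Q: {question}\nA:"


def few_shot_prompt(examples: List[Tuple[str, str]], question: str) -> str:
    """Prepend solved (question, answer) pairs from similar exam questions."""
    shots = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{shots}\n\nQ: {question}\nA:"


def chain_of_thought_prompt(question: str) -> str:
    """Zero-shot chain-of-thought trigger phrase (Kojima et al., 2022)."""
    return f"Q: {question}\nA: Let's think step by step."


# Usage sketch (pre-1.0 OpenAI Python client; model name is an assumption):
# import openai
# completion = openai.Completion.create(
#     model="text-davinci-002",
#     prompt=few_shot_prompt(solved_examples, new_question),
#     temperature=0,
#     max_tokens=512,
# )
# answer = completion.choices[0].text
```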

Implications and Future Directions

The successful application of LLMs in solving and generating ML exam content has several immediate implications:

  • Educational Efficiency: The ability of LLMs to generate new, quality exam questions in seconds can significantly reduce the workload on faculty and teaching assistants, thereby streamlining the examination preparation process.
  • Learning and Assessment: Instead of outright banning LLMs in academic settings due to concerns over cheating, this paper advocates for a constructive approach. By integrating LLMs into the learning process, educators can develop meta-questions that help students refine their critical thinking and problem-solving skills.
  • Future of AI in Education: The findings hint at the transformative potential of AI in educational settings. This could redefine not just how content is created but also how students interact with and learn from this content.

A notable limitation of this paper is its focus on text and mathematical notation-based questions, excluding those that require visual aids for solving. Future research could look into overcoming this limitation by incorporating multimodal LLMs capable of understanding and generating image-based content.

Conclusion

This research marks a significant stride in understanding the capacity of LLMs to support and enhance learning in complex fields like machine learning. The ability of these models to automate the time-consuming task of generating exam content—and to do so at a level comparable to human experts—opens up new avenues for educational tools and methodologies. Moving forward, it will be crucial to explore how these technologies can be integrated responsibly into educational frameworks to augment the learning experience without diminishing the value of human instruction and insight.