TaskEval: Assessing Difficulty of Code Generation Tasks for Large Language Models (2407.21227v2)
Abstract: LLMs excel in code-related tasks like code generation, but benchmark evaluations often overlook task characteristics such as difficulty. Moreover, benchmarks are usually built from tasks described with a single prompt, even though prompt formulation has a profound impact on the outcome. This paper introduces a generalist approach, TaskEval, a framework that uses diverse prompts and Item Response Theory (IRT) to efficiently assess LLMs' capabilities and benchmark task characteristics, improving the understanding of their performance. Using two code generation benchmarks, HumanEval+ and ClassEval, and five code generation LLMs, we show that TaskEval can characterize task properties. Using topic analysis, we identify and analyze tasks across 17 and 21 topics, respectively, in the two benchmarks. We also cross-analyze task characteristics with the programming constructs (e.g., variable assignments, conditions) used by LLMs, highlighting patterns related to task difficulty. Finally, we compare task difficulty assessments made by human annotators and by LLMs. Orthogonal to current benchmarking efforts, TaskEval can assist researchers and practitioners in making better assessments of LLMs. Task characteristics can be used to identify shortcomings in existing benchmarks, which could in turn guide the generation of additional, related tasks for evaluating or improving LLMs.
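To make the IRT-based idea concrete, the sketch below fits a two-parameter-logistic (2PL) model to a binary pass/fail matrix (LLM/prompt configurations as respondents, benchmark tasks as items) and recovers per-task difficulty and discrimination. The 2PL choice, the plain NumPy gradient-ascent fit, and the function and variable names (`fit_2pl`, `outcomes`, etc.) are illustrative assumptions, not the paper's exact implementation.

```python
# Hedged sketch: estimating task difficulty from pass/fail outcomes with a 2PL IRT model.
# P(pass) = sigmoid(a_j * (theta_i - b_j)), where theta_i is respondent ability,
# b_j is task difficulty, and a_j is task discrimination. Assumed setup, for illustration only.
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def fit_2pl(outcomes, n_iter=2000, lr=0.05):
    """outcomes: (n_respondents, n_tasks) matrix of 0/1 pass results."""
    n_resp, n_tasks = outcomes.shape
    theta = np.zeros(n_resp)   # respondent (LLM/prompt) ability
    b = np.zeros(n_tasks)      # task difficulty
    a = np.ones(n_tasks)       # task discrimination
    for _ in range(n_iter):
        p = sigmoid(a * (theta[:, None] - b))   # predicted pass probability
        err = outcomes - p                      # gradient of the Bernoulli log-likelihood w.r.t. the logit
        theta += lr * (err * a).sum(axis=1) / n_tasks
        b -= lr * (err * a).sum(axis=0) / n_resp
        a += lr * (err * (theta[:, None] - b)).sum(axis=0) / n_resp
        theta -= theta.mean()                   # pin the ability scale for identifiability
    return theta, b, a


# Usage with synthetic data: 5 LLM/prompt configurations x 8 tasks of increasing difficulty.
rng = np.random.default_rng(0)
outcomes = (rng.random((5, 8)) > np.linspace(0.2, 0.8, 8)).astype(float)
ability, difficulty, discrimination = fit_2pl(outcomes)
print("estimated task difficulties:", np.round(difficulty, 2))
```

In practice, each row would hold the pass/fail results of one LLM under one prompt formulation, so the diversity of prompts feeds directly into the difficulty estimates rather than being averaged away beforehand.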