On the Reliability and Explainability of Language Models for Program Generation (2302.09587v3)
Abstract: Recent studies have adopted pre-trained LLMs, such as CodeT5 and CodeGPT, for automated program generation tasks like code generation, repair, and translation. Numerous LLM-based approaches have been proposed and evaluated on various benchmark datasets, demonstrating promising performance. However, there is still uncertainty about the reliability of these models, particularly their ability to consistently transform code sequences in realistic settings. This raises the question: are these techniques sufficiently trustworthy for automated program generation? Further research is therefore needed to understand model logic and to assess reliability and explainability. To bridge these research gaps, we conduct a thorough empirical study of eight popular LLMs on five representative datasets to determine the capabilities and limitations of automated program generation approaches. We further employ advanced explainable AI approaches to highlight the tokens that contribute most significantly to the code transformation. We discover that state-of-the-art approaches suffer from inappropriate performance evaluation stemming from severe data duplication, causing over-optimistic results. Our explainability analysis reveals that, across various experimental scenarios, LLMs can recognize code grammar and structural information, but they exhibit limited robustness to changes in input sequences. Overall, more rigorous evaluation approaches and benchmarks are critical to enhance the reliability and explainability of automated program generation moving forward. Our findings provide important guidelines for this goal.
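The abstract's finding that data duplication between training and test splits inflates benchmark results can be illustrated with a minimal sketch. This is not the paper's actual tooling; the `normalize` and `duplication_rate` helpers below are hypothetical names, and the example only checks exact matches after whitespace normalization (real deduplication may also consider near-duplicates).

```python
# Minimal sketch (assumed helpers, not the paper's tooling) of measuring
# train/test duplication in a program-generation benchmark. Exact duplicates
# leaking from train into test can make evaluation scores over-optimistic.

def normalize(code: str) -> str:
    """Collapse whitespace so trivially reformatted snippets compare equal."""
    return " ".join(code.split())

def duplication_rate(train: list[str], test: list[str]) -> float:
    """Fraction of test samples whose normalized form also appears in train."""
    train_set = {normalize(s) for s in train}
    dupes = sum(1 for s in test if normalize(s) in train_set)
    return dupes / len(test) if test else 0.0

train = ["def add(a, b):\n    return a + b", "def sub(a, b):\n    return a - b"]
test  = ["def add(a, b):  return a + b", "def mul(a, b):\n    return a * b"]
print(duplication_rate(train, test))  # 0.5 -> half the test set leaks from train
```

In this toy example, `test[0]` is a whitespace-only variant of a training sample, so half of the test set is counted as duplicated; a benchmark audit along these lines would flag such splits before reporting results.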
- Yue Liu (257 papers)
- Chakkrit Tantithamthavorn (49 papers)
- Yonghui Liu (9 papers)
- Li Li (657 papers)