
Execution-Based Evaluation of Natural Language to Bash and PowerShell for Incident Remediation (2405.06807v2)

Published 10 May 2024 in cs.CL and cs.SE

Abstract: Given recent advances in LLMs, code generation has attracted immense attention across many domains. To evaluate and select the best model for automatically remediating system incidents discovered by Application Performance Monitoring (APM) platforms, it is crucial to verify that the generated code is syntactically and semantically correct and that it executes as intended. However, current methods for evaluating the quality of LLM-generated code rely heavily on surface-form similarity metrics (e.g., BLEU, ROUGE, and exact/partial match), which have numerous limitations. In contrast, execution-based evaluation focuses on code functionality and does not constrain generation to any fixed solution. Nevertheless, designing and implementing such an execution-based evaluation platform is not a trivial task. Several works have created execution-based evaluation platforms for popular languages such as SQL, Python, and Java, but there have been few or no attempts for scripting languages such as Bash and PowerShell. In this paper, we present the first execution-based evaluation platform of this kind, with three test suites (125 handcrafted test cases in total) for evaluating LLM-generated Bash (both single-line commands and multi-line scripts) and PowerShell code. We benchmark seven closed- and open-source LLMs on our platform under different techniques (zero-shot vs. few-shot learning).
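The core idea of execution-based evaluation, as contrasted with surface-form metrics in the abstract, can be sketched as follows: run the candidate and reference commands in identical sandboxed environments and compare their observable behavior, so that syntactically different but functionally equivalent commands are scored as correct. This is a minimal illustrative sketch, not the paper's actual platform; the `execution_match` function, the stdout-plus-exit-code equivalence criterion, and the temp-directory sandbox are all assumptions made here for illustration.

```python
import subprocess
import tempfile

def execution_match(candidate: str, reference: str, setup: str = "") -> bool:
    """Run two Bash snippets in separate fresh temp directories (after an
    identical setup step) and compare (exit code, stdout). Matching behavior
    counts as functionally correct, regardless of surface-form similarity."""
    results = []
    for cmd in (candidate, reference):
        with tempfile.TemporaryDirectory() as workdir:
            if setup:
                subprocess.run(["bash", "-c", setup], cwd=workdir, check=True)
            proc = subprocess.run(
                ["bash", "-c", cmd],
                cwd=workdir, capture_output=True, text=True, timeout=10,
            )
            results.append((proc.returncode, proc.stdout))
    return results[0] == results[1]

# Two surface-wise different but functionally equivalent commands:
# an exact-match metric would score this pair as wrong; execution agrees.
print(execution_match("ls *.log", "printf '%s\\n' *.log",
                      setup="touch a.log b.log"))  # → True
```

A real harness would additionally isolate side effects (e.g., containers or VMs), check filesystem state rather than only stdout, and guard against destructive commands, which is part of why the abstract notes that building such a platform is not trivial.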

