Enhancing Source Code Classification Effectiveness via Prompt Learning Incorporating Knowledge Features (2401.05544v4)

Published 10 Jan 2024 in cs.CL and cs.AI

Abstract: Researchers have investigated the potential of leveraging pre-trained LLMs, such as CodeBERT, to enhance source code-related tasks. Previous methodologies have relied on CodeBERT's '[CLS]' token as the embedding representation of input sequences for task performance, necessitating additional neural network layers to enhance feature representation, which in turn increases computational expenses. These approaches have also failed to fully leverage the comprehensive knowledge inherent within the source code and its associated text, potentially limiting classification efficacy. We propose CodeClassPrompt, a text classification technique that harnesses prompt learning to extract rich knowledge associated with input sequences from pre-trained models, thereby eliminating the need for additional layers and lowering computational costs. By applying an attention mechanism, we synthesize multi-layered knowledge into task-specific features, enhancing classification accuracy. Our comprehensive experimentation across four distinct source code-related tasks reveals that CodeClassPrompt achieves competitive performance while significantly reducing computational overhead.
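The abstract's core mechanism — attention-pooling knowledge from multiple pre-trained model layers into a single task-specific feature, instead of stacking extra layers on a '[CLS]' embedding — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the random vectors stand in for per-layer hidden states that CodeBERT would produce at a prompt's mask position, and the learned query vector, layer count, and hidden size are hypothetical placeholders.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(layer_states, query):
    # layer_states: (num_layers, hidden) — one vector of "knowledge" per model layer
    # query: (hidden,) — a learned task query (random here, for illustration only)
    scores = layer_states @ query            # (num_layers,) relevance of each layer
    weights = softmax(scores)                # attention distribution over layers
    feature = weights @ layer_states         # fused task-specific feature, (hidden,)
    return feature, weights

rng = np.random.default_rng(0)
num_layers, hidden = 13, 8                   # e.g. 12 transformer layers + embedding layer
layer_states = rng.normal(size=(num_layers, hidden))  # stand-in for CodeBERT outputs
query = rng.normal(size=hidden)

feature, weights = attention_pool(layer_states, query)
assert feature.shape == (hidden,)            # single fused vector, no extra layers needed
assert abs(weights.sum() - 1.0) < 1e-9       # attention weights form a distribution
```

The fused `feature` would then feed a lightweight classification head; the claimed cost saving comes from this pooling replacing additional trained neural layers.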

