CCT-Code: Cross-Consistency Training for Multilingual Clone Detection and Code Search (2305.11626v2)
Abstract: We consider the well-known and important tasks of clone detection and information retrieval for source code. The most standard setup is to search for clones among code snippets written in the same programming language, but it is also useful to find code snippets with identical behaviour across different programming languages. Nevertheless, multi- and cross-lingual clone detection has received little attention in the literature. We present a novel training procedure, cross-consistency training (CCT), that leverages cross-lingual similarity and that we apply to train LLMs on source code in various programming languages. We show that this training is effective for both encoder- and decoder-based models. The trained encoder-based CCT-LM model achieves a new state of the art on POJ-104 (a monolingual C++ clone detection benchmark) with 96.73% MAP and on AdvTest (a monolingual Python code search benchmark) with 47.18% MRR. The decoder-based CCT-LM model shows comparable performance on these tasks. In addition, we formulate the multi- and cross-lingual clone detection problem and present XCD, a new benchmark dataset produced from CodeForces submissions.
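The abstract does not spell out the training objective, but a natural reading of "cross-consistency training leveraging cross-lingual similarity" is a contrastive objective that pulls together embeddings of functionally equivalent snippets written in different languages. The sketch below illustrates one such objective (a symmetric InfoNCE loss); the function name, temperature value, and loss form are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a cross-lingual consistency objective: embeddings of
# the same program in two languages (e.g., a C++ and a Python solution to
# the same problem) are pulled together, other snippets in the batch are
# pushed apart. All names here are hypothetical.
import torch
import torch.nn.functional as F

def cross_consistency_loss(emb_a: torch.Tensor,
                           emb_b: torch.Tensor,
                           temperature: float = 0.05) -> torch.Tensor:
    """emb_a[i] and emb_b[i] embed the same program in two languages."""
    emb_a = F.normalize(emb_a, dim=-1)
    emb_b = F.normalize(emb_b, dim=-1)
    # Similarity of every snippet in language A to every snippet in language B;
    # the matching pair sits on the diagonal.
    logits = emb_a @ emb_b.T / temperature
    targets = torch.arange(emb_a.size(0), device=emb_a.device)
    # Symmetric cross-entropy over both retrieval directions.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.T, targets)) / 2

# Usage: encode paired snippets with any code LM encoder, then
# loss = cross_consistency_loss(encoder(cpp_batch), encoder(py_batch))
```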
- Nikita Sorokin
- Dmitry Abulkhanov
- Sergey Nikolenko
- Valentin Malykh
- Anton Tikhonov
- Irina Piontkovskaya