Code search aims to retrieve code snippets from a large codebase that are semantically related to a natural language query. Deep learning is an effective approach to code search, and the quality of training data directly impacts the performance of deep-learning models. However, most existing deep-learning approaches to code search have overlooked the critical role of training data within batches, particularly hard negative samples, in optimizing model parameters. In this paper, we propose CMCS, a contrastive-metric learning method for code search based on vector-level sampling and augmentation. Specifically, we propose a sampling method based on the K-means algorithm to obtain hard negative samples, and a hardness-controllable augmentation method based on vector-level augmentation techniques to obtain positive and hard negative samples. We then design an optimization objective that combines metric learning and multimodal contrastive learning using the obtained positive and hard negative samples. Extensive experiments on the large-scale CodeSearchNet dataset with seven advanced code search models show that our proposed method significantly improves both the training efficiency and the search performance of code search models, which is conducive to advancing software engineering practice. © 2024 The Author(s).
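The abstract names three ingredients: K-means-based mining of hard negatives, hardness-controllable vector-level augmentation, and an objective combining metric learning with multimodal contrastive learning. The Python sketch below is one plausible instantiation of these ideas under assumptions of my own (cross-cluster nearest neighbors as hard negatives, linear interpolation as the augmentation, and an InfoNCE-plus-triplet loss); it is not the authors' implementation, and all function names and hyper-parameters here are illustrative.

```python
# Illustrative sketch only: the paper's exact sampling, augmentation, and loss
# formulations are not given in the abstract, so every detail below is assumed.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans


def kmeans_hard_negatives(code_vecs: torch.Tensor, n_clusters: int = 8) -> torch.Tensor:
    """For each code vector, pick a 'hard' negative: its nearest code vector
    that lies in a *different* K-means cluster (assumed interpretation)."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        code_vecs.detach().cpu().numpy()
    )
    labels = torch.as_tensor(labels)
    dists = torch.cdist(code_vecs, code_vecs)               # pairwise distances
    same_cluster = labels.unsqueeze(0) == labels.unsqueeze(1)
    dists = dists.masked_fill(same_cluster, float("inf"))   # exclude same-cluster items (and self)
    return code_vecs[dists.argmin(dim=1)]                   # nearest other-cluster vector


def augment(vec: torch.Tensor, anchor: torch.Tensor, hardness: float) -> torch.Tensor:
    """Vector-level, hardness-controllable augmentation: interpolate a mined
    negative toward its anchor; hardness in [0, 1] tunes how hard it becomes."""
    return (1.0 - hardness) * vec + hardness * anchor


def cmcs_style_loss(query_vecs, code_vecs, hard_negs, tau=0.07, margin=0.3):
    """Combined objective: in-batch multimodal InfoNCE (query -> code) plus a
    triplet-style metric term using the mined hard negatives."""
    q = F.normalize(query_vecs, dim=-1)
    c = F.normalize(code_vecs, dim=-1)
    n = F.normalize(hard_negs, dim=-1)

    logits = q @ c.t() / tau                                 # contrastive term
    contrastive = F.cross_entropy(logits, torch.arange(q.size(0)))

    metric = F.triplet_margin_loss(q, c, n, margin=margin)   # metric-learning term
    return contrastive + metric


if __name__ == "__main__":
    torch.manual_seed(0)
    queries, codes = torch.randn(32, 128), torch.randn(32, 128)
    negs = kmeans_hard_negatives(codes, n_clusters=4)
    negs = augment(negs, codes, hardness=0.5)    # dial hardness up or down
    print(cmcs_style_loss(queries, codes, negs).item())
```

In this reading, the augmentation step lets the trainer control how close a mined negative sits to its positive, while the combined loss optimizes both in-batch alignment and the margin against those hardened negatives.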
Qihong Song, Haize Hu, Tebo Dai. CMCS: contrastive-metric learning via vector-level sampling and augmentation for code search. Scientific Reports. 2024 Jun 24;14(1):14549.
PMID: 38914579