Method name prediction
14 papers with code • 1 benchmark • 1 dataset
Libraries
Use these libraries to find Method name prediction models and implementations.

Most implemented papers
Testing Neural Program Analyzers
Deep neural networks have been increasingly used in software engineering and program analysis tasks.
Understanding Neural Code Intelligence Through Program Simplification
Our approach, SIVAND, uses simplification techniques that reduce the size of input programs of a CI model while preserving the predictions of the model.
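The core idea of prediction-preserving simplification can be sketched with a greedy token-removal loop (SIVAND itself uses delta debugging); `predict` below is a hypothetical stand-in for a trained code-intelligence model, not the paper's actual model:

```python
def predict(tokens):
    # Hypothetical toy "model": its prediction depends only on whether
    # two key tokens survive in the input.
    return "getValue" if "return" in tokens and "value" in tokens else "unknown"

def reduce_input(tokens, target):
    """Greedily drop tokens while the model's prediction stays unchanged."""
    i = 0
    while i < len(tokens):
        candidate = tokens[:i] + tokens[i + 1:]
        if predict(candidate) == target:
            tokens = candidate  # token was irrelevant to the prediction; drop it
        else:
            i += 1              # token is needed to preserve the prediction; keep it
    return tokens

source = ["public", "int", "getValue", "(", ")", "{",
          "return", "value", ";", "}"]
reduced = reduce_input(source, predict(source))
# The surviving tokens are the ones this toy model actually relies on.
```

The reduced program exposes which input features drive the model's prediction, which is the kind of insight SIVAND is after.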
Memorization and Generalization in Neural Code Intelligence Models
The goal of this paper is to evaluate and compare the extent of memorization and generalization in neural code intelligence models.
Extracting Label-specific Key Input Features for Neural Code Intelligence Models
Code intelligence (CI) models are often black boxes and offer no insight into the input features they rely on to make correct predictions.
Syntax-Guided Program Reduction for Understanding Neural Code Intelligence Models
Our experiments on multiple models across different types of input programs show that the syntax-guided program reduction technique is faster and provides smaller sets of key tokens in reduced programs.
Assessing Project-Level Fine-Tuning of ML4SE Models
We evaluate three models of different complexity and compare their quality in three settings: trained on a large dataset of Java projects, further fine-tuned on the data from a particular project, and trained from scratch on this data.
Embedding Java Classes with code2vec: Improvements from Variable Obfuscation
code2vec is a recently released embedding approach that uses the proxy task of method name prediction to map Java methods to feature vectors.
Evaluation of Generalizability of Neural Program Analyzers under Semantic-Preserving Transformations
The abundance of publicly available source code repositories, in conjunction with the advances in neural networks, has enabled data-driven approaches to program analysis.
Contrastive Code Representation Learning
Recent work learns contextual representations of source code by reconstructing tokens from their context.
On the Generalizability of Neural Program Models with respect to Semantic-Preserving Program Transformations
With the prevalence of publicly available source code repositories for training deep neural networks, neural program models perform well on source code analysis tasks, such as predicting method names in given programs, that traditional program analysis techniques cannot easily handle.
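A minimal sketch of one semantic-preserving transformation studied in these generalizability papers is consistent variable renaming: the renamed program computes the same function, so a robust method name predictor should produce the same name for both versions. The sketch below uses Python's `ast` module for illustration (the papers mostly target Java), and the identifier choices are illustrative, not drawn from any paper's dataset:

```python
import ast

class RenameVars(ast.NodeTransformer):
    """Consistently rename selected local variables; semantics are unchanged."""
    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        if node.id in self.mapping:
            node.id = self.mapping[node.id]
        return node

src = (
    "def total(items):\n"
    "    acc = 0\n"
    "    for x in items:\n"
    "        acc = acc + x\n"
    "    return acc\n"
)
tree = ast.parse(src)
renamed = ast.unparse(RenameVars({"acc": "v0", "x": "v1"}).visit(tree))
# `renamed` defines the same function as `src`, only with obfuscated
# variable names -- a semantic-preserving transformation a neural
# program model should be invariant to.
```

Evaluating a model on both `src` and `renamed` and comparing its predicted method names is the basic experimental setup these transformation studies build on.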