1 code implementation • 6 Feb 2024 • Zixiao Zhao, Millon Madhur Das, Fatemeh H. Fard
Pre-trained Code Language Models (Code-PLMs) have shown many advancements and achieved state-of-the-art results for many software engineering tasks in the past few years.
1 code implementation • 19 Apr 2022 • Divyam Goel, Ramansh Grover, Fatemeh H. Fard
Although adapters are known to facilitate adaptation to many downstream tasks compared to fine-tuning, which requires retraining all of the model's parameters -- an advantage owed to the adapters' plug-and-play nature and parameter efficiency -- their usage in software engineering has not been explored.
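The plug-and-play, parameter-efficient design this abstract refers to can be sketched as a bottleneck adapter in the style of Houlsby et al.: a small trainable module inserted into a frozen pre-trained layer, with a residual connection so it can be added without disturbing the original model. The class name, dimensions, and plain-NumPy stand-in below are illustrative, not taken from the paper:

```python
import numpy as np

class BottleneckAdapter:
    """Illustrative adapter: down-project, nonlinearity, up-project, residual.
    Only these two small matrices would be trained; the surrounding
    pre-trained layer stays frozen."""

    def __init__(self, hidden_dim, bottleneck_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Down-projection compresses the hidden state to a small bottleneck;
        # up-projection restores it to the original width.
        self.w_down = rng.normal(0.0, 0.02, (hidden_dim, bottleneck_dim))
        self.w_up = rng.normal(0.0, 0.02, (bottleneck_dim, hidden_dim))

    def __call__(self, h):
        # Residual connection: at initialization the adapter is close to an
        # identity map, so it can be "plugged in" without breaking the model.
        return h + np.maximum(h @ self.w_down, 0.0) @ self.w_up

    def num_trainable_params(self):
        return self.w_down.size + self.w_up.size

hidden, bottleneck = 768, 64
adapter = BottleneckAdapter(hidden, bottleneck)
out = adapter(np.zeros((1, hidden)))
# 2 * 768 * 64 = 98,304 trainable weights, far fewer than the
# ~590k of a single full 768x768 transformer sublayer.
print(out.shape, adapter.num_trainable_params())
```

The parameter-efficiency argument is visible in the count: each adapter adds two hidden_dim x bottleneck_dim matrices, so the trainable footprint scales with the bottleneck size rather than with the full model.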
no code implementations • 5 Apr 2022 • Mohammad Abdul Hadi, Imam Nur Bani Yusuf, Ferdian Thung, Kien Gia Luong, Jiang Lingxiao, Fatemeh H. Fard, David Lo
We have also identified two different tokenization approaches that can contribute to a significant boost in PTMs' performance for the API sequence generation task.
no code implementations • 4 Feb 2022 • Yue Cao, Fatemeh H. Fard
In this paper, we evaluate PTMs for generating replies to mobile app user feedback.
no code implementations • 12 Apr 2021 • Mohammad Abdul Hadi, Fatemeh H. Fard
In addition, we investigate the performance of the PTMs trained on app reviews (i.e., domain-specific PTMs).
no code implementations • 19 Mar 2021 • Ramin Shahbazi, Rishab Sharma, Fatemeh H. Fard
However, as the number of APIs used in a method increases, the model's performance in generating comments decreases, owing to the long API documentation included in the input.