no code implementations • 19 Mar 2024 • Anh Bui, Vy Vo, Tung Pham, Dinh Phung, Trung Le
Abundant theoretical and empirical evidence has long supported the success of ensemble learning.
no code implementations • 23 Feb 2024 • Vy Vo, He Zhao, Trung Le, Edwin V. Bonilla, Dinh Phung
Merely filling in missing values with existing imputation methods and subsequently applying structure learning on the completed data is empirically shown to be sub-optimal.
no code implementations • 28 Jan 2024 • Le Chen, Arijit Bhattacharjee, Nesreen Ahmed, Niranjan Hasabnis, Gal Oren, Vy Vo, Ali Jannesari
Our extensive evaluations demonstrate that OMPGPT outperforms existing large language models specialized in OpenMP tasks and maintains a notably smaller size, aligning it more closely with the typical hardware constraints of HPC environments.
1 code implementation • 25 May 2023 • Vy Vo, Trung Le, Long-Tung Vuong, He Zhao, Edwin Bonilla, Dinh Phung
Estimating the parameters of a probabilistic directed graphical model from incomplete data remains a long-standing challenge.
1 code implementation • 27 Sep 2022 • Vy Vo, Trung Le, Van Nguyen, He Zhao, Edwin Bonilla, Gholamreza Haffari, Dinh Phung
Interpretable machine learning seeks to understand the reasoning process of complex black-box systems that are notorious for their lack of explainability.
1 code implementation • 7 Jul 2022 • Vy Vo, Van Nguyen, Trung Le, Quan Hung Tran, Gholamreza Haffari, Seyit Camtepe, Dinh Phung
A popular attribution-based approach is to exploit local neighborhoods for learning instance-specific explainers in an additive manner.
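The additive, neighborhood-based idea described above can be sketched as a LIME-style toy example: perturb the instance locally, weight samples by proximity, and fit a linear surrogate whose coefficients serve as attributions. The black-box function and all names below are illustrative, not the paper's method:

```python
import numpy as np

# Hypothetical black-box model (illustrative, not from the paper)
def black_box(X):
    return np.sin(X[:, 0]) + X[:, 1] ** 2

rng = np.random.default_rng(0)
x0 = np.array([0.5, -0.2])  # instance to explain

# Sample a local neighborhood around x0
Z = x0 + 0.1 * rng.standard_normal((200, 2))
y = black_box(Z)

# Proximity weights: perturbations closer to x0 count more
w = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.01)

# Fit a weighted linear (additive) surrogate: y ≈ intercept + attributions · (z - x0)
A = np.hstack([np.ones((len(Z), 1)), Z - x0])
coef, *_ = np.linalg.lstsq(A * np.sqrt(w)[:, None], y * np.sqrt(w), rcond=None)
intercept, attributions = coef[0], coef[1:]
```

Locally, the surrogate's coefficients approximate the black box's sensitivity to each feature around `x0`, which is the sense in which the explainer is instance-specific.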
1 code implementation • 10 Jun 2022 • Vy Vo, Weiqing Wang, Wray Buntine
Text simplification is the task of rewriting a text so that it is readable and easily understood.
1 code implementation • NeurIPS 2021 • Richard Antonello, Javier Turek, Vy Vo, Alexander Huth
We find that this representation embedding can predict how well each individual feature space maps to human brain responses to natural language stimuli recorded using fMRI.
no code implementations • NeurIPS 2020 • Shailee Jain, Vy Vo, Shivangi Mahto, Amanda LeBel, Javier S. Turek, Alexander Huth
To understand how the human brain represents this information, one approach is to build encoding models that predict fMRI responses to natural language using representations extracted from neural network language models (LMs).
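One common way to build such an encoding model is regularized linear regression from stimulus features to voxel responses. The sketch below assumes ridge regression on fully synthetic data; the feature matrix, response matrix, and all names are illustrative stand-ins, not the study's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)
n_timepoints, n_features, n_voxels = 300, 50, 10

# Illustrative stand-ins: LM-derived stimulus features and synthetic voxel responses
F = rng.standard_normal((n_timepoints, n_features))
W_true = rng.standard_normal((n_features, n_voxels))
Y = F @ W_true + 0.5 * rng.standard_normal((n_timepoints, n_voxels))

# Ridge regression: closed-form solution of (F'F + alpha*I) W = F'Y
alpha = 1.0
W_hat = np.linalg.solve(F.T @ F + alpha * np.eye(n_features), F.T @ Y)

# Score the model by per-voxel correlation between predicted and observed responses
pred = F @ W_hat
r = [np.corrcoef(pred[:, v], Y[:, v])[0, 1] for v in range(n_voxels)]
```

In practice such models are evaluated on held-out stimuli; the in-sample correlations here only illustrate the fitting step.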
1 code implementation • ICML 2020 • Javier S. Turek, Shailee Jain, Vy Vo, Mihai Capota, Alexander G. Huth, Theodore L. Willke
In this work, we explore the delayed-RNN, which is a single-layer RNN that has a delay between the input and output.
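A minimal sketch of the delayed-RNN idea, assuming a vanilla single-layer tanh RNN whose output for input step t is read d steps later, giving the network extra computation time. Parameter names and the zero-padding choice are illustrative assumptions:

```python
import numpy as np

def delayed_rnn(x, Wx, Wh, d):
    """Run a single-layer tanh RNN over x, reading outputs with a delay of d steps.

    The output aligned with input step t is the hidden state after the network
    has processed input t plus d further (zero-padded) steps.
    """
    T, n_in = x.shape
    n_h = Wh.shape[0]
    # Pad with d zero-input steps so the network keeps ticking past the sequence end
    x_pad = np.vstack([x, np.zeros((d, n_in))])
    h = np.zeros(n_h)
    states = []
    for t in range(T + d):
        h = np.tanh(x_pad[t] @ Wx + h @ Wh)
        states.append(h)
    # The state at step t + d is the delayed output for input step t
    return np.stack(states[d:])

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 3))
Wx = 0.1 * rng.standard_normal((3, 4))
Wh = 0.1 * rng.standard_normal((4, 4))
out = delayed_rnn(x, Wx, Wh, 2)
```

With d = 0 this reduces to an ordinary single-layer RNN, which makes the delay a clean, isolated knob to study.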