1 code implementation • 1 Dec 2022 • Zijian Zhou, Xinyi Xu, Rachael Hwee Ling Sim, Chuan Sheng Foo, Kian Hsiang Low
We observe that the fairness guarantees of exact SVs are too restrictive for SV estimates.
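The gap between exact SVs and their estimates comes from how Shapley values are computed in practice: exact computation averages marginal contributions over all player orderings, while practical estimators sample orderings. A minimal sketch, using a hypothetical additive coalition game (an assumption for illustration, chosen so each player's exact SV equals its own weight and is easy to check):

```python
import itertools
import random

# Hypothetical additive game: a coalition's value is the sum of its
# members' weights. For additive games the Shapley value of each player
# is exactly its weight, which gives us an easy sanity check.
weights = {0: 1.0, 1: 2.0, 2: 3.0}

def value(coalition):
    return sum(weights[p] for p in coalition)

def marginal_contributions(perm):
    """Marginal contribution of each player as it joins in order."""
    contrib, coalition = {}, []
    for p in perm:
        before = value(coalition)
        coalition.append(p)
        contrib[p] = value(coalition) - before
    return contrib

def exact_shapley(players):
    """Average marginal contribution over all |N|! orderings."""
    sv = {p: 0.0 for p in players}
    perms = list(itertools.permutations(players))
    for perm in perms:
        for p, c in marginal_contributions(perm).items():
            sv[p] += c
    return {p: s / len(perms) for p, s in sv.items()}

def monte_carlo_shapley(players, n_samples, seed=0):
    """Permutation-sampling SV estimate (the kind of estimator in question)."""
    rng = random.Random(seed)
    sv = {p: 0.0 for p in players}
    for _ in range(n_samples):
        perm = list(players)
        rng.shuffle(perm)
        for p, c in marginal_contributions(perm).items():
            sv[p] += c
    return {p: s / n_samples for p, s in sv.items()}

players = [0, 1, 2]
exact = exact_shapley(players)
approx = monte_carlo_shapley(players, 2000)
```

In realistic data-valuation games the characteristic function is a model's test performance on a coalition of data points, and the estimator's sampling error is exactly why exact-SV fairness axioms need to be relaxed for estimates.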
1 code implementation • NeurIPS 2020 • Zhongxiang Dai, Kian Hsiang Low, Patrick Jaillet
Bayesian optimization (BO) is a prominent approach to optimizing expensive-to-evaluate black-box functions.
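As a toy illustration of the basic BO loop (not this paper's algorithm), here is a minimal GP-UCB sketch on a hypothetical one-dimensional objective; the kernel, length-scale, grid, and exploration weight are all illustrative assumptions:

```python
import numpy as np

def rbf(a, b, ls=0.2):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and variance at test inputs Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = np.diag(rbf(Xs, Xs) - Ks.T @ np.linalg.solve(K, Ks))
    return mu, np.maximum(var, 0.0)

def f(x):
    """Hypothetical expensive black-box objective (maximum at x = 0.3)."""
    return -(x - 0.3) ** 2

grid = np.linspace(0.0, 1.0, 201)
X = np.array([0.0, 1.0])          # two initial evaluations
y = f(X)
for _ in range(10):
    mu, var = gp_posterior(X, y, grid)
    ucb = mu + 2.0 * np.sqrt(var)  # upper-confidence-bound acquisition
    x_next = grid[np.argmax(ucb)]  # next query: optimistic candidate
    X = np.append(X, x_next)
    y = np.append(y, f(x_next))
best = X[np.argmax(y)]             # best input found so far
```

The loop spends its small evaluation budget by trading off exploration (high posterior variance) against exploitation (high posterior mean), which is the defining feature of BO.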
no code implementations • ICML 2020 • Zhongxiang Dai, Yizhou Chen, Kian Hsiang Low, Patrick Jaillet, Teck-Hua Ho
This paper presents a recursive reasoning formalism of Bayesian optimization (BO) to model the reasoning process in the interactions between boundedly rational, self-interested agents with unknown, complex, and costly-to-evaluate payoff functions in repeated games, which we call Recursive Reasoning-Based BO (R2-B2).
no code implementations • 22 Feb 2020 • Dmitrii Kharkovskii, Chun Kai Ling, Kian Hsiang Low
This paper presents a multi-staged approach to nonmyopic adaptive Gaussian process optimization (GPO) for Bayesian optimization (BO) of unknown, highly complex objective functions. In contrast to existing nonmyopic adaptive BO algorithms, it exploits the notion of macro-actions to scale up to a further lookahead that matches a larger available budget.
no code implementations • 5 Dec 2019 • Tong Teng, Jie Chen, Yehong Zhang, Kian Hsiang Low
To achieve this, we represent the probabilistic kernel as an additional variational variable in a variational inference (VI) framework for SGPR models, where its posterior belief is learned together with those of the other variational variables (i.e., the inducing variables and kernel hyperparameters).
no code implementations • 16 Nov 2019 • Tien Mai, Quoc Phong Nguyen, Kian Hsiang Low, Patrick Jaillet
We consider the problem of recovering an expert's reward function with inverse reinforcement learning (IRL) when there are missing/incomplete state-action pairs or observations in the demonstrated trajectories.
1 code implementation • NeurIPS 2019 • Haibin Yu, Yizhou Chen, Zhongxiang Dai, Kian Hsiang Low, Patrick Jaillet
This paper presents an implicit posterior variational inference (IPVI) framework for DGPs that can ideally recover an unbiased posterior belief and still preserve time efficiency.
no code implementations • 17 Jun 2019 • Yehong Zhang, Zhongxiang Dai, Kian Hsiang Low
This paper presents novel mixed-type Bayesian optimization (BO) algorithms to accelerate the optimization of a target objective function by exploiting correlated auxiliary information of binary type that can be more cheaply obtained, such as in policy search for reinforcement learning and hyperparameter tuning of machine learning models with early stopping.
1 code implementation • 15 Mar 2019 • Quoc Phong Nguyen, Kar Wai Lim, Dinil Mon Divakaran, Kian Hsiang Low, Mun Choon Chan
This paper looks into the problem of detecting network anomalies by analyzing NetFlow records.
no code implementations • 28 Feb 2019 • Jingfeng Zhang, Bo Han, Laura Wynter, Kian Hsiang Low, Mohan Kankanhalli
Our analytical studies reveal that the step factor h in the Euler method controls the robustness of ResNet in both its training and generalization.
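The Euler-method view treats a residual block as one step of x ← x + h·f(x), so a smaller h limits how much an input perturbation can grow through depth. A deterministic toy sketch with a linear residual map (an illustrative assumption; real ResNet blocks are nonlinear and learned), where the amplification factor is exactly (1 + 0.1h)^depth:

```python
import numpy as np

A = 0.1 * np.eye(8)  # toy linear residual map (assumption for illustration)

def resnet_forward(x, h, depth=20):
    """Residual network viewed as explicit Euler integration: x <- x + h*f(x)."""
    for _ in range(depth):
        x = x + h * (A @ x)
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
delta = 1e-3 * rng.standard_normal(8)  # small input perturbation

def amplification(h):
    """How much the perturbation grows through the network."""
    d = resnet_forward(x + delta, h) - resnet_forward(x, h)
    return np.linalg.norm(d) / np.linalg.norm(delta)

amp_large = amplification(1.0)   # (1 + 0.1)^20, about 6.73
amp_small = amplification(0.1)   # (1 + 0.01)^20, about 1.22
```

Because the map is linear here, the two amplification factors can be computed in closed form; the qualitative point, that shrinking h damps perturbation growth, is what the paper's analysis establishes for trained ResNets.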
no code implementations • 23 May 2018 • Trong Nghia Hoang, Quang Minh Hoang, Kian Hsiang Low, Jonathan How
Distributed machine learning (ML) is a modern computation paradigm that divides its workload into independent tasks that can be performed simultaneously by multiple machines (i.e., agents) for better scalability.
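A minimal sketch of this paradigm, assuming simple data-parallel gradient computation for linear least squares (a generic pattern, not this paper's method): each "machine" summarizes only its own shard, and the size-weighted average of the local gradients reproduces the full-batch gradient exactly:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.1 * rng.standard_normal(100)
w = np.zeros(5)

def gradient(Xb, yb, w):
    """Gradient of the mean squared error 0.5 * mean((Xb @ w - yb)**2)."""
    return Xb.T @ (Xb @ w - yb) / len(yb)

# "Agents": each machine holds one shard and computes a local gradient.
shards = np.array_split(np.arange(100), 4)
local = [gradient(X[idx], y[idx], w) for idx in shards]

# Size-weighted average of local gradients equals the full-batch gradient.
combined = sum(len(idx) * g for idx, g in zip(shards, local)) / 100
full = gradient(X, y, w)
```

The independence of the per-shard tasks is what makes the workload embarrassingly parallel; only the small gradient summaries need to be communicated.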
no code implementations • 19 Nov 2017 • Trong Nghia Hoang, Quang Minh Hoang, Ruofei Ouyang, Kian Hsiang Low
This paper presents a novel decentralized high-dimensional Bayesian optimization (DEC-HBO) algorithm that, in contrast to existing HBO algorithms, exploits the interdependent effects of various input components on the output of the unknown objective function f to boost BO performance, while preserving scalability in the number of input dimensions without requiring prior knowledge of, or the existence of, a low (effective) dimension of the input space.
no code implementations • 16 Nov 2017 • Ruofei Ouyang, Kian Hsiang Low
To achieve this, we propose a novel transfer learning mechanism for a team of agents that can share and transfer information encapsulated in a summary based on one support set to an agent utilizing a different support set, with a loss that can be theoretically bounded and analyzed.
no code implementations • 1 Nov 2017 • Haibin Yu, Trong Nghia Hoang, Kian Hsiang Low, Patrick Jaillet
This paper presents a novel variational inference framework for deriving a family of Bayesian sparse Gaussian process regression (SGPR) models whose approximations are variationally optimal with respect to the full-rank GPR model enriched with various corresponding correlation structures of the observation noises.
no code implementations • 18 Nov 2016 • Quang Minh Hoang, Trong Nghia Hoang, Kian Hsiang Low
While much research effort has been dedicated to scaling up sparse Gaussian process (GP) models based on inducing variables for big data, little attention is afforded to the other less explored class of low-rank GP approximations that exploit the sparse spectral representation of a GP kernel.
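The best-known instance of such spectral low-rank approximations is random Fourier features, which approximate a stationary kernel by Monte Carlo sampling of its spectral density (a standard construction given here for context, not necessarily the one developed in this paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(a, b, ls=1.0):
    """Exact squared-exponential kernel value."""
    return np.exp(-0.5 * np.sum((a - b) ** 2) / ls ** 2)

def rff_features(x, Omega, b):
    """Random Fourier feature map: z(x) = sqrt(2/m) * cos(Omega x + b)."""
    m = Omega.shape[0]
    return np.sqrt(2.0 / m) * np.cos(Omega @ x + b)

d, m = 3, 10000
Omega = rng.standard_normal((m, d))       # spectral frequencies (ls = 1)
b = rng.uniform(0.0, 2.0 * np.pi, m)      # random phases
x1, x2 = rng.standard_normal(d), rng.standard_normal(d)

approx = rff_features(x1, Omega, b) @ rff_features(x2, Omega, b)
exact = rbf_kernel(x1, x2)
```

The inner product of the m-dimensional feature vectors converges to the exact kernel value as m grows, turning GP regression into low-rank (m-dimensional) linear algebra.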
1 code implementation • 5 Jan 2016 • Jie Fu, Hongyin Luo, Jiashi Feng, Kian Hsiang Low, Tat-Seng Chua
The performance of deep neural networks is well-known to be sensitive to the setting of their hyperparameters.
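This sensitivity already appears in the simplest possible setting: gradient descent on a one-dimensional quadratic converges or diverges depending solely on the learning rate. A deterministic toy sketch with illustrative values (the update is w ← (1 − lr)·w, so 50 steps scale |w| by |1 − lr|^50):

```python
def gd(lr, steps=50, w0=1.0):
    """Gradient descent on f(w) = 0.5 * w**2, whose gradient is w."""
    w = w0
    for _ in range(steps):
        w -= lr * w
    return abs(w)

small = gd(0.5)   # |1 - 0.5|**50: converges to ~0
large = gd(2.5)   # |1 - 2.5|**50: blows up
```

Deep networks add many more interacting hyperparameters (depth, width, regularization, schedules) on top of this basic instability, which is what motivates automated hyperparameter optimization.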
no code implementations • 23 Nov 2015 • Chao Wang, Somchaya Liemhetcharat, Kian Hsiang Low
A continuous transportation task is one in which a multi-agent team visits a number of fixed locations, picks up objects, and delivers them to a final destination.
1 code implementation • 21 Nov 2015 • Yehong Zhang, Trong Nghia Hoang, Kian Hsiang Low, Mohan Kankanhalli
This paper addresses the problem of active learning of a multi-output Gaussian process (MOGP) model representing multiple types of coexisting correlated environmental phenomena.
no code implementations • 21 Nov 2015 • Chun Kai Ling, Kian Hsiang Low, Patrick Jaillet
This paper presents a novel nonmyopic adaptive Gaussian process planning (GPP) framework endowed with a general class of Lipschitz continuous reward functions that can unify some active learning/sensing and Bayesian optimization criteria and offer practitioners some flexibility to specify their desired choices for defining new tasks/problems.
no code implementations • 17 Nov 2014 • Kian Hsiang Low, Jiangbo Yu, Jie Chen, Patrick Jaillet
To improve its scalability, this paper presents a low-rank-cum-Markov approximation (LMA) of the GP model that is novel in leveraging the dual computational advantages of complementing a low-rank approximate representation of the full-rank GP, based on a support set of inputs, with a Markov approximation of the resulting residual process. The latter approximation is guaranteed to be closest in the Kullback-Leibler distance criterion subject to some constraint, and is considerably more refined than that of existing sparse GP models utilizing low-rank representations due to its more relaxed conditional independence assumption (especially with larger data).
no code implementations • 9 Aug 2014 • Jie Chen, Kian Hsiang Low, Colin Keng-Yan Tan, Ali Oran, Patrick Jaillet, John Dolan, Gaurav Sukhatme
The problem of modeling and predicting spatiotemporal traffic phenomena over an urban road network is important to many traffic applications such as detecting and forecasting congestion hotspots.
no code implementations • 9 Aug 2014 • Jie Chen, Nannan Cao, Kian Hsiang Low, Ruofei Ouyang, Colin Keng-Yan Tan, Patrick Jaillet
We theoretically guarantee that the predictive performance of our proposed parallel GPs is equivalent to that of some centralized approximate GP regression methods: the computation of their centralized counterparts can be distributed among parallel machines, hence achieving greater time efficiency and scalability.
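The flavor of such an equivalence guarantee can be illustrated with a finite feature map standing in for a low-rank GP representation (a generic sketch, not the paper's specific construction): each machine computes local sufficient statistics on its shard, and summing those statistics recovers exactly the centralized solution:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 200, 4
X = rng.standard_normal((n, d))
y = rng.standard_normal(n)
noise = 0.1

def features(Xb):
    """Toy finite feature map standing in for a low-rank GP representation."""
    return np.hstack([Xb, np.sin(Xb)])

# Centralized solve: w = (Phi^T Phi + noise*I)^{-1} Phi^T y
Phi = features(X)
A = Phi.T @ Phi + noise * np.eye(Phi.shape[1])
w_central = np.linalg.solve(A, Phi.T @ y)

# Distributed: each machine summarizes its shard; summaries are summed.
A_sum = noise * np.eye(Phi.shape[1])
b_sum = np.zeros(Phi.shape[1])
for idx in np.array_split(np.arange(n), 4):
    Pi = features(X[idx])
    A_sum += Pi.T @ Pi          # local Gram contribution
    b_sum += Pi.T @ y[idx]      # local cross-covariance contribution
w_distributed = np.linalg.solve(A_sum, b_sum)
```

Because the normal equations are additive over data shards, the distributed result matches the centralized one up to floating-point rounding; this is the sense in which a parallel approximation can be provably equivalent to its centralized counterpart.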
no code implementations • 21 Apr 2014 • Nuo Xu, Kian Hsiang Low, Jie Chen, Keng Kiat Lim, Etkin Baris Ozgul
Central to robot exploration and mapping is the task of persistent localization in environmental fields characterized by spatially correlated measurements.
no code implementations • 2 Jun 2013 • Jie Chen, Kian Hsiang Low, Colin Keng-Yan Tan
This paper presents a novel decentralized data fusion and active sensing algorithm for real-time, fine-grained mobility demand sensing and prediction with a fleet of autonomous robotic vehicles in a mobility-on-demand (MoD) system.
no code implementations • 27 May 2013 • Kian Hsiang Low, John M. Dolan, Pradeep Khosla
The time complexity of solving the MASP approximately depends on the map resolution, which limits its use in large-scale, high-resolution exploration and mapping.
no code implementations • 24 May 2013 • Jie Chen, Nannan Cao, Kian Hsiang Low, Ruofei Ouyang, Colin Keng-Yan Tan, Patrick Jaillet
We theoretically guarantee that the predictive performance of our proposed parallel GPs is equivalent to that of some centralized approximate GP regression methods: the computation of their centralized counterparts can be distributed among parallel machines, hence achieving greater time efficiency and scalability.
1 code implementation • 18 Apr 2013 • Trong Nghia Hoang, Kian Hsiang Low
A key challenge in non-cooperative multi-agent systems is that of developing efficient planning algorithms for intelligent agents to interact and perform effectively among boundedly rational, self-interested agents (e.g., humans).
no code implementations • 7 Apr 2013 • Trong Nghia Hoang, Kian Hsiang Low
Recent advances in Bayesian reinforcement learning (BRL) have shown that Bayes-optimality is theoretically achievable by modeling the environment's latent dynamics using Flat-Dirichlet-Multinomial (FDM) prior.
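The FDM prior models each (state, action) pair's next-state distribution with an independent Dirichlet, which is conjugate to the multinomial likelihood, so the posterior over the latent dynamics is obtained by simply adding observed transition counts to the prior pseudo-counts. A minimal sketch with illustrative numbers:

```python
import numpy as np

n_states = 3
alpha = np.ones(n_states)   # symmetric Dirichlet prior (the FDM assumption)

# Hypothetical observed next-state counts for one (state, action) pair.
counts = np.array([5, 1, 0])

# Conjugacy: Dirichlet posterior parameters = prior pseudo-counts + counts.
posterior = alpha + counts                 # [6, 2, 1]
mean = posterior / posterior.sum()         # posterior-mean transition estimate
```

This closed-form update is what makes maintaining a belief over the environment's dynamics tractable in BRL, and it is the starting point for reasoning about Bayes-optimal exploration.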