no code implementations • ICML 2020 • Hassan Rafique, Tong Wang, Qihang Lin, Arshia Singhani
We propose a novel type of hybrid model for multi-class classification, which utilizes competing linear models to collaborate with an existing black-box model, promoting transparency in the decision-making process.
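To make the collaboration concrete, here is a minimal, hypothetical sketch of a hybrid prediction rule: a set of linear models score an instance, and the black-box is consulted only when no linear model is confident. The `decision_function`/`predict` interfaces and the margin rule are illustrative assumptions, not the paper's exact specification.

```python
import numpy as np

def hybrid_predict(x, linear_models, black_box, margin=0.5):
    """Route an instance to the most confident linear model if its score
    clears a margin (transparent path); otherwise defer to the black-box."""
    scores = np.array([m.decision_function(x.reshape(1, -1))[0] for m in linear_models])
    best = int(np.argmax(scores))
    if scores[best] >= margin:                      # a linear model claims this instance
        return linear_models[best].predict(x.reshape(1, -1))[0]
    return black_box.predict(x.reshape(1, -1))[0]   # fall back to the black-box model
```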
no code implementations • 14 Jul 2023 • Wei Liu, Qihang Lin, Yangyang Xu
In this paper, we make the first attempt to establish lower complexity bounds of FOMs for solving a class of composite non-convex non-smooth optimization with linear constraints.
no code implementations • 23 Dec 2022 • Yao Yao, Qihang Lin, Tianbao Yang
In this work, we formulate the training problem of a fairness-aware machine learning model as an AUC optimization problem subject to a class of AUC-based fairness constraints.
2 code implementations • 6 Nov 2022 • Ronilo J. Ragodos, Tong Wang, Qihang Lin, Xun Zhou
To teach ProtoX about visual similarity, we pre-train an encoder with a self-supervised contrastive objective that treats two states as similar if they occur close together in time and receive the same action from the black-box agent.
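The pairing rule described above can be sketched as follows; the function name, the time window, and the assumption that an InfoNCE-style loss is then applied to the resulting pairs are illustrative, not taken from the paper.

```python
def contrastive_pairs(states, actions, window=3):
    """Treat two states as a positive pair if they occur within `window`
    time steps of each other and the black-box agent took the same action
    in both; the encoder is then trained to embed positive pairs nearby."""
    pairs = []
    for i in range(len(states)):
        for j in range(i + 1, min(i + 1 + window, len(states))):
            if actions[i] == actions[j]:
                pairs.append((states[i], states[j]))
    return pairs
```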
no code implementations • 21 Jul 2022 • Yankun Huang, Qihang Lin, Nick Street, Stephen Baek
We propose a federated learning method with weighted nodes in which the weights can be modified to optimize the model's performance on a separate validation set.
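A minimal sketch of the server-side aggregation step is below, assuming each client sends a flat parameter vector. In the method described above, the per-node weights would themselves be adjusted to improve performance on the separate validation set; the simple normalization here is only illustrative.

```python
import numpy as np

def weighted_aggregate(client_params, weights):
    """Combine client parameter vectors with per-node weights (weighted FedAvg)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize node weights
    stacked = np.stack(client_params)     # shape: (num_clients, num_params)
    return (w[:, None] * stacked).sum(axis=0)
```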
no code implementations • 3 Mar 2022 • Yao Yao, Qihang Lin, Tianbao Yang
The partial AUC, as a generalization of the AUC, summarizes only the true positive rates (TPRs) over a specific range of false positive rates (FPRs) and is thus a more suitable performance measure in many real-world situations.
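For reference, one standard normalized definition of the partial AUC over an FPR interval $[\alpha, \beta]$ is

$$\mathrm{pAUC}(\alpha,\beta) \;=\; \frac{1}{\beta-\alpha}\int_{\alpha}^{\beta} \mathrm{TPR}\big(\mathrm{FPR}^{-1}(u)\big)\, du,$$

which recovers the ordinary AUC when $\alpha=0$ and $\beta=1$; the notation here is generic rather than the paper's.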
no code implementations • NeurIPS 2020 • Yan Yan, Yi Xu, Qihang Lin, Wei Liu, Tianbao Yang
In this paper, we bridge this gap by providing a sharp analysis of epoch-wise stochastic gradient descent ascent method (referred to as Epoch-GDA) for solving strongly convex strongly concave (SCSC) min-max problems, without imposing any additional assumption about smoothness or the function's structure.
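The following is a hedged sketch of an epoch-wise GDA scheme of this flavor: within each epoch, run plain (stochastic) gradient descent ascent with a fixed step size, then restart the next epoch from the averaged iterates with a smaller step size and a longer epoch. The halving/doubling schedule and function names are illustrative assumptions, not the paper's exact constants.

```python
def epoch_gda(grad_x, grad_y, x0, y0, eta0=0.1, T0=100, num_epochs=5):
    """Epoch-wise (stochastic) gradient descent ascent with restarts."""
    x, y, eta, T = x0, y0, eta0, T0
    for _ in range(num_epochs):
        xs, ys = [], []
        for _ in range(T):
            gx, gy = grad_x(x, y), grad_y(x, y)    # (stochastic) partial gradients
            x, y = x - eta * gx, y + eta * gy      # descent in x, ascent in y
            xs.append(x); ys.append(y)
        x = sum(xs) / len(xs)                      # restart from epoch averages
        y = sum(ys) / len(ys)
        eta, T = eta / 2, T * 2                    # shrink step size, grow epoch length
    return x, y
```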
1 code implementation • 9 Jan 2020 • Parshan Pakiman, Selvaprabu Nadarajah, Negar Soheili, Qihang Lin
Approximate linear programs (ALPs) are well-known models based on value function approximations (VFAs) to obtain policies and lower bounds on the optimal policy cost of discounted-cost Markov decision processes (MDPs).
no code implementations • 23 Sep 2019 • Hassan Rafique, Tong Wang, Qihang Lin
Driven by an increasing need for model interpretability, interpretable models have become strong competitors for black-box models in many real applications.
no code implementations • 7 Aug 2019 • Qihang Lin, Selvaprabu Nadarajah, Negar Soheili, Tianbao Yang
We design a stochastic feasible level set method (SFLS) for SOECs that has low data complexity and emphasizes feasibility before convergence.
no code implementations • 10 May 2019 • Tong Wang, Qihang Lin
The interpretable model substitutes for the black-box model on a subset of data where the black-box is overkill or nearly so, gaining transparency at little or no cost in predictive accuracy.
no code implementations • 23 Apr 2019 • Yan Yan, Yi Xu, Qihang Lin, Lijun Zhang, Tianbao Yang
The main contribution of this paper is the design and analysis of new stochastic primal-dual algorithms that use a mixture of stochastic gradient updates and a logarithmic number of deterministic dual updates for solving a family of convex-concave problems with no bilinear structure assumed.
no code implementations • 28 Nov 2018 • Yi Xu, Qi Qi, Qihang Lin, Rong Jin, Tianbao Yang
In this paper, we propose new stochastic optimization algorithms and study their first-order convergence theories for solving a broad family of DC functions.
no code implementations • 24 Oct 2018 • Mingrui Liu, Hassan Rafique, Qihang Lin, Tianbao Yang
In this paper, we consider first-order convergence theory and algorithms for solving a class of non-convex non-concave min-max saddle-point problems, whose objective function is weakly convex in the variables of minimization and weakly concave in the variables of maximization.
no code implementations • 4 Oct 2018 • Hassan Rafique, Mingrui Liu, Qihang Lin, Tianbao Yang
Min-max problems have broad applications in machine learning, including learning with non-decomposable loss and learning with robustness to data distribution.
no code implementations • 30 Aug 2018 • Yan Yan, Tianbao Yang, Zhe Li, Qihang Lin, Yi Yang
However, the theoretical analysis of their convergence on the training objective and of the generalization error for prediction remains under-explored.
no code implementations • ICML 2018 • Qihang Lin, Runchao Ma, Tianbao Yang
To update the level parameter towards the optimality, both methods require an oracle that generates upper and lower bounds as well as an affine-minorant of the level function.
no code implementations • 14 Feb 2018 • Michael T. Lash, Qihang Lin, W. Nick Street
Inverse classification uses an induced classifier as a queryable oracle to guide test instances towards a preferred posterior class label.
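A minimal zeroth-order sketch of this oracle view is below: only the classifier's predicted probabilities are queried, and only a designated set of mutable features is nudged toward the preferred class. The finite-difference search, step sizes, and function names are illustrative assumptions; the actual methods also handle budgets and costs on feature changes.

```python
import numpy as np

def inverse_classify(x, predict_proba, target_class, mutable_idx,
                     step=0.05, iters=200, eps=1e-3):
    """Nudge the mutable features of an instance toward a preferred class,
    treating the trained classifier purely as a queryable oracle."""
    x = x.copy()
    for _ in range(iters):
        base = predict_proba(x)[target_class]
        grad = np.zeros_like(x)
        for i in mutable_idx:                        # probe each mutable feature
            xp = x.copy(); xp[i] += eps
            grad[i] = (predict_proba(xp)[target_class] - base) / eps
        x[mutable_idx] += step * grad[mutable_idx]   # move toward the target class
    return x
```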
no code implementations • NeurIPS 2017 • Yi Xu, Qihang Lin, Tianbao Yang
The most studied error bound is the quadratic error bound, which generalizes strong convexity and is satisfied by a large family of machine learning problems.
no code implementations • NeurIPS 2017 • Yi Xu, Mingrui Liu, Qihang Lin, Tianbao Yang
The novelty of the proposed scheme lies in its adaptivity to a local sharpness property of the objective function, which marks the key difference from previous adaptive schemes that adjust the penalty parameter per iteration based on certain conditions on the iterates.
no code implementations • 13 Oct 2017 • Lin Xiao, Adams Wei Yu, Qihang Lin, Weizhu Chen
Machine learning with big data often involves large optimization models.
no code implementations • ICML 2017 • Yi Xu, Qihang Lin, Tianbao Yang
In this paper, a new theory is developed for first-order stochastic convex optimization, showing that the global convergence rate is sufficiently quantified by a local growth rate of the objective function in a neighborhood of the optimal solutions.
2 code implementations • ICLR 2018 • Adams Wei Yu, Lei Huang, Qihang Lin, Ruslan Salakhutdinov, Jaime Carbonell
In this paper, we propose a generic and simple strategy for utilizing stochastic gradient information in optimization.
no code implementations • 21 Dec 2016 • Xi Chen, Kevin Jiao, Qihang Lin
Rank aggregation based on pairwise comparisons over a set of items has a wide range of applications.
no code implementations • NeurIPS 2016 • Yi Xu, Yan Yan, Qihang Lin, Tianbao Yang
To the best of our knowledge, this is the lowest iteration complexity achieved so far for the considered non-smooth optimization problems without a strong convexity assumption.
no code implementations • 5 Oct 2016 • Michael T. Lash, Qihang Lin, W. Nick Street, Jennifer G. Robinson, Jeffrey Ohlmann
To solve such a problem, we propose three real-valued heuristic-based methods and two sensitivity analysis-based comparison methods, each of which is evaluated on two freely available real-world datasets.
no code implementations • ICML 2017 • Tianbao Yang, Qihang Lin, Lijun Zhang
In this paper, we develop projection-reduced optimization algorithms for both smooth and non-smooth optimization with improved convergence rates under a certain regularity condition of the constraint function.
no code implementations • NeurIPS 2016 • Yi Xu, Yan Yan, Qihang Lin, Tianbao Yang
In this work, we show that the proposed HOPS achieves a lower iteration complexity of $\widetilde O(1/\epsilon^{1-\theta})$, where $\widetilde O(\cdot)$ suppresses a logarithmic factor.
no code implementations • 4 Jul 2016 • Yi Xu, Qihang Lin, Tianbao Yang
In particular, if the objective function $F(\mathbf w)$ in the $\epsilon$-sublevel set grows as fast as $\|\mathbf w - \mathbf w_*\|_2^{1/\theta}$, where $\mathbf w_*$ represents the closest optimal solution to $\mathbf w$ and $\theta\in(0, 1]$ quantifies the local growth rate, the iteration complexity of first-order stochastic optimization for achieving an $\epsilon$-optimal solution can be $\widetilde O(1/\epsilon^{2(1-\theta)})$, which is optimal at most up to a logarithmic factor.
1 code implementation • 29 May 2016 • Michael T. Lash, Qihang Lin, W. Nick Street, Jennifer G. Robinson
In this paper we propose a general framework and method that overcomes these and other limitations.
no code implementations • 12 Apr 2016 • Tianbao Yang, Qihang Lin, Zhe Li
This paper fills the gap between practice and theory by developing a basic convergence analysis of two stochastic momentum methods, namely the stochastic heavy-ball method and the stochastic variant of Nesterov's accelerated gradient method.
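For context, the two methods use the standard momentum updates, written here with a stochastic gradient $g(\cdot)$, step size $\alpha$, and momentum parameter $\beta$ (generic notation, not necessarily the paper's):

$$w_{t+1} = w_t - \alpha\, g(w_t) + \beta\,(w_t - w_{t-1}) \quad\text{(stochastic heavy-ball)},$$
$$y_t = w_t + \beta\,(w_t - w_{t-1}), \qquad w_{t+1} = y_t - \alpha\, g(y_t) \quad\text{(stochastic Nesterov)}.$$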
no code implementations • 9 Dec 2015 • Tianbao Yang, Qihang Lin
We show that, when applied to a broad class of convex optimization problems, the RSG method can find an $\epsilon$-optimal solution with a lower complexity than the SG method.
no code implementations • 6 Oct 2015 • Tianbao Yang, Qihang Lin
In this paper, we show that simple Stochastic subGradient Descent methods with multiple restarting, named RSGD, can achieve a linear convergence rate for a class of non-smooth and non-strongly convex optimization problems where the epigraph of the objective function is a polyhedron, which we refer to as polyhedral convex optimization.
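The restarting idea can be sketched as below: repeatedly run a standard stochastic subgradient method for a fixed number of iterations, return its averaged iterate, and restart from that point with a geometrically decreased step size. The `sgd_run` helper and the halving schedule are illustrative assumptions; the analysis ties the schedule to the polyhedral structure of the problem.

```python
def rsgd(sgd_run, w0, eta0=1.0, t0=100, num_restarts=5):
    """Restarted stochastic subgradient descent (sketch)."""
    w, eta = w0, eta0
    for _ in range(num_restarts):
        # averaged iterate of t0 stochastic subgradient steps from w with step size eta
        w = sgd_run(w, eta, t0)
        eta /= 2          # shrink the step size after each restart
    return w
```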
no code implementations • 14 Aug 2015 • Adams Wei Yu, Qihang Lin, Tianbao Yang
We propose a doubly stochastic primal-dual coordinate optimization algorithm for empirical risk minimization, which can be formulated as a bilinear saddle-point problem.
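Concretely, regularized empirical risk minimization $\min_x \frac{1}{n}\sum_{i=1}^n \phi_i(a_i^\top x) + g(x)$ can be rewritten, via the conjugates $\phi_i^*$, as the bilinear saddle-point problem

$$\min_{x}\max_{y}\; \frac{1}{n}\sum_{i=1}^n \big( y_i\, a_i^\top x - \phi_i^*(y_i) \big) + g(x),$$

and a doubly stochastic method samples both a primal coordinate of $x$ and a dual coordinate $y_i$ at each iteration; the symbols here are generic rather than the paper's exact notation.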
no code implementations • 27 Jul 2015 • Jason D. Lee, Qihang Lin, Tengyu Ma, Tianbao Yang
We also prove a lower bound for the number of rounds of communication for a broad class of distributed first-order methods including the proposed algorithms in this paper.
no code implementations • 18 Jul 2015 • Tianbao Yang, Lijun Zhang, Qihang Lin, Rong Jin
In this paper, we study a fast approximation method for the large-scale, high-dimensional sparse least-squares regression problem by exploiting Johnson-Lindenstrauss (JL) transforms, which embed a set of high-dimensional vectors into a low-dimensional space.
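The compression step can be sketched as follows, using a dense Gaussian sketch and a plain (ridge) least-squares solve purely for illustration; the sparse, $\ell_1$-regularized setting studied in the paper is handled differently, and faster structured JL transforms are typically preferred at scale.

```python
import numpy as np

def sketched_least_squares(A, b, sketch_dim, lam=1e-6, rng=None):
    """Project the n x d system (A, b) down to sketch_dim rows with a random
    JL-style matrix S, then solve the much smaller least-squares problem."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = A.shape
    S = rng.standard_normal((sketch_dim, n)) / np.sqrt(sketch_dim)
    SA, Sb = S @ A, S @ b                      # compressed design and targets
    return np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)
```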
no code implementations • NeurIPS 2014 • Qihang Lin, Zhaosong Lu, Lin Xiao
We develop an accelerated randomized proximal coordinate gradient (APCG) method, for solving a broad class of composite convex optimization problems.
no code implementations • 13 Aug 2014 • Tianbao Yang, Rong Jin, Shenghuo Zhu, Qihang Lin
In this work, we study data preconditioning, a well-known and long-existing technique, for boosting the convergence of first-order methods for regularized loss minimization.
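A minimal sketch of the idea, assuming full whitening as the preconditioner (only one illustrative choice), is: rescale the feature matrix so the transformed problem is better conditioned, run the first-order method on the transformed data, and map the solution back.

```python
import numpy as np

def precondition(X):
    """Whiten the feature matrix X and return the transformed data plus the
    preconditioner P; a model w_pre trained on X_pre maps back via w = P @ w_pre."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    vals, vecs = np.linalg.eigh(cov)
    P = vecs @ np.diag(vals ** -0.5) @ vecs.T   # inverse square root of the covariance
    return (X - mu) @ P, P
```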
no code implementations • 12 Mar 2014 • Xi Chen, Qihang Lin, Dengyong Zhou
In crowd labeling, a large number of unlabeled data instances are outsourced to a crowd of workers.
no code implementations • 19 Apr 2013 • Jianhui Chen, Tianbao Yang, Qihang Lin, Lijun Zhang, Yi Chang
We consider stochastic strongly convex optimization with a complex inequality constraint.
no code implementations • NeurIPS 2012 • Xi Chen, Qihang Lin, Javier Pena
We develop a novel algorithm based on the regularized dual averaging (RDA) method that can simultaneously achieve the optimal convergence rates for both convex and strongly convex losses.