no code implementations • NAACL 2022 • Si-An Chen, Jie-Jyun Liu, Tsung-Han Yang, Hsuan-Tien Lin, Chih-Jen Lin
The power and the potential of deep learning models attract many researchers to design advanced and sophisticated architectures.
no code implementations • 23 Oct 2023 • Vo Nguyen Le Duy, Hsuan-Tien Lin, Ichiro Takeuchi
We propose a novel statistical method for testing the results of anomaly detection (AD) under domain adaptation (DA), which we call CAD-DA -- controllable AD under DA.
1 code implementation • 29 Aug 2023 • Wei-Chao Cheng, Tan-Ha Mai, Hsuan-Tien Lin
Traditionally, the well-known synthetic minority oversampling technique (SMOTE), a data-mining approach to imbalanced learning through data augmentation, has been used to improve this generalization.
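For illustration, here is a minimal sketch of the classic SMOTE idea referenced above (not the paper's own method): synthesize new minority-class points by interpolating between a sample and one of its nearest minority-class neighbours.

```python
import numpy as np

def smote(X, n_new, k=5, rng=None):
    """Minimal SMOTE sketch: create n_new synthetic minority samples by
    interpolating each chosen sample toward one of its k nearest
    minority-class neighbours."""
    rng = np.random.default_rng(rng)
    X = np.asarray(X, dtype=float)
    # pairwise distances within the minority class
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # a point is not its own neighbour
    neighbours = np.argsort(d, axis=1)[:, :k]
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X))             # pick a minority sample
        j = rng.choice(neighbours[i])        # pick one of its neighbours
        lam = rng.random()                   # interpolation factor in [0, 1]
        synthetic.append(X[i] + lam * (X[j] - X[i]))
    return np.stack(synthetic)
```

Because each synthetic point is a convex combination of two real points, it always lies on the segment between them.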
no code implementations • 9 Jul 2023 • Paul Kuo-Ming Huang, Si-An Chen, Hsuan-Tien Lin
Score-based generative models (SGMs) are a popular family of deep generative models that achieve leading image generation quality.
1 code implementation • 15 Jun 2023 • Po-Yi Lu, Chun-Liang Li, Hsuan-Tien Lin
Active learning is a paradigm that significantly enhances the performance of machine learning models when acquiring labeled data is expensive.
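As a generic illustration of the active-learning paradigm (not this paper's specific strategy), the most common baseline is uncertainty sampling: query the unlabeled examples the current model is least confident about.

```python
import numpy as np

def uncertainty_sampling(proba, n_query):
    """Pick the n_query unlabeled examples with the least-confident
    predictions (smallest maximum class probability).
    `proba` is an (n_samples, n_classes) array from any probabilistic model."""
    confidence = proba.max(axis=1)
    return np.argsort(confidence)[:n_query]  # indices of least-confident samples
```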
1 code implementation • 23 May 2023 • Oscar Chew, Hsuan-Tien Lin, Kai-Wei Chang, Kuan-Hao Huang
Recent research has revealed that machine learning models have a tendency to leverage spurious correlations that exist in the training set but may not hold true in general circumstances.
1 code implementation • 15 May 2023 • Hsiu-Hsuan Wang, Wei-I Lin, Hsuan-Tien Lin
Through extensive benchmark experiments, we discovered a notable decline in performance when transitioning from synthetic datasets to real-world datasets.
no code implementations • 15 May 2023 • Wei-I Lin, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama
Our analysis reveals that the efficiency of implicit label sharing is closely related to the performance of existing CLL models.
no code implementations • 1 Mar 2023 • Xiwei Xuan, Ziquan Deng, Hsuan-Tien Lin, Zhaodan Kong, Kwan-Liu Ma
Researchers have proposed various methods for visually interpreting the Convolutional Neural Network (CNN) via saliency maps, which include Class-Activation-Map (CAM) based approaches as a leading family.
1 code implementation • CVPR 2023 • Yu-Chu Yu, Hsuan-Tien Lin
Semi-Supervised Domain Adaptation (SSDA) involves learning to classify unseen target data from a few labeled and many unlabeled target examples, along with many labeled source examples from a related domain.
1 code implementation • 21 Oct 2022 • Andrew Bai, Cho-Jui Hsieh, Wendy Kan, Hsuan-Tien Lin
In this paper, we propose memorization rejection, a training scheme that rejects generated samples that are near-duplicates of training samples during training.
no code implementations • 20 Sep 2022 • Wei-I Lin, Hsuan-Tien Lin
In this paper, we sidestep those limitations with a novel perspective: reduction to probability estimates of complementary classes.
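To give a flavour of this perspective (a simplified sketch, not necessarily the paper's exact decoding rule): under the common uniform assumption, the class least likely to be the complementary ("not-the-true") label is the most likely true label, so one can decode predictions directly from complementary-class probability estimates.

```python
import numpy as np

def decode_from_complementary(comp_proba):
    """Predict the class that is least likely to be the complementary label.
    `comp_proba` is an (n_samples, n_classes) array of estimated
    probabilities that each class is NOT the true label."""
    return np.argmin(comp_proba, axis=1)
```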
no code implementations • 3 Nov 2021 • Si-An Chen, Chun-Liang Li, Hsuan-Tien Lin
To improve GAN in terms of model compatibility, we propose Boundary-Calibration GANs (BCGANs), which leverage the boundary information from a set of pre-trained classifiers using the original data.
1 code implementation • NeurIPS 2021 • Si-An Chen, Chun-Liang Li, Hsuan-Tien Lin
Conditional Generative Adversarial Networks (cGANs) are implicit generative models that allow sampling from class-conditional distributions.
no code implementations • 29 Sep 2021 • Cheng-Yu Hsieh, Wei-I Lin, Miao Xu, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama
The goal of multi-label learning (MLL) is to associate a given instance with its relevant labels from a set of concepts.
1 code implementation • 6 Jun 2021 • Ching-Yuan Bai, Hsuan-Tien Lin, Colin Raffel, Wendy Chih-wen Kan
Many recent developments on generative models for natural images have relied on heuristically-motivated metrics that can be easily gamed by memorizing a small sample from the true distribution or training a model directly to improve the metric.
no code implementations • 16 Feb 2021 • Ashesh, Buo-Fu Chen, Treng-Shi Huang, Boyo Chen, Chia-Tung Chang, Hsuan-Tien Lin
We propose a new deep learning model for precipitation nowcasting that includes both the discrimination and attention techniques.
no code implementations • 1 Jan 2021 • Ching-Yuan Bai, Hsuan-Tien Lin, Colin Raffel, Wendy Kan
Many recent developments on generative models for natural images have relied on heuristically-motivated metrics that can be easily gamed by memorizing a small sample from the true distribution or training a model directly to improve the metric.
1 code implementation • ICLR 2021 • Yu-Ying Chou, Hsuan-Tien Lin, Tyng-Luh Liu
In addition, to break the limit of training with images only from seen classes, we design a generative scheme to simultaneously generate virtual class labels and their visual features by sampling and interpolating over seen counterparts.
no code implementations • 1 Jan 2021 • Chia-You Chen, Hsuan-Tien Lin, Gang Niu, Masashi Sugiyama
One is to (pre-)train a classifier with examples from known classes, and then transfer the pre-trained classifier to unknown classes using the new examples.
1 code implementation • EMNLP 2020 • Michelle Yuan, Hsuan-Tien Lin, Jordan Boyd-Graber
Typically, the active learning strategy is contingent on the classification model.
1 code implementation • 15 Sep 2020 • Ashesh, Chu-Song Chen, Hsuan-Tien Lin
Technically, the gaze information can be inferred from two different magnification levels: face orientation and eye orientation.
no code implementations • ICML 2020 • Yu-Ting Chou, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama
In weakly supervised learning, the unbiased risk estimator (URE) is a powerful tool for training classifiers when training and test data are drawn from different distributions.
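One classic instance of an unbiased risk estimator, shown here as a generic sketch rather than this paper's construction, is importance weighting under covariate shift: reweighting each training loss by the density ratio w(x) = p_test(x) / p_train(x) makes the empirical risk unbiased for the test risk.

```python
import numpy as np

def iw_risk(losses, weights):
    """Importance-weighted empirical risk: unbiased for the test risk when
    weights equal the density ratio p_test(x) / p_train(x) (assumed known)."""
    return float(np.mean(weights * losses))

# Toy check of unbiasedness: train x ~ N(0,1), test x ~ N(1,1), loss = x^2.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, 200_000)
w = np.exp(x - 0.5)        # density ratio of N(1,1) over N(0,1)
est = iw_risk(x**2, w)     # should be near E_test[x^2] = 1^2 + 1 = 2
```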
1 code implementation • 24 May 2020 • Chi-Chang Lee, Yu-Chen Lin, Hsuan-Tien Lin, Hsin-Min Wang, Yu Tsao
The results verify that the SERIL model can effectively adjust itself to new noise environments while overcoming the catastrophic forgetting issue.
no code implementations • 29 Oct 2019 • Kuen-Han Tsai, Hsuan-Tien Lin
The problem of learning from label proportions (LLP) involves training classifiers with weak labels on bags of instances, rather than strong labels on individual instances.
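A common LLP training signal, sketched here generically (not necessarily this paper's objective), is a proportion-matching loss: penalize the gap between a bag's average predicted class probabilities and the bag's known label proportions.

```python
import numpy as np

def proportion_loss(proba, bag_proportions):
    """LLP sketch: squared error between a bag's average predicted class
    probabilities and the bag's known label proportions.
    `proba` is (bag_size, n_classes); `bag_proportions` is (n_classes,)."""
    avg_pred = proba.mean(axis=0)  # bag-level average prediction
    return float(np.mean((avg_pred - bag_proportions) ** 2))
```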
1 code implementation • 25 Sep 2019 • Ching-Yuan Bai, Buo-Fu Chen, Hsuan-Tien Lin
In addition, the human-driven nature of such an approach makes it difficult to reproduce and benchmark prediction models.
no code implementations • 25 Sep 2019 • Chien-Min Yu, Hsuan-Tien Lin
Deep reinforcement learning agents are known to be vulnerable to adversarial attacks.
no code implementations • ICLR Workshop LLD 2019 • Cheng-Yu Hsieh, Miao Xu, Gang Niu, Hsuan-Tien Lin, Masashi Sugiyama
To address the need, we propose a special weakly supervised MLL problem that not only focuses on the situation of limited fine-grained supervision but also leverages the hierarchical relationship between the coarse concepts and the fine-grained ones.
no code implementations • 6 Dec 2018 • Si-An Chen, Voot Tangkaratt, Hsuan-Tien Lin, Masashi Sugiyama
In this work, we propose Active Reinforcement Learning with Demonstration (ARLD), a new framework to streamline RL in terms of demonstration efforts by allowing the RL agent to query for demonstration actively during training.
no code implementations • NeurIPS 2018 • Yu-Shao Peng, Kai-Fu Tang, Hsuan-Tien Lin, Edward Chang
This paper proposes REFUEL, a reinforcement learning method with two techniques, reward shaping and feature rebuilding, to improve the performance of online symptom checking for disease diagnosis.
1 code implementation • 5 Feb 2018 • Yao-Yuan Yang, Yi-An Lin, Hong-Min Chu, Hsuan-Tien Lin
Extracting the hidden correlation is generally a challenging task.
1 code implementation • 2 Dec 2017 • Yong-Siang Shih, Kai-Yueh Chang, Hsuan-Tien Lin, Min Sun
In our learned space, we introduce a novel Projected Compatibility Distance (PCD) function which is differentiable and ensures diversity by aiming for at least one prototype to be close to a compatible item, whereas none of the prototypes are close to an incompatible item.
no code implementations • 14 Nov 2017 • Hong-Min Chu, Kuan-Hao Huang, Hsuan-Tien Lin
The foundation of CS-DPP is an online LSDR framework derived from a leading LSDR algorithm.
no code implementations • 26 Oct 2017 • Te-Kang Jan, Da-Wei Wang, Chi-Hung Lin, Hsuan-Tien Lin
Many real-world data mining applications need varying cost for different types of classification errors and thus call for cost-sensitive classification algorithms.
5 code implementations • 1 Oct 2017 • Yao-Yuan Yang, Shao-Chuan Lee, Yu-An Chung, Tung-En Wu, Si-An Chen, Hsuan-Tien Lin
libact is a Python package designed to make active learning easier for general users.
no code implementations • 24 Aug 2017 • Wei-Yuan Shen, Hsuan-Tien Lin
In this paper, we develop a novel active sampling scheme within the pair-wise approach to conduct bipartite ranking efficiently.
1 code implementation • 29 Nov 2016 • Yao-Yuan Yang, Kuan-Hao Huang, Chih-Wei Chang, Hsuan-Tien Lin
Label space expansion for multi-label classification (MLC) is a methodology that encodes the original label vectors to higher dimensional codes before training and decodes the predicted codes back to the label vectors during testing.
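The encode-train-decode pipeline described here can be sketched with a toy linear code (a hypothetical illustration; the paper's actual encoding differs): project label vectors into a higher-dimensional code space, and decode a predicted code back to the nearest candidate label vector.

```python
import numpy as np

def encode(Y, P):
    """Encode binary label vectors (n, L) into codes (n, M) via projection P (L, M)."""
    return Y @ P

def decode(codes, P, candidates):
    """Nearest-neighbour decoding: for each code, return the candidate label
    vector whose encoding is closest in the code space."""
    enc = candidates @ P
    d = np.linalg.norm(codes[:, None, :] - enc[None, :, :], axis=-1)
    return candidates[np.argmin(d, axis=1)]
```

With a generic (e.g. random) projection, distinct label vectors map to distinct codes, so exact codes decode back to the original labels.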
no code implementations • 16 Nov 2016 • Yu-An Chung, Shao-Wen Yang, Hsuan-Tien Lin
While deep neural networks have succeeded in several visual applications, such as object recognition, detection, and localization, by reaching very high classification accuracies, many real-world applications demand varying costs for different types of misclassification errors, thus requiring cost-sensitive classification algorithms.
no code implementations • 2 Aug 2016 • Hong-Min Chu, Hsuan-Tien Lin
Empirical studies demonstrate that the learned experience not only is competitive with existing strategies on most single datasets, but also can be transferred across datasets to improve the performance on future learning tasks.
no code implementations • 12 Jul 2016 • Chih-Kuan Yeh, Hsuan-Tien Lin
Existing artificial intelligence systems for bridge bidding rely on and are thus restricted by human-designed bidding systems or features.
2 code implementations • 30 Mar 2016 • Kuan-Hao Huang, Hsuan-Tien Lin
Furthermore, extensive experimental results demonstrate that CLEMS is significantly better than a wide spectrum of existing LE algorithms and state-of-the-art cost-sensitive algorithms across different cost functions.
no code implementations • 30 Nov 2015 • Yu-An Chung, Hsuan-Tien Lin, Shao-Wen Yang
Deep learning has been one of the most prominent machine learning techniques nowadays, being the state-of-the-art on a broad range of applications where automatic feature extraction is needed.
no code implementations • 4 Jun 2015 • Chun-Liang Li, Hsuan-Tien Lin, Chi-Jen Lu
In this paper, we analyze the convergence rate of a representative algorithm with decayed learning rate (Oja and Karhunen, 1985) in the first family for the general $k>1$ case.
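The Oja-style streaming update analyzed here can be sketched for the simplest k = 1 case (a minimal illustration with a 1/t-decayed learning rate, not the paper's analysis):

```python
import numpy as np

def oja_top_component(X, n_epochs=50, eta0=0.5, rng=None):
    """Oja's stochastic rule for the top principal component (k = 1 case)
    with decayed learning rate eta_t = eta0 / t."""
    rng = np.random.default_rng(rng)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    t = 0
    for _ in range(n_epochs):
        for x in X[rng.permutation(len(X))]:
            t += 1
            y = x @ w
            w += (eta0 / t) * y * (x - y * w)  # Oja update
            w /= np.linalg.norm(w)             # renormalize for stability
    return w
```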
no code implementations • NeurIPS 2012 • Yao-Nan Chen, Hsuan-Tien Lin
In addition, the approach can be extended to a kernelized version that allows the use of sophisticated feature combinations to assist LSDR.