no code implementations • 13 May 2024 • Mu-Huan Miles Chung, Sharon Li, Jaturong Kongmanee, Lu Wang, Yuhong Yang, Calvin Giang, Khilan Jerath, Abhay Raman, David Lie, Mark Chignell
We also recommend that the information-gain-maximizing sampling method (based on expert confidence) should be used in early stages of Active Learning, provided that well-calibrated confidence can be obtained.
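One common way to realize an information-gain-maximizing sampling rule is to pick the unlabeled example whose predicted label distribution has maximum entropy. A minimal sketch under that assumption (function names are ours, not the paper's):

```python
import numpy as np

def entropy(p):
    # Shannon entropy (in nats) of a Bernoulli prediction
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def pick_most_informative(pred_probs):
    """Return the index of the unlabeled sample with the highest
    predictive entropy (i.e., maximum expected information gain).
    Assumes pred_probs are well-calibrated confidences."""
    return int(np.argmax(entropy(np.asarray(pred_probs))))

# The prediction closest to 0.5 is the most uncertain
print(pick_most_informative([0.95, 0.51, 0.10]))  # → 1
```

As the paper's caveat suggests, this criterion is only as good as the calibration of the confidences it ranks.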
no code implementations • 13 May 2024 • Jiawei Zhang, Yuhong Yang, Jie Ding
It is quite popular nowadays for researchers and data analysts holding different datasets to seek assistance from each other to enhance their modeling performance.
no code implementations • 29 Feb 2024 • Juexiao Feng, Yuhong Yang, Yanchun Xie, Yaqian Li, Yandong Guo, Yuchen Guo, Yuwei He, Liuyu Xiang, Guiguang Ding
In recent years, deep learning-based object detection has developed rapidly.
1 code implementation • 26 Dec 2023 • Mengyao Lyu, Yuhong Yang, Haiwen Hong, Hui Chen, Xuan Jin, Yuan He, Hui Xue, Jungong Han, Guiguang Ding
The prevalent use of commercial and open-source diffusion models (DMs) for text-to-image generation prompts risk mitigation to prevent undesired behaviors.
no code implementations • 3 Nov 2023 • Xinmeng Xu, Yuhong Yang, Weiping Tu
To overcome this limitation, we introduce a strategy to map monaural speech into a fixed simulation space for better differentiation between target speech and noise.
no code implementations • 19 Sep 2023 • Hongyang Chen, Yuhong Yang, Qingmu Liu, Baifeng Li, Weiping Tu, Song Lin
We then compare natural and grid sentences in terms of the Lombard effect and Normal-to-Lombard conversion, using LCT and the Enhanced MAndarin Lombard Grid corpus (EMALG).
no code implementations • 28 Jul 2023 • Xinmeng Xu, Weiping Tu, Yuhong Yang
Convolutional neural networks (CNNs) and Transformers have achieved wide success in multimedia applications.
no code implementations • 26 Jul 2023 • Chang Han, Xinmeng Xu, Weiping Tu, Yuhong Yang, Yajie Liu
We observe that besides target positive information, e.g., ground-truth speech and features, target negative information, such as interference signals and features, helps make the patterns of target speech and interference signals more discriminative.
no code implementations • 26 Apr 2023 • Xinmeng Xu, Weiping Tu, Chang Han, Yuhong Yang
In this study, we propose an SE model that integrates both speech positive and negative information to improve SE performance by adopting contrastive learning, which incorporates two innovations.
no code implementations • 1 Mar 2023 • Mu-Huan Chung, Lu Wang, Sharon Li, Yuhong Yang, Calvin Giang, Khilan Jerath, Abhay Raman, David Lie, Mark Chignell
In this paper, we present research results on applying active learning to anomaly detection in redacted emails, comparing the utility of different methods for implementing active learning in this context.
1 code implementation • ICLR 2023 • Enmao Diao, Ganghua Wang, Jiawei Zhang, Yuhong Yang, Jie Ding, Vahid Tarokh
Our extensive experiments corroborate the hypothesis that for a generic pruning procedure, PQI decreases first when a large model is being effectively regularized and then increases when its compressibility reaches a limit that appears to correspond to the beginning of underfitting.
no code implementations • 7 Dec 2022 • Xinmeng Xu, Weiping Tu, Yuhong Yang
Attention mechanisms, such as local and non-local attention, play a fundamental role in recent deep learning based speech enhancement (SE) systems.
no code implementations • 2 Dec 2022 • Xinmeng Xu, Weiping Tu, Yuhong Yang
To address this issue, we inject spatial information into the monaural SE model and propose a knowledge distillation strategy that enables the monaural SE model to learn binaural speech features from the binaural SE model, allowing it to reconstruct enhanced speech with higher intelligibility and quality under low signal-to-noise-ratio (SNR) conditions.
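A teacher-to-student distillation objective of this kind is typically a task loss plus a feature-matching term. A generic NumPy sketch (all names are ours; the paper's actual models are neural SE networks and its exact losses differ):

```python
import numpy as np

def distillation_loss(student_feat, teacher_feat, enhanced, clean, alpha=0.5):
    """Generic knowledge-distillation objective: task loss on the
    enhanced speech plus an MSE term pulling the monaural student's
    features toward the binaural teacher's. `alpha` is a hypothetical
    weight balancing the two terms."""
    task = np.mean((enhanced - clean) ** 2)              # enhancement loss
    match = np.mean((student_feat - teacher_feat) ** 2)  # feature matching
    return (1 - alpha) * task + alpha * match
```

Only the student (monaural) model is used at inference time; the teacher's binaural features guide training.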
no code implementations • 11 Jun 2022 • Wenjing Yang, Ganghua Wang, Jie Ding, Yuhong Yang
One problem is understanding whether a network is more compressible than another of the same structure.
no code implementations • 14 Sep 2021 • Jiawei Zhang, Jie Ding, Yuhong Yang
A standard approach is to find the globally best modeling method from a set of candidate methods.
no code implementations • 13 Jul 2021 • Giuseppe Cavaliere, Zeng-Hua Lu, Anders Rahbek, Yuhong Yang
We show that our tests perform as well as or better than existing score tests for joint testing, with the added benefit of simultaneously testing individual elements of the parameter of interest.
1 code implementation • 4 Mar 2021 • Baojin Huang, Zhongyuan Wang, Guangcheng Wang, Kui Jiang, Kangli Zeng, Zhen Han, Xin Tian, Yuhong Yang
In particular, we first collect a variety of glasses and masks as occlusions, and randomly combine the occlusion attributes (occlusion objects, textures, and colors) to generate a large number of more realistic occlusion types.
no code implementations • 26 May 2020 • Sakshi Arya, Yuhong Yang
In randomized strategies, the extent of exploration-exploitation is controlled by a user-determined exploration probability sequence.
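A standard instance of such a randomized strategy is epsilon-greedy with a user-chosen, decaying exploration sequence. A minimal sketch (the schedule `eps0 * t**-decay` is a hypothetical example, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def eps_greedy_pull(t, means_est, eps0=1.0, decay=0.5):
    """One step of a randomized bandit strategy in which the
    exploration probability eps_t = min(1, eps0 * t**-decay)
    is a user-determined sequence decreasing in round t."""
    eps_t = min(1.0, eps0 * t ** (-decay))
    if rng.random() < eps_t:
        return int(rng.integers(len(means_est)))  # explore: uniform arm
    return int(np.argmax(means_est))              # exploit: best estimate
```

Early rounds (large eps_t) favor exploration; as t grows, eps_t shrinks and the strategy increasingly exploits the empirically best arm.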
1 code implementation • 8 Nov 2019 • Jiawei Zhang, Jie Ding, Yuhong Yang
For testing parametric classification models, the BAGofT has a broader scope than existing methods since it is not restricted to specific parametric models (e.g., logistic regression).
no code implementations • 3 Feb 2019 • Sakshi Arya, Yuhong Yang
We study a multi-armed bandit problem with covariates in a setting where there is a possible delay in observing the rewards.
no code implementations • 22 Oct 2018 • Jie Ding, Vahid Tarokh, Yuhong Yang
In the era of big data, analysts usually explore various statistical models or machine learning methods for observed data in order to facilitate scientific discoveries or gain predictive power.
no code implementations • 11 Aug 2015 • Jie Ding, Vahid Tarokh, Yuhong Yang
When the data is generated from a finite order autoregression, the Bayesian information criterion is known to be consistent, and so is the new criterion.
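As a minimal illustration of consistent order selection by an information criterion, one can fit AR(k) models by least squares and penalize by k·log(n). This sketch uses the standard BIC, not the paper's new criterion, and the function name is ours:

```python
import numpy as np

def select_ar_order_bic(x, max_order=5):
    """Pick the AR order minimizing BIC = n*log(RSS/n) + k*log(n),
    with each AR(k) model fit by ordinary least squares."""
    x = np.asarray(x, dtype=float)
    best_k, best_bic = 1, np.inf
    for k in range(1, max_order + 1):
        # design matrix whose column j holds lag-(j+1) values
        X = np.column_stack(
            [x[k - j - 1 : len(x) - j - 1] for j in range(k)]
        )
        y = x[k:]
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        resid = y - X @ beta
        n = len(y)
        bic = n * np.log(resid @ resid / n) + k * np.log(n)
        if bic < best_bic:
            best_k, best_bic = k, bic
    return best_k
```

For data truly generated by a finite-order autoregression, the log(n) penalty makes this selector consistent: it recovers the true order with probability tending to one as n grows.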