1 code implementation • 11 Jun 2023 • Josephine Lamp, Yuxin Wu, Steven Lamp, Prince Afriyie, Kenneth Bilchick, Lu Feng, Sula Mazimba
To address these limitations, this paper presents CARNA, a hemodynamic risk stratification and phenotyping framework for advanced HF that takes advantage of the explainability and expressivity of machine learned Multi-Valued Decision Diagrams (MVDDs).
1 code implementation • 5 Dec 2022 • Zhicheng Ren, Yifu Yuan, Yuxin Wu, Xiaxuan Gao, Yewen Wang, Yizhou Sun
The existing Active Graph Embedding framework uses centrality, density, and entropy scores to evaluate the value of unlabeled nodes, and it has been shown to improve node classification with Graph Convolutional Networks.
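The scoring idea above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the equal default weights, and the toy inputs are all assumptions; only the use of centrality, density, and prediction entropy to rank unlabeled nodes comes from the text.

```python
import numpy as np

def entropy_score(probs):
    """Prediction entropy of each unlabeled node; higher means the
    GCN is more uncertain about that node.

    probs: (num_nodes, num_classes) softmax outputs.
    """
    eps = 1e-12  # guard against log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def select_nodes(probs, centrality, density, k, weights=(1.0, 1.0, 1.0)):
    """Combine entropy, centrality, and density scores (the equal
    weighting here is hypothetical) and pick the top-k nodes to label."""
    w_e, w_c, w_d = weights
    score = w_e * entropy_score(probs) + w_c * centrality + w_d * density
    return np.argsort(-score)[:k]

# Toy example: node 1 is confidently classified but highly central.
probs = np.array([[0.5, 0.5], [0.99, 0.01], [0.6, 0.4]])
centrality = np.array([0.2, 0.9, 0.1])
density = np.array([0.3, 0.3, 0.3])
picked = select_nodes(probs, centrality, density, k=1)
```

In this toy setting the high centrality of node 1 outweighs its low prediction entropy, so it is queried first.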
1 code implementation • 17 May 2021 • Yuxin Wu, Justin Johnson
BatchNorm is a critical building block in modern convolutional neural networks.
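For context, the normalization step BatchNorm performs can be sketched in a few lines. This is a generic textbook forward pass, not the paper's code; the paper's subject is precisely the subtleties (e.g. train-time batch statistics vs. inference-time running averages) that this simple version glosses over.

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each channel using statistics of the current mini-batch.

    x: (N, C) activations; gamma, beta: (C,) learned scale and shift.
    At inference, running averages of mean/var replace the batch
    statistics, which is one source of the train/test discrepancies
    the paper examines.
    """
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
y = batch_norm(x, gamma=np.ones(2), beta=np.zeros(2))
```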
14 code implementations • CVPR 2020 • Alexander Kirillov, Yuxin Wu, Kaiming He, Ross Girshick
We present a new method for efficient high-quality image segmentation of objects and scenes.
Ranked #3 on Instance Segmentation on COCO 2017 val
45 code implementations • CVPR 2020 • Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross Girshick
This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning.
Ranked #11 on Contrastive Learning on imagenet-1k
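The two mechanisms behind that on-the-fly dictionary can be sketched as below: a slowly moving key encoder (updated by momentum toward the query encoder) and a fixed-size FIFO queue of key features. This is an illustrative sketch assuming NumPy arrays in place of real network weights; the class and function names are hypothetical, though the momentum-update rule and queue structure follow the MoCo design.

```python
import numpy as np

def momentum_update(key_params, query_params, m=0.999):
    """Move key-encoder weights slowly toward the query encoder,
    keeping the dictionary's keys consistent over time."""
    return [m * k + (1 - m) * q for k, q in zip(key_params, query_params)]

class FeatureQueue:
    """Fixed-size FIFO dictionary of key features: new mini-batches
    are enqueued, the oldest keys are overwritten."""
    def __init__(self, dim, size):
        self.feats = np.zeros((size, dim))
        self.ptr = 0
        self.size = size

    def enqueue(self, keys):
        n = len(keys)
        idx = (self.ptr + np.arange(n)) % self.size  # wrap around
        self.feats[idx] = keys
        self.ptr = (self.ptr + n) % self.size

# Toy usage: one momentum step, then fill a size-4 queue past capacity.
new_key = momentum_update([np.array([1.0])], [np.array([0.0])], m=0.9)
queue = FeatureQueue(dim=2, size=4)
queue.enqueue(np.ones((3, 2)))
queue.enqueue(2 * np.ones((2, 2)))
```

Decoupling the dictionary size (the queue) from the mini-batch size is what makes the dictionary both large and affordable.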
1 code implementation • ICCV 2019 • Yi Wu, Yuxin Wu, Aviv Tamar, Stuart Russell, Georgia Gkioxari, Yuandong Tian
We introduce a new memory architecture, Bayesian Relational Memory (BRM), to improve the generalization ability for semantic visual navigation agents in unseen environments, where an agent is given a semantic target to navigate towards.
2 code implementations • CVPR 2019 • Cihang Xie, Yuxin Wu, Laurens van der Maaten, Alan Yuille, Kaiming He
This study suggests that adversarial perturbations on images lead to noise in the features constructed by these networks.
Ranked #1 on Adversarial Defense on CAAD 2018
2 code implementations • 14 Nov 2018 • Amy Zhang, Yuxin Wu, Joelle Pineau
While current benchmark reinforcement learning (RL) tasks have been useful to drive progress in the field, they are in many ways poor substitutes for learning with real-world data.
no code implementations • ICLR 2019 • Yi Wu, Yuxin Wu, Aviv Tamar, Stuart Russell, Georgia Gkioxari, Yuandong Tian
Building deep reinforcement learning agents that can generalize and adapt to unseen environments remains a fundamental challenge for AI.
18 code implementations • ECCV 2018 • Yuxin Wu, Kaiming He
FAIR's research platform for object detection, implementing popular algorithms like Mask R-CNN and RetinaNet.
Ranked #140 on Object Detection on COCO minival
5 code implementations • ICLR 2018 • Yi Wu, Yuxin Wu, Georgia Gkioxari, Yuandong Tian
To generalize to unseen environments, an agent needs to be robust to low-level variations (e.g. color, texture, object changes), and also high-level variations (e.g. layout changes of the environment).
2 code implementations • NeurIPS 2017 • Yuandong Tian, Qucheng Gong, Wenling Shang, Yuxin Wu, C. Lawrence Zitnick
In addition, our platform is flexible in terms of environment-agent communication topologies, choices of RL methods, and changes in game parameters, and it can host existing C/C++-based game environments like the Arcade Learning Environment.
2 code implementations • 30 Nov 2016 • Qinyao He, He Wen, Shuchang Zhou, Yuxin Wu, Cong Yao, Xinyu Zhou, Yuheng Zou
In addition, we propose balanced quantization methods for weights to further reduce performance degradation.
12 code implementations • 20 Jun 2016 • Shuchang Zhou, Yuxin Wu, Zekun Ni, Xinyu Zhou, He Wen, Yuheng Zou
We propose DoReFa-Net, a method to train convolutional neural networks that have low-bitwidth weights and activations using low-bitwidth parameter gradients.
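The core k-bit quantizer can be sketched as below: values in [0, 1] are rounded to one of 2^k - 1 evenly spaced levels, and weights are first squashed into that range with tanh. This follows the quantization scheme described in the DoReFa-Net paper, but the function names are hypothetical and the sketch omits the straight-through gradient estimator needed for actual training.

```python
import numpy as np

def quantize_k(x, k):
    """Quantize x in [0, 1] to k bits: round to the nearest of
    2^k - 1 evenly spaced levels."""
    n = 2 ** k - 1
    return np.round(x * n) / n

def quantize_weights(w, k):
    """DoReFa-style k-bit weight quantization: squash weights into
    [0, 1] with tanh, quantize, then map back to [-1, 1]."""
    t = np.tanh(w)
    x = t / (2 * np.max(np.abs(t))) + 0.5
    return 2 * quantize_k(x, k) - 1

# 1-bit quantization snaps values to {0, 1}; 2-bit weights use 4 levels.
levels = quantize_k(np.array([0.0, 0.4, 1.0]), k=1)
wq = quantize_weights(np.array([-2.0, 0.0, 2.0]), k=2)
```

During training the rounding step is treated as the identity in the backward pass (the straight-through estimator), which is what makes low-bitwidth gradients feasible.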
no code implementations • 31 Dec 2015 • Shuchang Zhou, Jia-Nan Wu, Yuxin Wu, Xinyu Zhou
In this paper, we propose and study a technique to reduce the number of parameters and computation time in convolutional neural networks.
no code implementations • 30 Jul 2015 • Shuchang Zhou, Yuxin Wu
In this paper we propose and study a technique to impose structural constraints on the output of a neural network, which can reduce the amount of computation and the number of parameters, in addition to improving prediction accuracy, when the output is known to approximately conform to a low-rankness prior.
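One simple way to impose such a low-rankness constraint is to factor the output layer into two thin matrices, so the output always lies in a rank-r subspace. This is an illustrative sketch, not the paper's exact construction; the dimensions and names are assumptions chosen to show the parameter saving.

```python
import numpy as np

rng = np.random.default_rng(0)

def low_rank_layer(h, U, V):
    """Output layer factored as U @ V with rank r: the output is
    constrained to a rank-r subspace, cutting parameters from
    d_out * d_in down to r * (d_out + d_in)."""
    return U @ (V @ h)

d_in, d_out, r = 64, 256, 8
U = rng.standard_normal((d_out, r))   # (d_out, r) factor
V = rng.standard_normal((r, d_in))    # (r, d_in) factor
h = rng.standard_normal(d_in)
y = low_rank_layer(h, U, V)
```

Here the factored layer stores r * (d_in + d_out) = 2560 parameters instead of the 16384 a dense d_out-by-d_in matrix would need.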