no code implementations • 25 Mar 2024 • Guangqian Yang, Kangrui Du, Zhihan Yang, Ye Du, Yongping Zheng, Shujun Wang
Our proposed framework is built on a masked Vim autoencoder that learns a unified multi-modal representation and captures the long-range dependencies contained in 3D medical images.
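The masked-autoencoder idea behind this framework can be sketched in a few lines: hide most input patches, encode only the visible ones, and reconstruct the hidden ones, which pressures the representation to capture global structure. The linear encoder/decoder, mean-pooled context, and 75% mask ratio below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
n_patches, dim = 32, 16
patches = rng.standard_normal((n_patches, dim))  # stand-in for image patches

mask = rng.random(n_patches) < 0.75   # hide ~75% of the patches
visible = patches[~mask]              # encoder only sees the visible subset

w_enc = rng.standard_normal((dim, dim)) * 0.1  # hypothetical encoder weights
w_dec = rng.standard_normal((dim, dim)) * 0.1  # hypothetical decoder weights
latent = np.tanh(visible @ w_enc)

# Decoder predicts every patch from the pooled visible context.
context = latent.mean(axis=0)
recon = np.tile(context @ w_dec, (n_patches, 1))

# The reconstruction loss is computed only on the masked patches.
loss = ((recon[mask] - patches[mask]) ** 2).mean()
```

Training would minimize `loss` with respect to the encoder/decoder weights; the trained encoder then supplies the learned representation.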
no code implementations • 20 Nov 2023 • Zhihan Yang, Zhiming Cheng, Tengjin Weng, Shucheng He, Yaqi Wang, Xin Ye, Shuai Wang
Specifically, we design a Patch Message Passing (PMP) module based on the message-passing mechanism to establish global interaction among pathological semantic features and to further exploit the subtle differences between diseases.
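A minimal sketch of global message passing over patch features, loosely following the PMP idea: every patch aggregates messages from all other patches so that globally distributed pathological cues can interact. The shapes, the mean aggregation rule, and the residual update here are illustrative assumptions, not the module's actual design.

```python
import numpy as np

def patch_message_passing(patches, w_msg, w_update):
    """One global message-passing step. patches: (N, D) feature vectors."""
    messages = patches @ w_msg                  # per-patch outgoing message
    total = messages.sum(axis=0, keepdims=True)
    n = patches.shape[0]
    aggregated = (total - messages) / (n - 1)   # mean over all *other* patches
    # Residual update preserves each patch's original feature.
    return patches + np.tanh(aggregated @ w_update)

rng = np.random.default_rng(0)
p = rng.standard_normal((16, 32))               # 16 patches, 32-dim features
w1 = rng.standard_normal((32, 32)) * 0.1
w2 = rng.standard_normal((32, 32)) * 0.1
out = patch_message_passing(p, w1, w2)          # same shape, globally mixed
```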
no code implementations • 18 Oct 2023 • Yuanyuan Wang, Yang Zhang, Zhiyong Wu, Zhihan Yang, Tao Wei, Kun Zou, Helen Meng
Existing augmentation methods for speaker verification manipulate the raw signal, which is time-consuming and yields augmented samples that lack diversity.
1 code implementation • 25 Oct 2021 • Zhihan Yang, Hai Nguyen
When the environment is partially observable (PO), a deep reinforcement learning (RL) agent must learn a suitable temporal representation of the entire history in addition to a control strategy.
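The need for a temporal representation can be illustrated with a recurrent history encoder: a hidden state folds in each new observation, and the policy acts on that summary instead of on the raw, ambiguous observation. The simple tanh-RNN update below is an illustrative assumption, not the authors' architecture.

```python
import numpy as np

class HistoryEncoder:
    """Compresses the observation history into a fixed-size hidden state."""
    def __init__(self, obs_dim, hid_dim, seed=0):
        rng = np.random.default_rng(seed)
        self.w_in = rng.standard_normal((obs_dim, hid_dim)) * 0.1
        self.w_rec = rng.standard_normal((hid_dim, hid_dim)) * 0.1
        self.h = np.zeros(hid_dim)

    def step(self, obs):
        # Fold the new observation into the running summary of the history.
        self.h = np.tanh(obs @ self.w_in + self.h @ self.w_rec)
        return self.h

enc = HistoryEncoder(obs_dim=4, hid_dim=8)
for t in range(10):
    state = enc.step(np.ones(4) * t)  # hidden state now reflects all steps
```

A PO agent's policy would map `state` (rather than the latest observation alone) to an action.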
no code implementations • 13 Oct 2020 • Anurag Sarkar, Zhihan Yang, Seth Cooper
Prior research has shown variational autoencoders (VAEs) to be useful for generating and blending game levels by learning latent representations of existing level data.
no code implementations • 27 Feb 2020 • Anurag Sarkar, Zhihan Yang, Seth Cooper
We then use this space to generate level segments that combine properties of levels from both games.
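Latent-space blending of the kind described above can be sketched as: encode a level from each game, linearly interpolate the latent codes, and decode the mixture. The encode/decode functions below are linear stand-ins (the trained VAE is not part of this listing), so shapes and weights are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
enc_w = rng.standard_normal((64, 8)) * 0.1   # hypothetical encoder weights
dec_w = rng.standard_normal((8, 64)) * 0.1   # hypothetical decoder weights

def encode(level):
    return level @ enc_w           # level features -> latent code

def decode(z):
    return z @ dec_w               # latent code -> reconstructed features

level_a = rng.standard_normal(64)  # e.g. a flattened segment from game A
level_b = rng.standard_normal(64)  # e.g. a flattened segment from game B

alpha = 0.5                        # blend weight between the two games
z_blend = (1 - alpha) * encode(level_a) + alpha * encode(level_b)
blended = decode(z_blend)          # segment combining properties of both
```

Sweeping `alpha` from 0 to 1 traces a path in latent space from one game's style to the other's.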