no code implementations • 13 May 2024 • Shuo Yin, Weihao You, Zhilong Ji, Guoqiang Zhong, Jinfeng Bai
To fully leverage the advantages of our augmented data, we propose a two-stage training strategy: in Stage-1, we finetune Llama-2 on pure CoT data to obtain an intermediate model, which is then trained on the code-nested data in Stage-2 to produce the resulting MuMath-Code.
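The two-stage schedule can be illustrated with a minimal sketch; `finetune` here is a hypothetical placeholder that only records which dataset each stage consumed, not the paper's actual training code:

```python
# Sketch of the two-stage training schedule described above.
# `finetune` is a stand-in for a real fine-tuning call (e.g. on Llama-2).

def finetune(model, dataset, stage):
    """Placeholder: a real implementation would run gradient updates."""
    return model + [(stage, dataset)]

base_model = []                       # pretrained Llama-2 (placeholder)
cot_data = "pure CoT data"
code_nested_data = "code-nested data"

# Stage-1: finetune on pure CoT data -> intermediate model
intermediate = finetune(base_model, cot_data, "stage1")
# Stage-2: continue on code-nested data -> MuMath-Code
mumath_code = finetune(intermediate, code_nested_data, "stage2")
print(mumath_code)
```

The point of the sketch is only the ordering: Stage-2 starts from the Stage-1 checkpoint rather than from the base model.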
no code implementations • 7 Mar 2023 • Shangshang Shi, Zhimin Wang, Ruimin Shang, Yanan Li, Jiaxin Li, Guoqiang Zhong, Yongjian Gu
The taxonomic composition and abundance of phytoplankton, which have a direct impact on marine ecosystem dynamics and global environmental change, are listed as essential ocean variables.
1 code implementation • 7 Feb 2023 • Yanan Li, Zhimin Wang, Rongbing Han, Shangshang Shi, Jiaxin Li, Ruimin Shang, Haiyong Zheng, Guoqiang Zhong, Yongjian Gu
Quantum neural networks (QNNs) are one of the promising directions in which near-term noisy intermediate-scale quantum (NISQ) devices could find advantageous applications over classical resources.
no code implementations • 18 Jun 2020 • Jinxuan Sun, Yang Chen, Junyu Dong, Guoqiang Zhong
Generative adversarial networks (GANs) are widely used in image generation tasks, yet the generated images usually lack texture details.
no code implementations • 30 Sep 2019 • Zhenlin Fan, Guoqiang Zhong
However, existing detection methods, mostly based on traditional approaches, typically use only Sea Surface Height (SSH) as the detection variable, resulting in inaccurate performance.
no code implementations • 17 Sep 2019 • Qingyang Li, Guoqiang Zhong, Cui Xie
The method uses stochastic gradient descent and a correlation loss function to obtain good ocean front image outputs.
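The exact loss used in the paper is not given here; a plausible form of a correlation loss is one minus the Pearson correlation between prediction and target, which is minimized when the two are perfectly linearly correlated. A minimal sketch, assuming this Pearson-based formulation:

```python
import numpy as np

def correlation_loss(pred, target, eps=1e-8):
    # 1 - Pearson correlation: zero when prediction and target are
    # perfectly (positively) linearly correlated.
    p = pred - pred.mean()
    t = target - target.mean()
    corr = (p * t).sum() / (np.sqrt((p ** 2).sum() * (t ** 2).sum()) + eps)
    return 1.0 - corr

x = np.array([1.0, 2.0, 3.0, 4.0])
loss = correlation_loss(x, 2 * x + 1)   # perfectly correlated target
print(round(loss, 6))  # -> 0.0
```

In practice such a loss would be combined with SGD updates on the network weights, as the excerpt describes.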
no code implementations • 6 Nov 2018 • Jinxuan Sun, Guoqiang Zhong, Yang Chen, Yongbin Liu, Tao Li, Zhongwen Guo
We propose a new method based on conditional GANs, which equips the latent noise with a mixture of Student's t-distributions and an attention mechanism, in addition to class information.
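Sampling latent noise from a mixture of Student's t-distributions can be sketched as follows; the component degrees of freedom and weights below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_t_mixture(n, dim, dfs=(2.0, 5.0, 10.0), weights=(0.3, 0.3, 0.4)):
    """Draw n latent vectors from a mixture of Student's t-distributions.
    Each sample picks one mixture component, then draws dim-many t-variates;
    the t-distribution gives heavier tails than Gaussian noise."""
    comp = rng.choice(len(dfs), size=n, p=weights)
    return np.stack([rng.standard_t(dfs[c], size=dim) for c in comp])

z = sample_t_mixture(8, 100)
print(z.shape)  # -> (8, 100)
```

Such a batch of latent vectors would then be fed to the generator alongside the class information.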
no code implementations • 31 Oct 2018 • Guoqiang Zhong, Wencong Jiao, Wei Gao
Based on reinforcement learning and taking advantage of the strengths of these networks, we propose a novel automatic process to design a multi-block neural network whose architecture contains multiple types of the blocks mentioned above. The aim is to perform structure learning of deep neural networks and to explore whether different blocks can be composed together to form a well-behaved network.
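The search over multi-block architectures can be sketched with naive random search; an RL controller would instead update a sampling policy from the rewards. The block names and the placeholder reward below are purely illustrative:

```python
import random

random.seed(0)
BLOCKS = ["conv", "residual", "dense", "inception"]

def evaluate(arch):
    """Placeholder reward: a real search would train the candidate
    network and return its validation accuracy."""
    return sum(len(b) for b in arch) / (10.0 * len(arch))

def search(n_candidates=20, depth=4):
    """Sample multi-block architectures and keep the best-scoring one."""
    best, best_r = None, -1.0
    for _ in range(n_candidates):
        arch = [random.choice(BLOCKS) for _ in range(depth)]
        r = evaluate(arch)
        if r > best_r:
            best, best_r = arch, r
    return best, best_r

arch, reward = search()
print(len(arch))  # -> 4
```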
no code implementations • 30 Oct 2018 • Guoqiang Zhong, Guohua Yue, Xiao Ling
In this paper, we propose an RNN model, called Recurrent Attention Unit (RAU), which seamlessly integrates the attention mechanism into the interior of the GRU by adding an attention gate.
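A single step of such a cell might look like the sketch below: a standard GRU update with an extra learned attention gate reweighting the candidate state. The exact RAU equations may differ from the paper; this is only an assumed formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rau_step(x, h, W, U, Wa, Ua):
    """GRU-like step with an added attention gate (assumed equations)."""
    z = sigmoid(W["z"] @ x + U["z"] @ h)            # update gate
    r = sigmoid(W["r"] @ x + U["r"] @ h)            # reset gate
    a = sigmoid(Wa @ x + Ua @ h)                    # attention gate (added)
    h_tilde = np.tanh(W["h"] @ x + U["h"] @ (r * h))
    return (1 - z) * h + z * (a * h_tilde)          # a reweights the candidate

dim_x, dim_h = 4, 3
W = {g: rng.standard_normal((dim_h, dim_x)) for g in ("z", "r", "h")}
U = {g: rng.standard_normal((dim_h, dim_h)) for g in ("z", "r", "h")}
Wa = rng.standard_normal((dim_h, dim_x))
Ua = rng.standard_normal((dim_h, dim_h))

h_new = rau_step(rng.standard_normal(dim_x), np.zeros(dim_h), W, U, Wa, Ua)
print(h_new.shape)  # -> (3,)
```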
no code implementations • 30 Oct 2018 • Guoqiang Zhong, Xin Lin, Kang Chen, Qingyang Li, Kai-Zhu Huang
Attention is an important cognitive process that helps humans concentrate on critical information during perception and learning.
no code implementations • 25 Oct 2018 • Guoqiang Zhong, Tao Li, Wenxue Liu, Yang Chen
This indicates that: 1) using a DNA computing algorithm to learn deep architectures is feasible; 2) local minima should not be a problem for deep networks; 3) we can use early stopping to discard poorly performing models after only a few runs of training.
1 code implementation • 11 Jul 2018 • Guoqiang Zhong, Wei Gao, Yongbin Liu, Youzhao Yang
The deep convolutional generative adversarial networks (DCGANs) were then proposed to improve the quality of generated images.
no code implementations • 19 May 2017 • Qin Zhang, Hui Wang, Junyu Dong, Guoqiang Zhong, Xin Sun
We formulate the SST prediction problem as a time series regression problem.
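This regression framing is usually realized with a sliding window: predict the next value of the series from its previous few values. A minimal sketch on a toy stand-in for an SST series:

```python
import numpy as np

def make_supervised(series, window):
    """Turn a 1-D series into (X, y) pairs: each target is the value
    that follows a window of `window` previous observations."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = np.array(series[window:])
    return X, y

sst = np.arange(10, dtype=float)      # toy stand-in for an SST series
X, y = make_supervised(sst, window=3)
print(X.shape, y.shape)  # -> (7, 3) (7,)
```

Any regressor (linear model, RNN, etc.) can then be fit to these `(X, y)` pairs.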
no code implementations • 24 Mar 2017 • Yanhai Gan, Huifang Chi, Ying Gao, Jun Liu, Guoqiang Zhong, Junyu Dong
In this paper, we propose a joint deep network model that combines adversarial training and perceptual feature regression for texture generation, while only random noise and user-defined perceptual attributes are required as input.
no code implementations • 25 Nov 2016 • Guoqiang Zhong, Li-Na Wang, Junyu Dong
Over the past century or so, many representation learning approaches have been proposed to learn the intrinsic structure of data, including both linear and nonlinear, and both supervised and unsupervised methods.
no code implementations • 22 Jul 2015 • Jianyuan Sun, Guoqiang Zhong, Junyu Dong, Yajuan Cai
Random forests are an ensemble method that makes predictions by combining the results of several independent trees.
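For classification, that combination is typically a majority vote over the individual trees' outputs, which can be sketched in a couple of lines:

```python
from collections import Counter

def forest_predict(tree_predictions):
    """Combine independent tree outputs by majority vote (classification).
    For regression, the analogous rule is averaging the tree outputs."""
    return Counter(tree_predictions).most_common(1)[0][0]

print(forest_predict(["cat", "dog", "cat"]))  # -> cat
```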
no code implementations • 16 Jul 2015 • Guoqiang Zhong, Pan Yang, Sijiang Wang, Junyu Dong
For most existing hashing methods, an image is first encoded as a vector of hand-crafted visual features, followed by a hash projection and quantization step to obtain the compact binary vector.
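The projection-then-quantization pipeline the excerpt describes can be sketched with a random projection and sign quantization (the classic LSH-style baseline; learned hashing methods replace the random matrix with a trained one):

```python
import numpy as np

rng = np.random.default_rng(0)

def hash_codes(features, n_bits):
    """Project feature vectors with a random matrix, then quantize by
    sign into compact binary codes."""
    proj = rng.standard_normal((features.shape[1], n_bits))
    return (features @ proj > 0).astype(np.uint8)

feats = rng.standard_normal((5, 128))   # e.g. 128-D hand-crafted descriptors
codes = hash_codes(feats, n_bits=32)
print(codes.shape)  # -> (5, 32)
```

Similarity search then compares the binary codes by Hamming distance instead of comparing the raw feature vectors.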
no code implementations • 14 May 2015 • Yanhai Gan, Jun Liu, Junyu Dong, Guoqiang Zhong
Particularly, each feature extraction stage includes two layers: a convolutional layer and a feature pooling layer.
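Such a two-layer stage can be sketched in 1-D with a valid-mode convolution followed by non-overlapping max pooling; the kernel and input below are toy values for illustration:

```python
import numpy as np

def conv1d(x, kernel):
    """Valid-mode 1-D convolution (cross-correlation) -- the convolutional layer."""
    k = len(kernel)
    return np.array([x[i:i + k] @ kernel for i in range(len(x) - k + 1)])

def max_pool(x, size=2):
    """Non-overlapping max pooling -- the feature pooling layer."""
    n = len(x) // size
    return x[:n * size].reshape(n, size).max(axis=1)

x = np.array([1.0, 3.0, 2.0, 5.0, 4.0, 1.0])
stage_out = max_pool(conv1d(x, np.array([1.0, -1.0])))
print(stage_out)  # -> [1. 1.]
```

Stacking several such stages gives the multi-stage feature extractor the excerpt describes.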
no code implementations • 11 Jun 2013 • Guoqiang Zhong, Mohamed Cheriet
Furthermore, the generalization of our model, based on similarity between the learned low-dimensional embeddings, can be viewed as a counterpart of recognition in the human brain.