no code implementations • 25 Jun 2023 • Haohan Zhang, Fengrui Hua, Chengjin Xu, Hao Kong, Ruiting Zuo, Jian Guo
The rapid advancement of Large Language Models (LLMs) has spurred discussions about their potential to enhance quantitative trading strategies.
1 code implementation • CVPR 2023 • HsiaoYuan Hsu, Xiangteng He, Yuxin Peng, Hao Kong, Qing Zhang
Content-aware visual-textual presentation layout aims to arrange the spatial placement of predefined elements, including text, logos, and underlays, on a given canvas, which is key to automatic, template-free creative graphic design.
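The excerpt does not specify a data representation, but layouts of this kind are commonly encoded as labeled bounding boxes on a normalized canvas. A minimal Python sketch under that assumption (the `LayoutElement` class and `overlap_area` helper are hypothetical illustrations, not the paper's API):

```python
from dataclasses import dataclass

@dataclass
class LayoutElement:
    """One predefined element to place on the canvas.
    category is e.g. 'text', 'logo', or 'underlay'; coordinates are
    normalized to [0, 1] relative to the canvas size."""
    category: str
    x: float  # left
    y: float  # top
    w: float  # width
    h: float  # height

def overlap_area(a: LayoutElement, b: LayoutElement) -> float:
    """Intersection area of two boxes, a basic building block for
    layout-quality measures such as overlap and underlay coverage."""
    dx = min(a.x + a.w, b.x + b.w) - max(a.x, b.x)
    dy = min(a.y + a.h, b.y + b.h) - max(a.y, b.y)
    return max(dx, 0.0) * max(dy, 0.0)

# Example: an underlay should largely contain the text it backs.
text = LayoutElement("text", 0.1, 0.1, 0.4, 0.1)
underlay = LayoutElement("underlay", 0.08, 0.08, 0.45, 0.15)
coverage = overlap_area(text, underlay) / (text.w * text.h)
print(f"underlay covers {coverage:.0%} of the text box")
```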
1 code implementation • 30 Aug 2022 • Xiangzhong Luo, Di Liu, Hao Kong, Shuo Huai, Hui Chen, Weichen Liu
Benefiting from its search efficiency, differentiable neural architecture search (NAS) has become the dominant approach for automatically designing competitive deep neural networks (DNNs).
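As a concrete illustration of the continuous relaxation behind differentiable NAS (a DARTS-style mixed operation, not necessarily this paper's exact formulation), a minimal PyTorch sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """Softmax-weighted sum of candidate operations: the architecture
    choice becomes a continuous parameter trainable by gradient descent."""
    def __init__(self, channels: int):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.Conv2d(channels, channels, 5, padding=2),
            nn.AvgPool2d(3, stride=1, padding=1),
        ])
        # One learnable architecture weight per candidate operation.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```

After search, one typically keeps only the operation with the largest architecture weight in each `MixedOp` to derive the discrete network.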
no code implementations • 26 May 2022 • Mingjie Li, Hao Kong, Zhouchen Lin
Furthermore, we analyze the constraints on the inversion layer that ensure, to a certain extent, the output stability of the network.
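The excerpt does not state the exact constraints, but a standard way to keep an inversion layer stably invertible, borrowed here from invertible residual networks (i-ResNets) rather than taken from this paper, is to bound the residual branch's Lipschitz constant below 1. A hedged PyTorch sketch:

```python
import torch
import torch.nn as nn

class InvertibleResidual(nn.Module):
    """y = x + c * g(x) is invertible when Lip(c * g) < 1.
    spectral_norm keeps each linear map's operator norm near 1, ELU is
    1-Lipschitz, so scaling by c < 1 makes the branch a contraction."""
    def __init__(self, dim: int, c: float = 0.9):
        super().__init__()
        self.c = c
        self.g = nn.Sequential(
            nn.utils.spectral_norm(nn.Linear(dim, dim)),
            nn.ELU(),
            nn.utils.spectral_norm(nn.Linear(dim, dim)),
        )

    def forward(self, x):
        return x + self.c * self.g(x)

    def inverse(self, y, n_iters: int = 50):
        # Fixed-point iteration x <- y - c*g(x); it converges because
        # the branch is a contraction (Banach fixed-point theorem).
        x = y.clone()
        for _ in range(n_iters):
            x = y - self.c * self.g(x)
        return x
```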
no code implementations • 1 Jan 2021 • Xingyu Xie, Hao Kong, Jianlong Wu, Guangcan Liu, Zhouchen Lin
First of all, to perform matrix inversion, we provide a differentiable yet efficient method, named LD-Minv, which is a learnable deep neural network (DNN) in which each layer is an $L$-th order matrix polynomial.
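A minimal sketch of the idea as described: each layer refines an inverse estimate by applying an $L$-th order matrix polynomial with learnable coefficients. The parameterization and initialization below are assumptions (the coefficients start at the classical higher-order Newton-Schulz iteration), not the paper's exact construction:

```python
import torch
import torch.nn as nn

class LDMinvSketch(nn.Module):
    """Hypothetical learnable iterative matrix inverse: each layer maps
    X -> X @ p(I - A @ X), where p is an order-L polynomial with
    learnable coefficients. All-ones coefficients recover the classical
    higher-order Newton-Schulz update exactly."""
    def __init__(self, num_layers: int = 5, order: int = 4):
        super().__init__()
        self.coeffs = nn.Parameter(torch.ones(num_layers, order))

    def forward(self, A):
        n = A.shape[-1]
        I = torch.eye(n, device=A.device, dtype=A.dtype)
        # Scaled initialization that guarantees Newton-Schulz convergence.
        X = A.transpose(-1, -2) / (
            torch.linalg.matrix_norm(A, ord=1)
            * torch.linalg.matrix_norm(A, ord=float('inf'))
        )
        for c in self.coeffs:
            R = I - A @ X            # residual; shrinks as X -> A^{-1}
            P, term = c[0] * I, I
            for ci in c[1:]:
                term = term @ R      # build I, R, R^2, ... up to order L-1
                P = P + ci * term
            X = X @ P
        return X
```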
no code implementations • 25 Nov 2020 • Di Liu, Hao Kong, Xiangzhong Luo, Weichen Liu, Ravi Subramaniam
To bridge the gap, a plethora of deep learning techniques and optimization methods have been proposed in recent years: lightweight deep learning models, network compression, and efficient neural architecture search.
1 code implementation • EMNLP 2020 • Xin Lv, Xu Han, Lei Hou, Juanzi Li, Zhiyuan Liu, Wei Zhang, Yichi Zhang, Hao Kong, Suhui Wu
On the one hand, sparse KGs contain less information, which makes it difficult for the model to choose correct paths.
1 code implementation • ICML 2020 • Xingyu Xie, Hao Kong, Jianlong Wu, Wayne Zhang, Guangcan Liu, Zhouchen Lin
While successful in many fields, deep neural networks (DNNs) still suffer from some open problems such as bad local minima and unsatisfactory generalization performance.
no code implementations • 26 Oct 2019 • Hao Kong, Canyi Lu, Zhouchen Lin
Recently, the Tensor Nuclear Norm (TNN) regularization based on t-SVD has been widely used in various low tubal-rank tensor recovery tasks.
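For reference, the TNN induced by t-SVD is the average of the matrix nuclear norms of the tensor's Fourier-domain frontal slices. A minimal NumPy sketch of this standard definition (not code from the paper):

```python
import numpy as np

def tensor_nuclear_norm(T):
    """TNN of a 3-way tensor: FFT along the third mode, then average
    the nuclear norms (sums of singular values) of the frontal slices."""
    n3 = T.shape[2]
    T_hat = np.fft.fft(T, axis=2)  # frontal slices in the Fourier domain
    sv_sum = 0.0
    for k in range(n3):
        sv_sum += np.linalg.svd(T_hat[:, :, k], compute_uv=False).sum()
    return sv_sum / n3             # 1/n3 factor in the standard definition

# Example: TNN of a random 10 x 10 x 5 tensor.
T = np.random.randn(10, 10, 5)
print(tensor_nuclear_norm(T))
```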