no code implementations • 17 Feb 2024 • Xiaolu Wang, Zijian Li, Shi Jin, Jun Zhang
Federated learning (FL) is an emerging distributed training paradigm that aims to learn a common global model without exchanging or transferring the data that are stored locally at different clients.
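As a hedged illustration of the federated setting described above, the sketch below implements the classic FedAvg aggregation step (clients run local SGD on private data; the server averages the resulting weights) on a toy least-squares task. The helper names `local_update` and `fedavg_round` are hypothetical, and this is not claimed to be the algorithm proposed in the paper.

```python
import numpy as np

def local_update(weights, data, lr=0.1, steps=5):
    """One client's local SGD on its private data (toy least-squares task)."""
    X, y = data
    w = weights.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)  # gradient of 0.5*||Xw - y||^2 / n
        w -= lr * grad
    return w

def fedavg_round(global_w, client_data):
    """One FedAvg round: clients train locally, then the server takes a
    data-size-weighted average of the local models. Raw data never leaves
    a client; only model weights are exchanged."""
    local_ws = [local_update(global_w, d) for d in client_data]
    sizes = np.array([len(d[1]) for d in client_data], dtype=float)
    return np.average(local_ws, axis=0, weights=sizes)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(4)]
w = np.zeros(3)
for _ in range(10):
    w = fedavg_round(w, clients)
```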
no code implementations • 10 Sep 2023 • Xiaolu Wang, Cheng Jin, Hoi-To Wai, Yuantao Gu
This paper considers a type of incremental aggregated gradient (IAG) method for large-scale distributed optimization.
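A minimal sketch of the generic incremental aggregated gradient idea, under illustrative assumptions (cyclic component selection, a fixed step size, and hypothetical helper names): a table stores the most recently computed gradient of each component function, one entry is refreshed per iteration, and the update steps along the running sum of all stored gradients. This is the textbook IAG template, not the specific variant analyzed in the paper.

```python
import numpy as np

def iag(comp_grad, x0, n, lr=0.01, epochs=50):
    """Incremental aggregated gradient for min_x sum_i f_i(x):
    refresh one component gradient per iteration and step along the
    running aggregate, so each step costs one gradient evaluation."""
    x = x0.copy()
    table = [comp_grad(x, i) for i in range(n)]  # stored gradient per component
    agg = np.sum(table, axis=0)                  # running aggregate
    for _ in range(epochs):
        for i in range(n):                       # cyclic selection (an assumption)
            g_new = comp_grad(x, i)
            agg += g_new - table[i]              # O(d) aggregate update
            table[i] = g_new
            x -= lr * agg
    return x

# Toy problem: f_i(x) = 0.5 * ||A_i x - b_i||^2
rng = np.random.default_rng(1)
A = [rng.normal(size=(5, 3)) for _ in range(8)]
b = [rng.normal(size=5) for _ in range(8)]
grad = lambda x, i: A[i].T @ (A[i] @ x - b[i])
x_star = iag(grad, np.zeros(3), n=8)
```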
no code implementations • 21 Aug 2022 • Xiaolu Wang, Ziqi Ding, Liangyu Chen
This paper takes K-12 math problems as the research object and proposes the LABS model, which combines label-semantic attention and multi-label smoothing with textual features to improve the automatic tagging of knowledge points for math problems.
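The LABS architecture itself is not reproduced here; as a hedged illustration of the multi-label smoothing ingredient alone, the NumPy sketch below softens multi-hot knowledge-point labels before a per-label binary cross-entropy. The function name `smoothed_bce` and the coefficient `eps` are hypothetical choices, not details from the paper.

```python
import numpy as np

def smoothed_bce(logits, targets, eps=0.1):
    """Binary cross-entropy with multi-label smoothing: hard 0/1 labels are
    softened toward eps / (1 - eps) per label before computing the loss."""
    soft = targets * (1 - eps) + (1 - targets) * eps  # smooth each label independently
    p = 1.0 / (1.0 + np.exp(-logits))                 # per-label sigmoid
    return -np.mean(soft * np.log(p) + (1 - soft) * np.log(1 - p))

logits = np.array([[2.0, -1.5, 0.3]])   # scores for 3 knowledge points
targets = np.array([[1.0, 0.0, 1.0]])   # multi-hot ground truth
print(smoothed_bce(logits, targets))
```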
no code implementations • 22 Feb 2022 • Xiaolu Wang, Peng Wang, Anthony Man-Cho So
Signed graphs encode similarity and dissimilarity relationships among different entities with positive and negative edges.
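A small illustration of this encoding: the signed adjacency matrix below uses +1 weights for similarity edges and -1 for dissimilarity edges, and the standard signed Laplacian built from it is positive semidefinite. Whether the paper works with this exact Laplacian is not claimed; this is just the common construction for signed graphs.

```python
import numpy as np

# Signed adjacency: +1 for similarity edges, -1 for dissimilarity edges.
A = np.array([[ 0,  1, -1],
              [ 1,  0,  1],
              [-1,  1,  0]], dtype=float)

# Signed Laplacian L = D_bar - A, where D_bar holds the row sums of |A|;
# it is positive semidefinite and is a standard tool for signed-graph analysis.
D_bar = np.diag(np.abs(A).sum(axis=1))
L_signed = D_bar - A
print(np.linalg.eigvalsh(L_signed))  # all eigenvalues are nonnegative
```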
no code implementations • 12 May 2021 • Xiaolu Wang, Yuen-Man Pun, Anthony Man-Cho So
To address this issue, we propose a novel graph learning model based on the distributionally robust optimization methodology, which aims to identify a graph that not only provides a smooth representation of the observed signals but is also robust against uncertainties in them.
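The distributionally robust formulation is not reproduced here; the sketch below only illustrates the "smooth representation" criterion via the Laplacian quadratic form tr(X^T L X), the usual smoothness measure in graph learning. The graph, signals, and function name are placeholder assumptions.

```python
import numpy as np

def smoothness(L, X):
    """Laplacian quadratic form tr(X^T L X): small values mean the signal
    columns of X vary little across connected nodes."""
    return np.trace(X.T @ L @ X)

# Tiny example: path graph on 3 nodes.
W = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
L = np.diag(W.sum(axis=1)) - W
X_smooth = np.array([[1.0], [1.1], [1.2]])   # slowly varying signal
X_rough  = np.array([[1.0], [-1.0], [1.0]])  # oscillating signal
print(smoothness(L, X_smooth), smoothness(L, X_rough))  # 0.02 vs 8.0
```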
no code implementations • 19 Sep 2014 • Samuel Rönnqvist, Xiaolu Wang, Peter Sarlin
Probabilistic topic modeling is a popular and powerful family of tools for uncovering thematic structure in large sets of unstructured text documents.
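As a concrete illustration of probabilistic topic modeling, the sketch below fits latent Dirichlet allocation with scikit-learn on a toy corpus and prints the top terms per topic. The corpus and the two-topic choice are placeholder assumptions; the paper's own modeling pipeline is not claimed here.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "bank credit loan interest rate",
    "stock market shares equity trading",
    "loan default credit risk bank",
    "equity market trading volume stock",
]

# Bag-of-words counts, then a 2-topic LDA fit.
vectorizer = CountVectorizer().fit(docs)
X = vectorizer.transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-3:][::-1]]
    print(f"topic {k}: {top}")
```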