no code implementations • 6 Apr 2024 • Zhiyuan Peng, Xuyang Wu, Qifan Wang, Sravanthi Rajanala, Yi Fang
Parameter-Efficient Fine-Tuning (PEFT) methods have been extensively utilized in Large Language Models (LLMs) to improve performance on downstream tasks without the cost of fine-tuning the whole model.
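A common PEFT technique is a LoRA-style low-rank adapter: the frozen weight matrix stays untouched and only two small factors are trained. The sketch below is a minimal pure-Python illustration with hypothetical toy sizes, not the paper's method.

```python
# LoRA-style PEFT sketch: train small factors A (r x d_in) and
# B (d_out x r) instead of the full weight W, then merge as
# W_eff = W + (alpha / r) * B @ A.  Toy sizes for illustration only.

def matmul(X, Y):
    """Naive matrix product for plain nested lists."""
    return [[sum(x * y for x, y in zip(row, col))
             for col in zip(*Y)] for row in X]

def lora_effective_weight(W, A, B, alpha, r):
    """Merge the scaled low-rank update into the frozen weight."""
    scale = alpha / r
    delta = matmul(B, A)
    return [[w + scale * d for w, d in zip(wr, dr)]
            for wr, dr in zip(W, delta)]

# Frozen 2x2 weight plus a rank-1 adapter: only 4 trainable numbers;
# the saving grows quadratically with the layer width.
W = [[1.0, 0.0], [0.0, 1.0]]
A = [[0.5, 0.5]]          # r=1, d_in=2
B = [[1.0], [0.0]]        # d_out=2, r=1
W_eff = lora_effective_weight(W, A, B, alpha=2.0, r=1)
```

At inference time the merged `W_eff` is used in place of `W`, so the adapter adds no extra latency.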
no code implementations • 4 Apr 2024 • YuAn Wang, Xuyang Wu, Hsin-Tai Wu, Zhiqiang Tao, Yi Fang
The integration of Large Language Models (LLMs) into information retrieval has prompted a critical reevaluation of fairness in text-ranking models.
no code implementations • 26 Mar 2024 • Huei-Chung Hu, Xuyang Wu, YuAn Wang, Yi Fang, Hsin-Tai Wu
This paper presents (1) code and algorithms for inferring the coordinate system and the Euler angle application order from provided source code, and for extracting precise rotation matrices and Euler angles, (2) code and algorithms for converting poses from one rotation system to another, (3) novel formulae for 2D augmentations of the rotation matrices, and (4) derivations and code for correct drawing routines for rotation matrices and poses.
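The core difficulty the paper addresses is that the same Euler angles give different rotation matrices under different application orders. The sketch below builds a rotation matrix under one assumed convention (Z-Y-X, yaw-pitch-roll); the convention itself is exactly what must be inferred per source, so treat it as an example, not the paper's choice.

```python
import math

# Hedged sketch: compose a rotation matrix from Euler angles under an
# ASSUMED Z-Y-X (yaw-pitch-roll) application order.  A different order
# or coordinate system yields a different matrix for the same angles.

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]]

def matmul(X, Y):
    return [[sum(x * y for x, y in zip(row, col))
             for col in zip(*Y)] for row in X]

def euler_zyx_to_matrix(yaw, pitch, roll):
    """R = Rz(yaw) @ Ry(pitch) @ Rx(roll) under the assumed order."""
    return matmul(rot_z(yaw), matmul(rot_y(pitch), rot_x(roll)))

R = euler_zyx_to_matrix(math.pi / 2, 0.0, 0.0)  # pure 90-degree yaw
```

Converting a pose between two rotation systems then amounts to decomposing the matrix under the source convention and recomposing it under the target one.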
no code implementations • 11 Dec 2023 • Xuyang Wu, Changxin Liu, Sindri Magnusson, Mikael Johansson
In contrast to alternatives, our algorithms can converge to the fixed point set of their synchronous counterparts using step-sizes that are independent of the delays.
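The claim can be illustrated on a toy fixed-point problem: iterate a contraction using stale, delayed reads, with a step size chosen without knowing the delay bound. This is only an illustrative sketch, not the paper's algorithm.

```python
import random

# Toy sketch of asynchronous fixed-point iteration under bounded delay:
# T(x) = 0.5*x + 1 is a contraction with fixed point x* = 2.  Each
# update reads an iterate up to `max_delay` steps old, yet the
# constant step size below is chosen independently of that delay.

random.seed(0)
T = lambda x: 0.5 * x + 1.0
history = [10.0]                 # x_0
step = 1.0                       # delay-independent step size
max_delay = 3

for k in range(200):
    # read a stale iterate, between 0 and max_delay steps old
    delay = random.randint(0, min(max_delay, len(history) - 1))
    stale = history[-1 - delay]
    x_new = history[-1] + step * (T(stale) - history[-1])
    history.append(x_new)
```

Because the delays are bounded, the error contracts by at least a factor 0.5 every `max_delay + 1` steps, so the iterates still reach the fixed point of the synchronous iteration.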
1 code implementation • 17 Jul 2023 • Zhiyuan Peng, Xuyang Wu, Qifan Wang, Yi Fang
We design a filter to select high-quality example document-query pairs for the prompt, further improving the quality of the weakly tagged queries.
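As a hedged illustration of such a filter (the paper's actual scoring criterion is not specified here), one can score candidate document-query pairs with a simple term-overlap heuristic and keep only the top-k as in-context examples:

```python
# Hypothetical prompt-example filter: rank (document, query) pairs by
# the fraction of query terms appearing in the document, keep top-k.

def overlap_score(document, query):
    """Fraction of query terms that appear in the document."""
    doc_terms = set(document.lower().split())
    q_terms = query.lower().split()
    return sum(t in doc_terms for t in q_terms) / max(len(q_terms), 1)

def filter_pairs(pairs, k=2):
    """Keep the k highest-scoring (document, query) pairs."""
    ranked = sorted(pairs, key=lambda p: overlap_score(*p), reverse=True)
    return ranked[:k]

pairs = [
    ("neural ranking models for search", "what are neural ranking models"),
    ("a recipe for banana bread", "how do transformers work"),
    ("dense retrieval with dual encoders", "dense retrieval dual encoders"),
]
best = filter_pairs(pairs, k=2)  # drops the mismatched middle pair
```

A stronger filter could replace `overlap_score` with a trained relevance model; the top-k structure stays the same.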
no code implementations • 17 Feb 2022 • Xuyang Wu, Sindri Magnusson, Hamid Reza Feyzmahdavian, Mikael Johansson
In this paper, we show that it is possible to use learning rates that depend on the actual time-varying delays in the system.
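A toy sketch of the idea: shrink the step for gradients that arrive late, using the observed delay rather than a worst-case bound. The rule `eta_k = eta0 / (1 + delay_k)` below is illustrative only; the paper's exact schedule may differ.

```python
# Hypothetical delay-adaptive step rule: eta_k = eta0 / (1 + delay_k),
# i.e. older (staler) gradients get smaller steps.

def delay_adaptive_step(x, stale_grad, delay, eta0=0.5):
    eta = eta0 / (1.0 + delay)   # shrink the step for older gradients
    return x - eta * stale_grad

# Minimize f(x) = x^2 with gradients that arrive 0-2 steps late.
grad = lambda x: 2.0 * x
xs = [4.0]
for k in range(100):
    delay = k % 3                      # deterministic delay pattern
    stale_x = xs[max(0, len(xs) - 1 - delay)]
    xs.append(delay_adaptive_step(xs[-1], grad(stale_x), delay))
```

Despite the stale gradients, the delay-scaled steps keep the iteration stable and drive it toward the minimizer at 0.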
no code implementations • 10 Feb 2022 • Xuyang Wu, Alessandro Magnani, Suthee Chaidaroon, Ajit Puthenputhussery, Ciya Liao, Yi Fang
The proposed model utilizes domain-specific BERT with fine-tuning to bridge the vocabulary gap and employs multi-task learning to optimize multiple objectives simultaneously, which yields a general end-to-end learning framework for product search.
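The multi-task part reduces to optimizing a single weighted combination of per-task losses over shared parameters. The sketch below uses toy scalar losses and hypothetical task names; the actual model computes these losses from a fine-tuned domain-specific BERT.

```python
# Multi-task objective sketch: combine per-task losses (e.g. relevance
# ranking and click prediction -- hypothetical task names) into one
# weighted sum so shared parameters are optimized jointly.

def combined_loss(task_losses, task_weights):
    """Weighted sum of per-task losses for joint optimization."""
    assert set(task_losses) == set(task_weights)
    return sum(task_weights[t] * task_losses[t] for t in task_losses)

losses = {"relevance": 0.8, "click": 0.4}
weights = {"relevance": 1.0, "click": 0.5}
loss = combined_loss(losses, weights)
```

Gradients of `loss` with respect to the shared encoder then carry signal from every task at once, which is what lets one end-to-end model serve multiple product-search objectives.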
no code implementations • 25 Feb 2021 • Xuyang Wu, He Wang, Jie Lu
In this paper, we develop a novel distributed algorithm for convex optimization with both nonlinear inequality and linear equality constraints, where the objective function can be a general nonsmooth convex function and all the constraints can be fully coupled.
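To give a feel for the primal-dual machinery behind such methods, here is a centralized toy sketch only (the paper's algorithm is distributed and handles nonsmooth objectives): gradient descent on the primal and ascent on the multiplier for a smooth problem with one coupling constraint.

```python
# Centralized primal-dual gradient sketch for
#   min  x1^2 + x2^2   subject to  x1 + x2 = 1.
# KKT conditions give the optimum x = (0.5, 0.5), lam = -1.

def primal_dual(steps=2000, eta=0.1):
    x = [0.0, 0.0]   # primal iterate
    lam = 0.0        # multiplier for the equality constraint
    for _ in range(steps):
        # gradient of the Lagrangian L(x, lam) = x1^2 + x2^2
        #                                        + lam*(x1 + x2 - 1)
        g = [2.0 * x[0] + lam, 2.0 * x[1] + lam]
        x = [x[0] - eta * g[0], x[1] - eta * g[1]]  # primal descent
        lam += eta * (x[0] + x[1] - 1.0)            # dual ascent
    return x, lam

x, lam = primal_dual()
```

A distributed variant would split the sum across agents and let each update its own block of `x`, exchanging only the constraint residual; that extension, plus nonsmooth terms, is where the paper's contribution lies.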
Distributed Optimization • Optimization and Control