no code implementations • 5 May 2024 • Fares Abawi, Di Fu, Stefan Wermter
We hypothesize that this outcome results from the group saliency representations instilling universal attention in the model, while the supervisory signal and fixation history guide it to learn personalized attentional behaviors. This gives the unified model an advantage over individual models, owing to its implicit representation of universal attention.
1 code implementation • 1 Apr 2024 • Ruohong Zhang, Liangke Gui, Zhiqing Sun, Yihao Feng, Keyang Xu, Yuanhan Zhang, Di Fu, Chunyuan Li, Alexander Hauptmann, Yonatan Bisk, Yiming Yang
Preference modeling techniques, such as direct preference optimization (DPO), have proven effective in enhancing the generalization abilities of large language models (LLMs).
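For readers unfamiliar with DPO, the core of the method is a simple loss over preference pairs. The sketch below is a minimal, hypothetical illustration of the standard DPO objective for a single pair (not the code from this paper); the function name, inputs, and `beta` default are assumptions for the example.

```python
import math

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair.

    Inputs are summed log-probabilities of the chosen and rejected responses
    under the policy being trained and under a frozen reference policy.
    """
    # Implicit reward margins: how much the policy favors each response
    # relative to the reference model.
    chosen_margin = logp_chosen - ref_logp_chosen
    rejected_margin = logp_rejected - ref_logp_rejected
    # Bradley-Terry-style objective: push the chosen margin above the rejected one.
    logits = beta * (chosen_margin - rejected_margin)
    return -math.log(1.0 / (1.0 + math.exp(-logits)))  # -log(sigmoid(logits))

# When the policy already favors the chosen response, the loss is below log(2);
# when both margins are equal, the loss is exactly log(2).
loss = dpo_loss(-12.0, -20.0, -14.0, -18.0, beta=0.1)
```

Minimizing this loss increases the likelihood gap between preferred and dispreferred responses relative to the reference model, without training an explicit reward model.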
no code implementations • 2 Nov 2021 • Di Fu, Fares Abawi, Hugo Carneiro, Matthias Kerzel, Ziwei Chen, Erik Strahl, Xun Liu, Stefan Wermter
Our saliency prediction model was trained to detect social cues, predict audio-visual saliency, and attend selectively for the robot study.
no code implementations • 8 Oct 2021 • Siqi Cao, Di Fu, Xu Yang, Stefan Wermter, Xun Liu, Haiyan Wu
Furthermore, we discuss challenges in the responsible evaluation of cognitive methods and computational techniques, and outline directions for future work toward affective assistants capable of empathy.
no code implementations • 5 Sep 2019 • Di Fu, Cornelius Weber, Guochun Yang, Matthias Kerzel, Weizhi Nan, Pablo Barros, Haiyan Wu, Xun Liu, Stefan Wermter
Selective attention plays an essential role in information acquisition and utilization from the environment.
no code implementations • 15 Oct 2018 • Di Fu, Pablo Barros, German I. Parisi, Haiyan Wu, Sven Magg, Xun Liu, Stefan Wermter
The efficient integration of multisensory observations is a key property of the brain that enables robust interaction with the environment.
1 code implementation • NeurIPS 2018 • Kuan Han, Haiguang Wen, Yizhen Zhang, Di Fu, Eugenio Culurciello, Zhongming Liu
When unfolded over time, the recurrent processing gives rise to an increasingly deep hierarchy of non-linear transformations, allowing a shallow network to dynamically extend itself into an arbitrarily deep network.
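The unfolding idea can be sketched in a few lines: applying one shared non-linear layer for `steps` iterations behaves like a `steps`-deep feed-forward network with tied weights, so depth grows without adding parameters. This is a hypothetical toy illustration, not the paper's actual architecture; the weight matrix, skip connection, and step counts are assumptions for the example.

```python
import numpy as np

def unfold_recurrent(x, W, steps):
    """Apply one shared tanh layer repeatedly to the input.

    Each extra step adds one more level of non-linear transformation,
    deepening the effective hierarchy with the same parameters W.
    """
    h = x
    for _ in range(steps):
        h = np.tanh(W @ h + x)  # recurrent update with a skip from the input
    return h

rng = np.random.default_rng(0)
W = rng.normal(scale=0.5, size=(8, 8))
x = rng.normal(size=8)
shallow = unfold_recurrent(x, W, steps=1)  # one transformation
deep = unfold_recurrent(x, W, steps=5)     # effectively a 5-layer network
```

Running more steps at inference time is what lets a shallow recurrent model trade computation for effective depth.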
no code implementations • 23 Jan 2018 • Pablo Barros, German I. Parisi, Di Fu, Xun Liu, Stefan Wermter
The human brain is able to learn, generalize, and predict crossmodal stimuli.