2 code implementations • 20 Mar 2024 • Charles Goddard, Shamane Siriwardhana, Malikeh Ehghaghi, Luke Meyers, Vlad Karpukhin, Brian Benedict, Mark McQuade, Jacob Solawetz
The rapid expansion of the open-source language model landscape presents an opportunity to merge the competencies of these model checkpoints by combining their parameters.
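One common baseline for combining checkpoint parameters is a linear weighted average of corresponding weights. A minimal sketch, assuming each checkpoint is given as a plain name-to-values dict (the function name and representation are illustrative, not the paper's toolkit API):

```python
# Hypothetical sketch of linear weight merging between two checkpoints,
# representing each checkpoint as a dict: parameter name -> list of floats.
def merge_checkpoints(ckpt_a, ckpt_b, alpha=0.5):
    """Element-wise weighted average: alpha * A + (1 - alpha) * B."""
    assert ckpt_a.keys() == ckpt_b.keys(), "checkpoints must share parameter names"
    return {
        name: [alpha * a + (1 - alpha) * b
               for a, b in zip(ckpt_a[name], ckpt_b[name])]
        for name in ckpt_a
    }

merged = merge_checkpoints({"w": [1.0, 3.0]}, {"w": [3.0, 1.0]}, alpha=0.5)
# merged["w"] == [2.0, 2.0]
```

More sophisticated merge methods replace the uniform average with per-parameter or per-layer weighting, but the core operation remains combining aligned tensors from independently trained checkpoints.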
1 code implementation • 6 Oct 2022 • Shamane Siriwardhana, Rivindu Weerasekera, Elliott Wen, Tharindu Kaluarachchi, Rajib Rana, Suranga Nanayakkara
We propose RAG-end2end, an extension to RAG, that can adapt to a domain-specific knowledge base by updating all components of the external knowledge base during training.
1 code implementation • 22 Jun 2021 • Shamane Siriwardhana, Rivindu Weerasekera, Elliott Wen, Suranga Nanayakkara
In this paper, we illustrate how to fine-tune the entire Retrieval-Augmented Generation (RAG) architecture in an end-to-end manner.
Ranked #2 on Question Answering on SQuAD
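The key idea that lets gradients flow through the whole RAG pipeline is that the answer likelihood is marginalized over retrieved documents: the retriever contributes document scores and the generator contributes per-document answer probabilities. A toy numerical sketch (not the authors' code; function and argument names are assumptions):

```python
import math

# Toy sketch of the RAG training objective:
#   log p(y | x) = log sum_z softmax(score_z) * p(y | x, z)
# Gradients reach the retriever through the document scores and the
# generator through the per-document answer likelihoods.
def rag_marginal_log_likelihood(doc_scores, per_doc_answer_probs):
    # Numerically stable softmax over retrieval scores.
    m = max(doc_scores)
    exp_scores = [math.exp(s - m) for s in doc_scores]
    total = sum(exp_scores)
    retrieval_probs = [e / total for e in exp_scores]
    # Marginalize the answer probability over the retrieved documents.
    marginal = sum(p_z * p_y
                   for p_z, p_y in zip(retrieval_probs, per_doc_answer_probs))
    return math.log(marginal)

# Two equally scored documents, one much more useful for the answer:
rag_marginal_log_likelihood([0.0, 0.0], [0.2, 0.8])  # -> log(0.5)
```

Because the objective is a single differentiable expression, training it end-to-end tunes the retriever and generator jointly rather than freezing the retriever.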
1 code implementation • Interspeech 2020 • Shamane Siriwardhana, Andrew Reis, Rivindu Weerasekera, Suranga Nanayakkara
Multimodal emotion recognition from speech is an important area in affective computing.
1 code implementation • 15 Aug 2020 • Shamane Siriwardhana, Andrew Reis, Rivindu Weerasekera, Suranga Nanayakkara
Multimodal emotion recognition from speech is an important area in affective computing.
2 code implementations • 18 Aug 2019 • Shamane Siriwardhana, Rivindu Weerasekera, Denys J. C. Matthies, Suranga Nanayakkara
In this paper, we show how novel transfer reinforcement learning techniques can be applied to the complex task of target driven navigation using the photorealistic AI2THOR simulator.
no code implementations • Master's Thesis - UOA 2019 • Shamane Siriwardhana
To cope with the challenges in transfer learning and performance, we present a new approach using Universal Successor Features (USF) in this thesis.
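Universal Successor Features support transfer because the action value factorizes into a goal-independent feature term and a goal-specific weight vector, Q(s, a; g) = psi(s, a) · w(g); adapting to a new goal then only requires new weights w(g), not relearning psi. A minimal illustration (names and vectors are assumptions, not the thesis code):

```python
# Illustrative sketch of the Universal Successor Features factorization:
#   Q(s, a; g) = psi(s, a) . w(g)
# psi(s, a) summarizes expected future feature occupancy and is shared
# across goals; w(g) encodes the reward structure of a specific goal.
def q_value(successor_features, goal_weights):
    """Dot product of successor features psi(s, a) with goal weights w(g)."""
    return sum(f * w for f, w in zip(successor_features, goal_weights))

psi = [0.5, 1.0, 0.0]       # psi(s, a) for one state-action pair
w_goal_a = [2.0, 0.0, 1.0]  # reward weights for goal A
w_goal_b = [0.0, 3.0, 0.0]  # reward weights for goal B

q_value(psi, w_goal_a)  # -> 1.0
q_value(psi, w_goal_b)  # -> 3.0, same psi reused for a different goal
```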
no code implementations • 27 Nov 2018 • Shamane Siriwardhana, Rivindu Weerasekera, Suranga Nanayakkara
Being able to navigate to a target with minimal supervision and prior knowledge is critical to creating human-like assistive agents.