Search Results for author: Mayanka Chandra Shekar

Found 3 papers, 0 papers with code

Ultra-Long Sequence Distributed Transformer

no code implementations · 4 Nov 2023 · Xiao Wang, Isaac Lyngaas, Aristeidis Tsaris, Peng Chen, Sajal Dash, Mayanka Chandra Shekar, Tao Luo, Hong-Jun Yoon, Mohamed Wahib, John Gounley

This paper presents a novel and efficient distributed training method, the Long Short-Sequence Transformer (LSS Transformer), for training transformers with long sequences.
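The core idea of sequence-parallel training is to shard one long input along the sequence dimension so each worker holds only a fraction of the tokens. The sketch below illustrates that partitioning step only; the function name and shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch (not the LSS Transformer code): shard a long
# sequence across workers along the sequence axis, so each worker
# stores and attends over seq_len / num_workers tokens locally.
def split_sequence(tokens: np.ndarray, num_workers: int):
    """Partition a (seq_len, hidden) array into equal per-worker shards."""
    assert tokens.shape[0] % num_workers == 0, "sequence must divide evenly"
    return np.split(tokens, num_workers, axis=0)

seq = np.zeros((50000, 8))           # a 50k-token sequence, hidden size 8
shards = split_sequence(seq, 8)      # e.g. 8 GPUs
print(len(shards), shards[0].shape)  # 8 shards of shape (6250, 8)
```

The memory saving is the point: each worker's activation footprint scales with its shard length rather than the full sequence, at the cost of communication when attention spans shard boundaries.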

Integration of Domain Knowledge using Medical Knowledge Graph Deep Learning for Cancer Phenotyping

no code implementations · 5 Jan 2021 · Mohammed Alawad, Shang Gao, Mayanka Chandra Shekar, S. M. Shamimul Hasan, J. Blair Christian, Xiao-Cheng Wu, Eric B. Durbin, Jennifer Doherty, Antoinette Stroup, Linda Coyle, Lynne Penberthy, Georgia Tourassi

Word embeddings that effectively capture the meaning and context of the word that they represent can significantly improve the performance of downstream DL models for various NLP tasks.

Word Embeddings
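To make the word-embedding claim concrete: vectors that capture word meaning let downstream models compare terms by geometric proximity, typically cosine similarity. The toy 4-dimensional vectors below are invented for illustration; real embeddings have hundreds of dimensions and are learned from data.

```python
import numpy as np

# Hypothetical sketch: semantically related words (here, cancer-domain
# terms) get nearby vectors, unrelated words get distant ones.
def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity: 1.0 = same direction, ~0 = unrelated."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

emb = {  # toy embeddings, not from any trained model
    "tumor":  np.array([0.9, 0.1, 0.0, 0.2]),
    "cancer": np.array([0.8, 0.2, 0.1, 0.1]),
    "guitar": np.array([0.0, 0.9, 0.8, 0.0]),
}
related   = cosine(emb["tumor"], emb["cancer"])  # high (~0.98)
unrelated = cosine(emb["tumor"], emb["guitar"])  # low  (~0.08)
```

A downstream classifier consuming such vectors inherits this structure, which is why better embeddings (e.g. enriched with medical knowledge-graph context, as in the paper) can improve phenotyping accuracy.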
