no code implementations • 12 Apr 2024 • Joshua Feinglass, Jayaraman J. Thiagarajan, Rushil Anirudh, T. S. Jayram, Yezhou Yang
Current approaches in Generalized Zero-Shot Learning (GZSL) are built upon base models that represent the entire image with a single class attribute vector.
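The base-model setup this entry refers to can be sketched as scoring an image feature against one attribute vector per class via a learned compatibility matrix. This is a minimal, hedged illustration with toy random data; the dimensions, the bilinear scoring form, and all values are assumptions, not the paper's actual model.

```python
import numpy as np

# Toy sketch of a single-attribute-vector GZSL base model.
# Each class c has one attribute vector a_c; an image is scored by
# the bilinear compatibility s_c = f(x)^T W a_c (a common GZSL baseline form).
rng = np.random.default_rng(0)

n_classes, attr_dim, feat_dim = 5, 8, 16
class_attrs = rng.normal(size=(n_classes, attr_dim))  # one attribute vector per class
W = rng.normal(size=(feat_dim, attr_dim))             # compatibility matrix (random stand-in for a learned one)
image_feat = rng.normal(size=feat_dim)                # single global image feature

scores = image_feat @ W @ class_attrs.T               # one score per class
pred = int(np.argmax(scores))                         # predicted class index
```

Note that the whole image is reduced to one feature vector before scoring, which is exactly the limitation the entry highlights.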
1 code implementation • 10 Jul 2023 • Rakshith Subramanyam, T. S. Jayram, Rushil Anirudh, Jayaraman J. Thiagarajan
In this paper, we explore the potential of Vision-Language Models (VLMs), specifically CLIP, in predicting visual object relationships, which involves translating visual features from images into language-based relations.
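CLIP-style relation prediction of the kind described here can be sketched as embedding candidate relation phrases and the image into a shared space, then ranking relations by cosine similarity. The relation list, dimensions, and random stand-in embeddings below are illustrative assumptions; a real pipeline would use CLIP's actual image and text encoders.

```python
import numpy as np

# Hedged sketch of CLIP-style relation scoring: random vectors stand in
# for the outputs of CLIP's image and text encoders.
rng = np.random.default_rng(1)

relations = ["riding", "holding", "next to"]  # hypothetical candidate relations
dim = 32
text_embs = rng.normal(size=(len(relations), dim))  # stand-in for text-encoder outputs
image_emb = rng.normal(size=dim)                    # stand-in for image-encoder output

def cosine(a, b):
    # Cosine similarity, the score CLIP uses between image and text embeddings
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sims = np.array([cosine(image_emb, t) for t in text_embs])
best = relations[int(np.argmax(sims))]  # highest-scoring relation phrase
```

In practice the relation phrases would be templated into full prompts (e.g. "a photo of a person riding a horse") before encoding.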
no code implementations • 27 Nov 2019 • T. S. Jayram, Vincent Marois, Tomasz Kornuta, Vincent Albouy, Emre Sevgen, Ahmet S. Ozcan
Transfer learning has become the de facto standard in computer vision and natural language processing, especially where labeled data is scarce.
no code implementations • 15 Nov 2018 • Vincent Marois, T. S. Jayram, Vincent Albouy, Tomasz Kornuta, Younes Bouhadjar, Ahmet S. Ozcan
We introduce a variant of the MAC model (Hudson and Manning, ICLR 2018) with a simplified set of equations that achieves comparable accuracy, while training faster.
no code implementations • 28 Sep 2018 • T. S. Jayram, Tomasz Kornuta, Ryan L. McAvoy, Ahmet S. Ozcan
We propose a new architecture called Memory-Augmented Encoder-Solver (MAES) that enables transfer learning to solve complex working memory tasks adapted from cognitive psychology.
no code implementations • 28 Sep 2018 • T. S. Jayram, Younes Bouhadjar, Ryan L. McAvoy, Tomasz Kornuta, Alexis Asseman, Kamil Rocki, Ahmet S. Ozcan
Typical neural networks with external memory do not effectively separate capacity for episodic and working memory as is required for reasoning in humans.