no code implementations • 15 Feb 2024 • Zehao Xiao, Jiayi Shen, Mohammad Mahdi Derakhshani, Shengcai Liao, Cees G. M. Snoek
To effectively encode distribution information and the relationships among distributions, we further introduce a transformer inference network with a pseudo-shift training mechanism.
no code implementations • 28 Nov 2023 • Mohammad Mahdi Derakhshani, Menglin Xia, Harkirat Behl, Cees G. M. Snoek, Victor Rühle
We propose CompFuser, an image generation pipeline that enhances spatial comprehension and attribute assignment in text-to-image generative models.
no code implementations • 30 Sep 2023 • Mohammad Mahdi Derakhshani, Ivona Najdenkoska, Cees G. M. Snoek, Marcel Worring, Yuki M. Asano
We present Self-Context Adaptation (SeCAt), a self-supervised approach that unlocks few-shot abilities for open-ended classification with small visual language models.
1 code implementation • 10 Mar 2023 • Tom van Sonsbeek, Mohammad Mahdi Derakhshani, Ivona Najdenkoska, Cees G. M. Snoek, Marcel Worring
Most existing methods approach it as a multi-class classification problem, which restricts the outcome to a predefined closed-set of curated answers.
Ranked #1 on Medical Visual Question Answering on OVQA
1 code implementation • ICCV 2023 • Mohammad Mahdi Derakhshani, Enrique Sanchez, Adrian Bulat, Victor Guilherme Turrisi da Costa, Cees G. M. Snoek, Georgios Tzimiropoulos, Brais Martinez
Our approach regularizes the prompt space, reducing overfitting to seen prompts and improving generalization to unseen prompts.
Ranked #1 on Few-Shot Learning on food101
1 code implementation • 12 Apr 2022 • Mohammad Mahdi Derakhshani, Ivona Najdenkoska, Tom van Sonsbeek, XianTong Zhen, Dwarikanath Mahapatra, Marcel Worring, Cees G. M. Snoek
Task- and class-incremental learning of diseases addresses classifying new samples without retraining models from scratch, while cross-domain incremental learning addresses datasets originating from different institutions while retaining previously acquired knowledge.
no code implementations • 26 Dec 2021 • Mohammad Mahdi Derakhshani, XianTong Zhen, Ling Shao, Cees G. M. Snoek
Kernel continual learning (Derakhshani et al., 2021) has recently emerged as a strong continual learner due to its non-parametric ability to tackle task interference and catastrophic forgetting.
1 code implementation • 12 Jul 2021 • Mohammad Mahdi Derakhshani, XianTong Zhen, Ling Shao, Cees G. M. Snoek
We further introduce variational random features to learn a data-driven kernel for each task.
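The paper learns data-driven (variational) random features per task; as background, the classic fixed-kernel variant approximates an RBF kernel with random Fourier features. A minimal sketch of that standard construction (not the paper's learned version; all names here are illustrative):

```python
import numpy as np

def random_fourier_features(X, num_features=128, gamma=1.0, rng=None):
    """Approximate the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)
    via random Fourier features (Rahimi & Recht). The paper's variational
    random features would instead learn the frequency distribution per task.
    """
    rng = np.random.default_rng(rng)
    d = X.shape[1]
    # Frequencies sampled from the Fourier transform of the RBF kernel.
    W = rng.normal(scale=np.sqrt(2.0 * gamma), size=(d, num_features))
    b = rng.uniform(0.0, 2.0 * np.pi, size=num_features)
    return np.sqrt(2.0 / num_features) * np.cos(X @ W + b)

# Inner products of the feature maps approximate the exact kernel matrix.
X = np.random.default_rng(0).normal(size=(5, 3))
Z = random_fourier_features(X, num_features=4096, gamma=0.5, rng=0)
K_approx = Z @ Z.T
```

With enough features the Monte Carlo error shrinks as O(1/sqrt(num_features)), which is why a learned, task-specific frequency distribution can get away with far fewer features.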
no code implementations • CVPR 2019 • Mohammad Mahdi Derakhshani, Saeed Masoudnia, Amir Hossein Shaker, Omid Mersa, Mohammad Amin Sadeghi, Mohammad Rastegari, Babak N. Araabi
We present a simple and effective learning technique that significantly improves mAP of YOLO object detectors without compromising their speed.
1 code implementation • 28 May 2018 • Danial Maleki, Soheila Nadalian, Mohammad Mahdi Derakhshani, Mohammad Amin Sadeghi
For artifact removal, we input a JPEG image and try to remove its compression artifacts.
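A common way to build training pairs for this setup is to JPEG-compress a clean image and treat the compressed result as the network input. A hedged sketch of that data preparation (illustrative only, not the paper's pipeline; uses Pillow):

```python
import io

import numpy as np
from PIL import Image

def make_artifact_pair(clean, quality=10):
    """Create a (degraded, clean) training pair by JPEG-compressing `clean`.

    Lower `quality` produces stronger blocking artifacts, giving the
    artifact-removal model harder examples. Hypothetical helper name.
    """
    buf = io.BytesIO()
    clean.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    degraded = Image.open(buf).convert("RGB")
    return degraded, clean

# Example: a synthetic horizontal gradient compressed at very low quality.
arr = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
clean = Image.fromarray(np.stack([arr] * 3, axis=-1))
degraded, target = make_artifact_pair(clean, quality=5)
```

The model is then trained to map `degraded` back to `target`, so no manual annotation is needed.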