no code implementations • 25 Mar 2024 • Zizhao Hu, Shaochong Jia, Mohammad Rostami
Diffusion models have been widely used for conditional cross-modal generation tasks such as text-to-image and text-to-video generation.
1 code implementation • 24 Mar 2024 • Zeyu Shangguan, Daniel Seita, Mohammad Rostami
Cross-modal feature extraction and integration have led to steady performance improvements in few-shot learning tasks due to generating richer features.
no code implementations • 5 Mar 2024 • Mohammad Rostami, Amin Ghariyazi, Hamed Dashti, Mohammad Hossein Rohban, Hamid R. Rabiee
This is because most existing methods are trained on separate datasets with different genes and cells, which limits their generalizability.
no code implementations • 27 Feb 2024 • Mohammad Rostami, Atik Faysal, Huaxia Wang, Avimanyu Sahoo, Ryan Antle
Generalizing effectively to both novel and training tasks remains a significant challenge in few-shot learning (FSL).
1 code implementation • 31 Jan 2024 • Mohammad Rostami
Our solution is based on stabilizing the learned internal distribution to enhance the model's generalization on new domains.
no code implementations • 27 Jan 2024 • Yuliang Cai, Mohammad Rostami
We propose a transformer-based CL framework focusing on learning tasks that involve both vision and language, known as Vision-and-Language (VaL) tasks.
no code implementations • 14 Jan 2024 • Mohammad Rostami
To further enhance the performance of unsupervised domain adaptation (UDA), we develop an additional technique which makes the internal distribution of the source domain more compact, thereby improving the model's ability to generalize in the target domain. We demonstrate that by increasing the margins between data representations for different classes in the embedding space, we can improve the model performance for UDA.
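The margin-enlarging idea above can be illustrated with a minimal sketch: a hinge penalty that pushes class centroids in the embedding space at least a fixed margin apart. This is an illustrative loss under assumed conventions, not the paper's exact objective; the function name and margin value are hypothetical.

```python
import torch

def interclass_margin_loss(features, labels, margin=10.0):
    """Hinge penalty encouraging class centroids in the embedding
    space to be at least `margin` apart (illustrative sketch only;
    the paper's actual objective may differ)."""
    classes = labels.unique()
    # mean embedding per class
    centroids = torch.stack([features[labels == c].mean(dim=0) for c in classes])
    loss = features.new_tensor(0.0)
    n_pairs = 0
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            dist = torch.norm(centroids[i] - centroids[j])
            # penalize only pairs closer than the margin
            loss = loss + torch.clamp(margin - dist, min=0.0)
            n_pairs += 1
    return loss / max(n_pairs, 1)
```

Minimizing such a term alongside the classification loss widens the gaps between class clusters, which is the intuition behind improved robustness to domain shift.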
no code implementations • 2 Jan 2024 • Mohammad Rostami, Dayuan Jian
By applying self-supervised learning, the algorithm learns to align the representations of event-based data with those from frame-based camera data, thereby facilitating knowledge transfer. Furthermore, the inclusion of uncorrelated conditioning ensures that the adapted model effectively distinguishes between event-based and conventional data, enhancing its ability to classify event-based images accurately. Through empirical experimentation and evaluation, we demonstrate that our algorithm surpasses existing approaches designed for the same purpose using two benchmarks.
1 code implementation • 2 Jan 2024 • Serban Stan, Mohammad Rostami
Semantic segmentation models trained on annotated data fail to generalize well when the input data distribution changes over extended time periods, requiring re-training to maintain performance.
no code implementations • 2 Jan 2024 • Navapat Nananukul, Hamid Soltanian-Zadeh, Mohammad Rostami
Our approach enables the transfer of knowledge from several annotated source domains to adapt a model for effective use in an unannotated target domain.
no code implementations • 28 Nov 2023 • Zizhao Hu, Shaochong Jia, Mohammad Rostami
Recently, diffusion models have been used successfully to fit distributions for cross-modal data translation and multimodal data generation.
1 code implementation • 19 Oct 2023 • Atik Faysal, Mohammad Rostami, Huaxia Wang, Avimanyu Sahoo, Ryan Antle
We use augmented samples as the query set during the training phase of the unsupervised meta-learning.
no code implementations • 5 Oct 2023 • Zizhao Hu, Mohammad Rostami
Learning new tasks accumulatively without forgetting remains a critical challenge in continual learning.
no code implementations • 27 Sep 2023 • Mohammad Rostami
This paper, which is part of the New Faculty Highlights Invited Speaker Program of AAAI'23, serves as a comprehensive survey of my research on transfer learning using embedding spaces.
1 code implementation • 15 Aug 2023 • Zeyu Shangguan, Mohammad Rostami
Specifically, we develop a hierarchical ternary classification region proposal network (HTRPN) to localize the potential unlabeled novel objects and assign them new objectness labels to distinguish these objects from the base training dataset classes.
no code implementations • 30 May 2023 • Mehrnoosh Mirtaheri, Mohammad Rostami, Aram Galstyan
Temporal knowledge graph (TKG) completion models typically rely on having access to the entire graph during training.
no code implementations • 28 May 2023 • Zizhao Hu, Mohammad Rostami
Most existing cross-modal generative methods based on diffusion models use guidance to provide control over the latent space to enable conditional generation across different modalities.
1 code implementation • 5 Apr 2023 • Viswanath Chadalapaka, Derek Nguyen, Joonwon Choi, Shaunak Joshi, Mohammad Rostami
In this paper, we study the problem of claim verification in the context of claims about fictional stories in a low-shot learning setting.
1 code implementation • 4 Apr 2023 • Tejas Srinivasan, Furong Jia, Mohammad Rostami, Jesse Thomason
We propose Improvise to Initialize (I2I), a continual learning algorithm that initializes Adapters for incoming tasks by distilling knowledge from previously-learned tasks' Adapters.
no code implementations • 26 Mar 2023 • Ruitong Sun, Mohammad Rostami
Melanoma is a prevalent lethal type of cancer that is treatable if diagnosed at early stages of development.
no code implementations • 25 Mar 2023 • Yuliang Cai, Jesse Thomason, Mohammad Rostami
The size and the computational load of fine-tuning large-scale pre-trained neural networks are becoming two major obstacles to adopting machine learning in many applications.
no code implementations • ICCV 2023 • Dayuan Jian, Mohammad Rostami
Event-based cameras offer reliable measurements for performing computer vision tasks in high-dynamic-range environments and during fast motion maneuvers.
1 code implementation • 22 Mar 2023 • Zizhao Hu, Mohammad Rostami
We propose a novel binarized regularization to facilitate learning of binary concepts to improve the quality of data generation in autoencoders.
1 code implementation • 18 Mar 2023 • Zeyu Shangguan, Mohammad Rostami
Our improved hierarchical sampling strategy for the region proposal network (RPN) also boosts the perception ability of the object detection model for large objects.
no code implementations • 29 Jan 2023 • Mengxi Wu, Mohammad Rostami
To navigate these obstacles, we develop the Denoising and Nuclear-Norm Wasserstein Adaptation Network (DNAN).
no code implementations • 29 Jan 2023 • Serban Stan, Mohammad Rostami
Our algorithm is based on updating the model such that the internal representation of data remains unbiased despite distributional shifts in the input space.
no code implementations • 2 Nov 2022 • Serban Stan, Mohammad Rostami
We rely on an approximation of the source latent features at adaptation time, and create a joint source/target embedding space by minimizing a distributional distance metric based on optimal transport.
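A widely used optimal-transport surrogate for the kind of distributional distance described above is the sliced Wasserstein distance, which compares one-dimensional projections of the two embedding distributions. The sketch below is a generic Monte-Carlo version for equal-sized batches; the paper's exact metric and estimator are assumptions here.

```python
import torch

def sliced_wasserstein(x, y, n_projections=64):
    """Monte-Carlo sliced Wasserstein-2 distance between two equal-sized
    batches of embeddings (generic OT surrogate, illustrative only)."""
    d = x.shape[1]
    # random unit directions on the sphere
    theta = torch.randn(n_projections, d)
    theta = theta / theta.norm(dim=1, keepdim=True)
    # project both batches onto each direction and sort the 1-D samples;
    # the 1-D Wasserstein distance matches sorted samples in order
    xp = (x @ theta.T).sort(dim=0).values
    yp = (y @ theta.T).sort(dim=0).values
    return ((xp - yp) ** 2).mean()
```

Minimizing this quantity between approximated source features and target features drives the two embedding distributions together, which is the mechanism behind the joint source/target space described above.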
no code implementations • 29 Sep 2022 • Mohammad Rostami
A dominant approach for addressing unsupervised domain adaptation is to map data points for the source and the target domains into an embedding space which is modeled as the output-space of a shared deep encoder.
1 code implementation • 10 Jul 2022 • Kleanthis Avramidis, Mohammad Rostami, Melinda Chang, Shrikanth Narayanan
Papilledema is an ophthalmic neurologic disorder in which increased intracranial pressure leads to swelling of the optic nerves.
1 code implementation • 18 Jun 2022 • Tejas Srinivasan, Ting-Yun Chang, Leticia Leonor Pinto Alva, Georgios Chochlakis, Mohammad Rostami, Jesse Thomason
Existing CL benchmarks have facilitated research on task adaptation and mitigating "catastrophic forgetting", but are limited to vision-only and language-only tasks.
no code implementations • NeurIPS 2021 • Mohammad Rostami
We develop an algorithm to address unsupervised domain adaptation (UDA) in continual learning (CL) settings.
no code implementations • 9 Oct 2021 • Mohammad Rostami, Aram Galstyan
Humans continually expand their learned knowledge to new domains and learn new concepts without any interference with past learned experiences.
no code implementations • ICCV 2021 • Mohammad Rostami, Leonidas Spinoulas, Mohamed Hussein, Joe Mathai, Wael Abd-Almageed
Advances in deep learning, combined with availability of large datasets, have led to impressive improvements in face presentation attack detection research.
no code implementations • 4 Jul 2021 • Mohammad Rostami, Aram Galstyan
We introduce a new domain adaptation method which induces large margins between different classes in an embedding space.
1 code implementation • 23 Jun 2021 • Serban Stan, Mohammad Rostami
Multi-source unsupervised domain adaptation (MUDA) is a framework to address the challenge of annotated data scarcity in a target domain via transferring knowledge from multiple annotated source domains.
1 code implementation • Findings (EMNLP) 2021 • Xisen Jin, Bill Yuchen Lin, Mohammad Rostami, Xiang Ren
The ability to continuously expand knowledge over time and utilize it to rapidly generalize to new tasks is a key feature of human linguistic intelligence.
1 code implementation • 2 Jan 2021 • Serban Stan, Mohammad Rostami
In this work, we develop an algorithm for UDA where the source domain data is inaccessible during target adaptation.
no code implementations • 1 Jan 2021 • Mohammad Rostami, Aram Galstyan
Large margins in the source domain help to reduce the effect of "domain shift" on the performance of a trained classifier in the target domain.
no code implementations • AKBC 2021 • Mehrnoosh Mirtaheri, Mohammad Rostami, Xiang Ren, Fred Morstatter, Aram Galstyan
Most real-world knowledge graphs are characterized by a long-tail relation frequency distribution where a significant fraction of relations occurs only a handful of times.
1 code implementation • 26 Sep 2020 • Serban Stan, Mohammad Rostami
We develop an algorithm for adapting a semantic segmentation model that is trained using a labeled source domain to generalize well in an unlabeled target domain.
1 code implementation • 1 Jul 2020 • Mohammad Rostami, Aram Galstyan
We develop an algorithm to improve the performance of a pre-trained model under concept shift without retraining the model from scratch when only unannotated samples of initial concepts are accessible.
no code implementations • 4 Jul 2019 • Alex Gabourie, Mohammad Rostami, Philip Pope, Soheil Kolouri, Kyungnam Kim
We address the problem of unsupervised domain adaptation (UDA) by learning a cross-domain agnostic embedding space, where the distance between the probability distributions of the source and target visual domains is minimized.
no code implementations • 10 Jun 2019 • Mohammad Rostami, Soheil Kolouri, Zak Murez, Yuri Owechko, Eric Eaton, Kyungnam Kim
Zero-shot learning (ZSL) is a framework to classify images belonging to unseen classes based solely on semantic information about these unseen classes.
no code implementations • 10 Jun 2019 • Mohammad Rostami, Soheil Kolouri, James McClelland, Praveen Pilly
After learning a concept, humans are also able to continually generalize their learned concepts to new domains by observing only a few labeled instances without any interference with the past learned knowledge.
no code implementations • 11 Mar 2019 • Mohammad Rostami, Soheil Kolouri, Praveen K. Pilly
We sample from this distribution and utilize experience replay to avoid forgetting and simultaneously accumulate new knowledge to the abstract distribution in order to couple the current task with past experience.
no code implementations • 10 Oct 2017 • David Isele, Mohammad Rostami, Eric Eaton
Knowledge transfer between tasks can improve the performance of learned models, but requires an accurate estimate of the inter-task relationships to identify the relevant knowledge to transfer.
no code implementations • 15 Sep 2017 • Mohammad Rostami, Soheil Kolouri, Kyungnam Kim, Eric Eaton
Lifelong machine learning methods acquire knowledge over a series of consecutive tasks, continually building upon their experience.
no code implementations • 12 Sep 2017 • Soheil Kolouri, Mohammad Rostami, Yuri Owechko, Kyungnam Kim
A classic approach toward zero-shot learning (ZSL) is to map the input domain to a set of semantically meaningful attributes that could be used later on to classify unseen classes of data (e.g., visual data).
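The attribute-mapping recipe above can be sketched in a few lines: map an input to the attribute space with a learned regressor, then assign the unseen class whose attribute vector is nearest. The function and variable names here are illustrative, not from the paper.

```python
import numpy as np

def zsl_predict(x, attr_model, class_attributes):
    """Classic attribute-based ZSL prediction (illustrative sketch).
    `attr_model` maps an input to a predicted attribute vector;
    `class_attributes` maps each unseen class name to its attribute vector."""
    a = attr_model(x)  # predicted attributes for the input
    names = list(class_attributes)
    # nearest-neighbor search in attribute space
    dists = [np.linalg.norm(a - np.asarray(class_attributes[c])) for c in names]
    return names[int(np.argmin(dists))]
```

Because classification happens in attribute space, classes never seen during training can still be recognized as long as their attribute descriptions are available.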
no code implementations • 22 Mar 2016 • Mohammad Rostami, Zhou Wang
However, sparse representation of a signal over a known dictionary is an ill-posed, combinatorial optimization problem.