Search Results for author: Michel Crucianu

Found 10 papers, 2 papers with code

Semantic Generative Augmentations for Few-Shot Counting

1 code implementation · 26 Oct 2023 · Perla Doubinsky, Nicolas Audebert, Michel Crucianu, Hervé Le Borgne

This requires generating images that correspond to a given input number of objects.

Ranked #5 on Object Counting on FSC147 (using extra training data)

Image Classification · Object Counting
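The augmentation above relies on prompt-based image generation. A minimal, hypothetical sketch of count-conditioned generation with a text-to-image diffusion model, assuming the Hugging Face diffusers library and a GPU; the model id, prompt template and object class are placeholders, not the paper's pipeline:

```python
# Hypothetical sketch: generate an image for a target object count by
# putting the count in the text prompt. Model id, prompt template and
# object class are placeholders; the paper's augmentation strategy is richer.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

target_count = 7
prompt = f"a photo of exactly {target_count} apples on a wooden table"
image = pipe(prompt).images[0]
image.save(f"augmented_{target_count}_apples.png")
```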

Multimodal Representations for Teacher-Guided Compositional Visual Reasoning

no code implementations · 24 Oct 2023 · Wafa Aissa, Marin Ferecatu, Michel Crucianu

Neural Module Networks (NMN) are a compelling method for visual question answering, enabling the translation of a question into a program consisting of a series of reasoning sub-tasks that are sequentially executed on the image to produce an answer.

Question Answering · Visual Question Answering +1
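A toy illustration of the NMN control flow described above: the question is compiled into a short program whose modules run one after another on an image representation. Module names, the program format and the toy image are invented for the example, not the paper's architecture.

```python
# Toy Neural Module Network executor: a question is translated into a program
# (a sequence of sub-task modules) executed step by step on the image.

def find(image, state, arg):
    # return the regions of `image` whose labels contain `arg` (e.g. "red")
    return [r for r in image["regions"] if arg in r["labels"]]

def filter_(image, state, arg):
    # keep previously selected regions that also match `arg`
    return [r for r in state if arg in r["labels"]]

def count(image, state, arg):
    # final module: answer with the number of selected regions
    return len(state)

MODULES = {"find": find, "filter": filter_, "count": count}

def execute(program, image):
    """Run the modules sequentially, threading the intermediate state."""
    state = None
    for op, arg in program:
        state = MODULES[op](image, state, arg)
    return state

# "How many red cubes are there?" -> find(red), filter(cube), count
image = {"regions": [{"labels": {"red", "cube"}}, {"labels": {"red", "ball"}}]}
print(execute([("find", "red"), ("filter", "cube"), ("count", None)], image))  # 1
```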

Curriculum Learning for Compositional Visual Reasoning

no code implementations · 27 Mar 2023 · Wafa Aissa, Marin Ferecatu, Michel Crucianu

Visual Question Answering (VQA) is a complex task requiring large datasets and expensive training.

Question Answering · Visual Question Answering +1

Why is the prediction wrong? Towards underfitting case explanation via meta-classification

no code implementations · 20 Feb 2023 · Sheng Zhou, Pierre Blanchart, Michel Crucianu, Marin Ferecatu

In this paper we present a heuristic method to provide individual explanations for those elements of a dataset (data points) that are wrongly predicted by a given classifier.
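A rough sketch of the meta-classification idea on toy data, assuming scikit-learn: the meta-labels mark the points the base classifier gets wrong, and the meta-model's feature importances hint at what characterises those failures. This illustrates the general principle only, not the paper's heuristic.

```python
# Explain misclassified points via meta-classification: train a second ("meta")
# classifier to predict whether the base classifier errs on a given point,
# then read off which features drive those errors. Toy data, illustrative models.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
base = LogisticRegression(max_iter=1000).fit(X, y)

# Meta-labels: 1 if the base classifier is wrong on the point, 0 otherwise.
meta_y = (base.predict(X) != y).astype(int)
meta = RandomForestClassifier(random_state=0).fit(X, meta_y)

# Per-feature importances suggest which features characterise the error region.
for i, imp in enumerate(meta.feature_importances_):
    print(f"feature {i}: importance {imp:.3f}")
```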

Multi-Attribute Balanced Sampling for Disentangled GAN Controls

1 code implementation · 28 Oct 2021 · Perla Doubinsky, Nicolas Audebert, Michel Crucianu, Hervé Le Borgne

We propose to address disentanglement by subsampling the generated data to remove over-represented co-occurring attributes, thus balancing the semantics of the dataset before training the classifiers.

Attribute · Disentanglement
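A small pandas sketch of the balanced subsampling step described above: every joint attribute combination is reduced to the size of the rarest one before the attribute classifiers are trained. Attribute names and the toy table are invented for the example.

```python
# Subsample generated data so that joint attribute combinations are equally
# represented; over-represented co-occurring attributes are thinned out.
import pandas as pd

df = pd.DataFrame({
    "gender":  ["m", "m", "m", "f", "f", "m", "f", "m"],
    "glasses": [1,   1,   1,   0,   1,   1,   0,   0],
})

# Size every (gender, glasses) cell down to the rarest combination.
cell_size = df.groupby(["gender", "glasses"]).size().min()
balanced = (
    df.groupby(["gender", "glasses"], group_keys=False)
      .apply(lambda g: g.sample(n=cell_size, random_state=0))
)
print(balanced.value_counts(["gender", "glasses"]))
```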

Zero-shot Learning with Deep Neural Networks for Object Recognition

no code implementations · 5 Feb 2021 · Yannick Le Cacheux, Hervé Le Borgne, Michel Crucianu

The general approach is to learn a mapping from visual data to semantic prototypes, then use it at inference time to classify visual samples from the class prototypes only.

Object Recognition · Zero-Shot Learning
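A minimal sketch of that pipeline on random data, assuming scikit-learn: a ridge regression maps visual features to the semantic space using seen classes, and unseen-class samples are then assigned to the nearest class prototype. Dimensions and the choice of regressor are illustrative, not the paper's model.

```python
# Zero-shot classification sketch: visual -> semantic mapping trained on seen
# classes, nearest-prototype classification among unseen classes at inference.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
d_vis, d_sem, n = 512, 300, 1000

X_seen = rng.normal(size=(n, d_vis))            # visual features (seen classes)
S_seen = rng.normal(size=(n, d_sem))            # semantic prototype of each sample's class
mapping = Ridge(alpha=1.0).fit(X_seen, S_seen)  # visual -> semantic regression

# Inference: project a test sample and pick the closest unseen-class prototype.
proto_unseen = rng.normal(size=(5, d_sem))      # one prototype per unseen class
x_test = rng.normal(size=(1, d_vis))
s_pred = mapping.predict(x_test)
s_pred /= np.linalg.norm(s_pred)
proto = proto_unseen / np.linalg.norm(proto_unseen, axis=1, keepdims=True)
print("predicted class:", int(np.argmax(proto @ s_pred.T)))
```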

Using Sentences as Semantic Representations in Large Scale Zero-Shot Learning

no code implementations · 6 Oct 2020 · Yannick Le Cacheux, Hervé Le Borgne, Michel Crucianu

Zero-shot learning aims to recognize instances of unseen classes, for which no visual instance is available during training, by learning multimodal relations between samples from seen classes and corresponding class semantic representations.

Word Embeddings · Zero-Shot Learning
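One way to obtain such sentence-based class representations is with an off-the-shelf sentence encoder. The sketch below assumes the sentence-transformers package; the model name and class descriptions are placeholders, not the ones used in the paper.

```python
# Use sentence embeddings as class semantic prototypes for zero-shot matching.
from sentence_transformers import SentenceTransformer
import numpy as np

encoder = SentenceTransformer("all-MiniLM-L6-v2")

class_sentences = {
    "zebra": "A zebra is a striped, horse-like animal living in the savanna.",
    "whale": "A whale is a very large marine mammal that lives in the ocean.",
}
prototypes = encoder.encode(list(class_sentences.values()), normalize_embeddings=True)

# A projected visual sample (faked here as a perturbed prototype) is matched
# to the closest sentence-based class prototype by cosine similarity.
s_pred = prototypes[0] + 0.01 * np.random.default_rng(0).normal(size=prototypes.shape[1])
s_pred /= np.linalg.norm(s_pred)
scores = prototypes @ s_pred
print("predicted class:", list(class_sentences)[int(np.argmax(scores))])
```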

Aggregating Image and Text Quantized Correlated Components

no code implementations · CVPR 2016 · Thi Quynh Nhi Tran, Hervé Le Borgne, Michel Crucianu

To address this problem, we first put forward a new representation method that aggregates the information provided by the projections of both modalities on their aligned subspaces.

Cross-Modal Retrieval +2
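A simple sketch of projecting paired image and text features onto aligned subspaces with CCA and aggregating the two projections, assuming scikit-learn and random toy data; the quantization of correlated components used in the paper is omitted here.

```python
# Project image and text features onto CCA-aligned subspaces and concatenate
# the two projections into one multimodal representation (no quantization).
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n, d_img, d_txt, k = 200, 64, 50, 10

X_img = rng.normal(size=(n, d_img))          # image features
X_txt = rng.normal(size=(n, d_txt))          # text features of the same items

cca = CCA(n_components=k).fit(X_img, X_txt)
Z_img, Z_txt = cca.transform(X_img, X_txt)   # projections on the aligned subspaces

# Aggregate both projections into a single representation per item.
Z = np.concatenate([Z_img, Z_txt], axis=1)
print(Z.shape)  # (200, 20)
```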
