1 code implementation • 15 Sep 2023 • Amir Rahimi, Vanessa D'Amario, Moyuru Yamada, Kentaro Takemoto, Tomotake Sasaki, Xavier Boix
We demonstrate that this result is independent of the similarity between the training and testing data and applies to well-known families of neural network architectures for VQA (i.e., monolithic architectures and neural module networks).
1 code implementation • 27 Jan 2022 • Moyuru Yamada, Vanessa D'Amario, Kentaro Takemoto, Xavier Boix, Tomotake Sasaki
We reveal that Neural Module Networks (NMNs), i.e., question-specific compositions of modules that each tackle a sub-task, achieve systematic generalization performance better than or similar to that of conventional Transformers, even though NMNs' modules are CNN-based.
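The compositional idea behind NMNs can be illustrated with a toy sketch: a question is parsed into a "program" of modules that are chained over the input. This is only a minimal illustration, not the paper's implementation; the module names (`find_red`, `count`) and the pixel-threshold logic are invented for this example, whereas real NMN modules are learned CNNs.

```python
import numpy as np

def find_red(image):
    # Hypothetical attention module: soft map over locations whose
    # red channel is high (real NMN modules are learned CNNs).
    return (image[..., 0] > 0.5).astype(float)

def count(attention):
    # Hypothetical answer module: count the attended locations.
    return int(attention.sum())

def run_program(program, image):
    # A question-specific program is a composition of modules:
    # the output of each module feeds the next.
    out = image
    for module in program:
        out = module(out)
    return out

# "How many red objects are there?" -> compose find_red, then count.
program = [find_red, count]

image = np.zeros((4, 4, 3))
image[0, 0, 0] = 1.0  # one red pixel
image[2, 3, 0] = 1.0  # another red pixel
answer = run_program(program, image)  # counts the two red locations
```

A monolithic architecture would instead map (image, question) to an answer with one network; the sketch above makes the sub-task decomposition explicit, which is what the paper links to systematic generalization.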
no code implementations • 13 Jul 2021 • Vanessa D'Amario, Sanjana Srivastava, Tomotake Sasaki, Xavier Boix
Datasets often contain input dimensions that are unnecessary to predict the output label, e.g., the background in object recognition, which lead to more trainable parameters.
1 code implementation • NeurIPS 2021 • Vanessa D'Amario, Tomotake Sasaki, Xavier Boix
Neural Module Networks (NMNs) aim at Visual Question Answering (VQA) via composition of modules that tackle a sub-task.
no code implementations • 1 Jan 2021 • Vanessa D'Amario, Sanjana Srivastava, Tomotake Sasaki, Xavier Boix
In this paper, we investigate the impact of unnecessary input dimensions on one of the central issues of machine learning: the number of training examples needed to achieve high generalization performance, which we refer to as the network's data efficiency.
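The setup can be sketched concretely: append noise dimensions that carry no label information to each input and observe that the first layer of the network grows accordingly. This is a minimal sketch under assumed details (a one-hidden-layer MLP, Gaussian noise dimensions, a linearly separable label); it is not the paper's exact experimental protocol.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n, signal_dim, noise_dim):
    # Labels depend only on the signal dimensions; the noise
    # dimensions are "unnecessary" in the sense of the paper.
    X_signal = rng.normal(size=(n, signal_dim))
    X_noise = rng.normal(size=(n, noise_dim))
    y = (X_signal.sum(axis=1) > 0).astype(int)
    return np.hstack([X_signal, X_noise]), y

def num_params(input_dim, hidden=16):
    # Parameter count of a one-hidden-layer MLP with a scalar output:
    # every unnecessary input dimension adds `hidden` extra weights.
    return input_dim * hidden + hidden + hidden * 1 + 1

# Same task, with and without 60 unnecessary dimensions.
X_clean, y_clean = make_dataset(100, signal_dim=4, noise_dim=0)
X_noisy, y_noisy = make_dataset(100, signal_dim=4, noise_dim=60)
```

Data efficiency is then measured by how many training examples each variant needs to reach a target generalization accuracy; the noisy variant has more trainable parameters for the same underlying task.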
1 code implementation • 10 Dec 2019 • Stephen Casper, Xavier Boix, Vanessa D'Amario, Ling Guo, Martin Schrimpf, Kasper Vinken, Gabriel Kreiman
We identify two distinct types of "frivolous" units that proliferate when the network's width is increased: prunable units, which can be dropped from the network without significantly changing its output, and redundant units, whose activities can be expressed as a linear combination of those of other units.
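The redundancy criterion above has a simple operational reading: a unit is redundant if its activations over a batch can be reconstructed, via least squares, from the activations of the other units. The sketch below, with invented toy activations, illustrates that test; it is an assumption-laden simplification, not the paper's measurement procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

def redundancy_residual(acts, j):
    """Residual of reconstructing unit j's activations (column j of
    `acts`, shape [batch, units]) as a linear combination of the other
    units plus a bias. A near-zero residual marks unit j as redundant
    in the linear-combination sense described above."""
    others = np.delete(acts, j, axis=1)
    A = np.hstack([others, np.ones((acts.shape[0], 1))])  # bias column
    coef, *_ = np.linalg.lstsq(A, acts[:, j], rcond=None)
    return float(np.linalg.norm(acts[:, j] - A @ coef))

# Toy layer of 4 units: unit 3 is exactly 2*unit0 - unit1, so it is
# redundant; units 0-2 are independent Gaussian activations.
acts = rng.normal(size=(200, 3))
acts = np.hstack([acts, 2 * acts[:, [0]] - acts[:, [1]]])
```

Prunable units would instead be detected by zeroing a unit and checking that the network's output is unchanged; the two criteria are distinct, since a redundant unit can still matter if its contribution is not re-absorbed by the remaining units.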