no code implementations • 17 Feb 2024 • Neta Glazer, Aviv Navon, Aviv Shamsian, Ethan Fetaya
One of the challenges in applying reinforcement learning to complex real-world environments lies in providing the agent with a sufficiently detailed reward function.
no code implementations • 6 Feb 2024 • Aviv Shamsian, Aviv Navon, David W. Zhang, Yan Zhang, Ethan Fetaya, Gal Chechik, Haggai Maron
Learning in deep weight spaces (DWS), where neural networks process the weights of other neural networks, is an emerging research direction, with applications to 2D and 3D neural fields (INRs, NeRFs), as well as making inferences about other types of neural networks.
no code implementations • 15 Nov 2023 • Aviv Shamsian, David W. Zhang, Aviv Navon, Yan Zhang, Miltiadis Kofinas, Idan Achituve, Riccardo Valperga, Gertjan J. Burghouts, Efstratios Gavves, Cees G. M. Snoek, Ethan Fetaya, Gal Chechik, Haggai Maron
Learning in weight spaces, where neural networks process the weights of other deep neural networks, has emerged as a promising research direction with applications in various fields, from analyzing and editing neural fields and implicit neural representations, to network pruning and quantization.
no code implementations • 30 Oct 2023 • Daniel Eitan, Menachem Pirchi, Neta Glazer, Shai Meital, Gil Ayach, Gidon Krendel, Aviv Shamsian, Aviv Navon, Gil Hetz, Joseph Keshet
In this work, we introduce a novel approach that integrates a domain-specific or secondary language model (LM) into a general-purpose LM.
1 code implementation • 20 Oct 2023 • Aviv Navon, Aviv Shamsian, Ethan Fetaya, Gal Chechik, Nadav Dym, Haggai Maron
To accelerate the alignment process and improve its quality, we propose a novel framework aimed at learning to solve the weight alignment problem, which we name Deep-Align.
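The weight alignment problem above asks for a permutation of hidden neurons that makes two networks' weights match as closely as possible. A minimal brute-force sketch for a single hidden layer (illustrative only; Deep-Align *learns* to predict the alignment rather than searching for it):

```python
import itertools
import numpy as np

def align_hidden_layer(W1_a, W2_a, W1_b, W2_b):
    """Brute-force the hidden-neuron permutation p minimizing
    ||W1_a - W1_b[p]||^2 + ||W2_a - W2_b[:, p]||^2, feasible only
    for tiny layers. W1: (hidden, in), W2: (out, hidden)."""
    h = W1_a.shape[0]
    best_perm, best_cost = None, np.inf
    for perm in itertools.permutations(range(h)):
        p = list(perm)
        cost = (np.sum((W1_a - W1_b[p, :]) ** 2)
                + np.sum((W2_a - W2_b[:, p]) ** 2))
        if cost < best_cost:
            best_cost, best_perm = cost, p
    return best_perm, best_cost

# Toy check: network B is network A with hidden units 0 and 2 swapped.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((3, 4))
W2 = rng.standard_normal((2, 3))
swap = [2, 1, 0]
perm, cost = align_hidden_layer(W1, W2, W1[swap, :], W2[:, swap])
```

The exponential cost of this search is exactly what motivates learning the alignment instead.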
no code implementations • 13 Sep 2023 • Aviv Navon, Aviv Shamsian, Neta Glazer, Gill Hetz, Joseph Keshet
Open vocabulary keyword spotting is a crucial and challenging task in automatic speech recognition (ASR) that focuses on detecting user-defined keywords within a spoken utterance.
no code implementations • 4 Jul 2023 • Guy Berger, Aviv Navon, Ethan Fetaya
In computer vision and machine learning, a crucial challenge is to lower the computation and memory demands for neural network inference.
1 code implementation • 31 Jan 2023 • Aviv Shamsian, Aviv Navon, Neta Glazer, Kenji Kawaguchi, Gal Chechik, Ethan Fetaya
Auxiliary learning is an effective method for enhancing the generalization capabilities of trained models, particularly when dealing with small datasets.
1 code implementation • 30 Jan 2023 • Aviv Navon, Aviv Shamsian, Idan Achituve, Ethan Fetaya, Gal Chechik, Haggai Maron
Designing machine learning architectures for processing neural networks in their raw weight matrix form is a newly introduced research direction.
1 code implementation • 22 Jun 2022 • Eyal Betzalel, Coby Penso, Aviv Navon, Ethan Fetaya
In this work, we study the evaluation metrics of generative models by generating a high-quality synthetic dataset on which we can estimate classical metrics for comparison.
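One classical metric in this family is the Fréchet (FID-style) distance between Gaussian fits to real and generated features. A sketch restricted to diagonal covariances (an assumption made here to avoid a matrix square root; the standard formula uses full covariance matrices):

```python
import numpy as np

def fid_diagonal(feats_a, feats_b):
    """Fréchet distance between Gaussian fits with diagonal covariance.
    feats_*: (n_samples, dim) feature arrays."""
    mu_a, mu_b = feats_a.mean(0), feats_b.mean(0)
    var_a, var_b = feats_a.var(0), feats_b.var(0)
    mean_term = np.sum((mu_a - mu_b) ** 2)
    # Tr(Ca + Cb - 2 (Ca Cb)^{1/2}) simplifies for diagonal covariances:
    cov_term = np.sum((np.sqrt(var_a) - np.sqrt(var_b)) ** 2)
    return mean_term + cov_term

rng = np.random.default_rng(1)
x = rng.standard_normal((1000, 8))
d_same = fid_diagonal(x, x)         # identical samples -> distance 0
d_shift = fid_diagonal(x, x + 1.0)  # shift of 1 per dim -> distance ~8
```

On a synthetic dataset with known generating distribution, such closed forms let the metric be checked against ground truth.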
2 code implementations • 2 Feb 2022 • Aviv Navon, Aviv Shamsian, Idan Achituve, Haggai Maron, Kenji Kawaguchi, Gal Chechik, Ethan Fetaya
In this paper, we propose viewing the gradients combination step as a bargaining game, where tasks negotiate to reach an agreement on a joint direction of parameter update.
Ranked #1 on Multi-Task Learning on Cityscapes test
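In the bargaining view, the paper characterizes the solution by weights alpha satisfying (G G^T) alpha = 1/alpha element-wise, where the Gram matrix holds task-gradient inner products. A simplified sketch using a damped fixed-point iteration (the paper solves for alpha via a sequence of convex approximations, so this solver is an assumption for illustration):

```python
import numpy as np

def nash_mtl_direction(grads, n_iter=500):
    """Sketch of gradient combination as a bargaining game: solve
    (G G^T) alpha = 1/alpha element-wise by damped fixed-point
    iteration, then return the joint update sum_i alpha_i g_i.
    grads: (n_tasks, n_params)."""
    gram = grads @ grads.T              # task-gradient inner products
    alpha = np.ones(len(grads))
    for _ in range(n_iter):
        alpha = 0.5 * alpha + 0.5 / np.maximum(gram @ alpha, 1e-8)
    return alpha, alpha @ grads

# Two orthogonal task gradients with different magnitudes: the larger
# gradient is down-weighted so each task contributes equally.
g = np.array([[2.0, 0.0],
              [0.0, 1.0]])
alpha, update = nash_mtl_direction(g)   # alpha ~ [0.5, 1.0]
```

Note how the bargaining solution rescales each task's gradient to unit contribution, which is the scale-invariance property that motivates the game-theoretic framing.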
1 code implementation • NeurIPS 2021 • Idan Achituve, Aviv Shamsian, Aviv Navon, Gal Chechik, Ethan Fetaya
A key challenge in this setting is to learn effectively across clients even though each client has unique data that is often limited in size.
Ranked #1 on Personalized Federated Learning on CIFAR-100
2 code implementations • 8 Mar 2021 • Aviv Shamsian, Aviv Navon, Ethan Fetaya, Gal Chechik
In this approach, a central hypernetwork model is trained to generate a set of models, one model for each client.
Ranked #1 on Personalized Federated Learning on CIFAR-10
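The core mechanism is a shared hypernetwork that maps a per-client embedding to that client's personalized weights. A toy numpy sketch (sizes and the two-layer generator are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(0)
IN_DIM, OUT_DIM, EMBED_DIM, HIDDEN = 4, 3, 8, 16  # illustrative sizes

class HyperNet:
    """Toy central hypernetwork: client embedding -> client model weights.
    One shared MLP generates a (OUT_DIM, IN_DIM) matrix per client."""
    def __init__(self, n_clients=5):
        self.W1 = rng.standard_normal((HIDDEN, EMBED_DIM)) * 0.1
        self.W2 = rng.standard_normal((OUT_DIM * IN_DIM, HIDDEN)) * 0.1
        # one learnable embedding per client
        self.embeddings = {c: rng.standard_normal(EMBED_DIM)
                           for c in range(n_clients)}

    def client_weights(self, client_id):
        h = np.tanh(self.W1 @ self.embeddings[client_id])
        return (self.W2 @ h).reshape(OUT_DIM, IN_DIM)

hn = HyperNet()
x = rng.standard_normal(IN_DIM)
# each client gets its own personalized linear model from shared parameters
y0 = hn.client_weights(0) @ x
y1 = hn.client_weights(1) @ x
```

Because personalization lives entirely in the small embeddings, knowledge is shared through the hypernetwork parameters while each client's model stays distinct.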
1 code implementation • 15 Feb 2021 • Idan Achituve, Aviv Navon, Yochai Yemini, Gal Chechik, Ethan Fetaya
As a result, our method scales well with both the number of classes and data size.
1 code implementation • ICLR 2021 • Aviv Navon, Aviv Shamsian, Gal Chechik, Ethan Fetaya
Here, we tackle the problem of learning the entire Pareto front, with the capability of selecting a desired operating point on the front after training.
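The idea of selecting an operating point after training can be illustrated with linear scalarization on a toy two-objective problem, solved here in closed form (the paper instead trains a preference-conditioned hypernetwork that outputs model weights):

```python
import numpy as np

def scalarized_min(r1, r2):
    """Closed-form minimizer of r1*(x-1)^2 + r2*(x+1)^2. Sweeping the
    preference ray (r1, r2) traces the entire Pareto front between the
    two objectives' optima at x = 1 and x = -1."""
    return (r1 - r2) / (r1 + r2)

# sweep preferences -> points along the front, chosen at "test time"
xs = [scalarized_min(r, 1.0 - r) for r in np.linspace(0.1, 0.9, 9)]
```

Each preference vector picks out one Pareto-optimal solution, which is exactly the desired-operating-point selection described above.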
1 code implementation • ICLR 2021 • Aviv Navon, Idan Achituve, Haggai Maron, Gal Chechik, Ethan Fetaya
Two main challenges arise in this multi-task learning setting: (i) designing useful auxiliary tasks; and (ii) combining auxiliary tasks into a single coherent loss.
1 code implementation • 10 Dec 2018 • Aviv Navon, Saharon Rosset
This setting naturally induces a group structure over the coefficient matrix, in which every explanatory variable corresponds to a set of related coefficients.
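A standard way to exploit such a group structure is a group-lasso penalty that sums the Euclidean norms of each variable's coefficient group; its proximal operator is block soft-thresholding, which zeroes out whole groups at once. A minimal sketch (treating rows of the coefficient matrix as groups is an assumption for illustration):

```python
import numpy as np

def group_lasso_prox(B, lam):
    """Proximal operator of lam * sum_j ||B[j, :]||_2, where each row
    of the coefficient matrix B (one explanatory variable, all its
    related coefficients) is a group: block soft-thresholding."""
    norms = np.linalg.norm(B, axis=1, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return scale * B

B = np.array([[3.0, 4.0],    # row norm 5: shrunk but kept
              [0.3, 0.4]])   # row norm 0.5: zeroed out entirely
B_new = group_lasso_prox(B, lam=1.0)
```

Entire rows are either shrunk or eliminated together, so a variable is selected or discarded for all of its related coefficients at once.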