no code implementations • 17 Mar 2024 • Vladimir Korviakov, Denis Koposov
Most computer vision architectures today are built upon well-known foundation operations: fully-connected layers, convolutions, and multi-head self-attention blocks.
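As a rough illustration (a minimal sketch, assuming PyTorch; the layer sizes and tensor shapes below are arbitrary), the three foundation operations named above look as follows:

```python
import torch
import torch.nn as nn

fc = nn.Linear(256, 256)                              # fully-connected layer
conv = nn.Conv2d(64, 64, kernel_size=3, padding=1)    # 3x3 convolution
attn = nn.MultiheadAttention(embed_dim=256, num_heads=8, batch_first=True)

x_tokens = torch.randn(1, 196, 256)                   # (batch, tokens, channels)
x_image = torch.randn(1, 64, 56, 56)                  # (batch, channels, height, width)

y_fc = fc(x_tokens)                                    # per-token linear projection
y_conv = conv(x_image)                                 # spatial convolution
y_attn, _ = attn(x_tokens, x_tokens, x_tokens)         # multi-head self-attention over tokens
```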
no code implementations • 29 Aug 2022 • Eugene Golikov, Eduard Pokonechnyy, Vladimir Korviakov
A seminal work [Jacot et al., 2018] demonstrated that training a neural network under a specific parameterization is equivalent to performing a particular kernel method as the width goes to infinity.
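For reference, the kernel in question is the Neural Tangent Kernel (NTK). A sketch of the standard statement, with f(x; θ) the network output and θ_t the parameters under gradient-flow training:

```latex
% Neural Tangent Kernel of a network f(x; \theta) at training time t
\Theta_t(x, x') = \nabla_\theta f(x; \theta_t)^\top \, \nabla_\theta f(x'; \theta_t)

% Under the NTK parameterization, as the width tends to infinity the kernel
% becomes deterministic and constant in t, \Theta_t \to \Theta_\infty, so
% gradient-flow training of the network is equivalent to kernel regression
% (kernel gradient descent) with the fixed kernel \Theta_\infty.
```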
9 code implementations • 4 Sep 2021 • Alexey Letunovskiy, Vladimir Korviakov, Vladimir Polovnikov, Anastasiia Kargapoltseva, Ivan Mazurenko, Yepan Xiong
To address this problem we propose a measure of the hardware efficiency of a neural architecture search space, the matrix efficiency measure (MEM); a search space comprising hardware-efficient operations; a latency-aware scaling method; and ISyNet, a set of architectures designed to be both fast on specialized neural processing unit (NPU) hardware and accurate.
Ranked #20 on Neural Architecture Search on ImageNet
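The abstract above does not spell out how MEM is computed, so the following is a hypothetical illustration only, not the paper's definition: a generic "matrix efficiency"-style score for a candidate network, taken here as the fraction of its FLOPs that map onto dense matrix multiplications, the work NPUs execute most efficiently.

```python
# Hypothetical sketch: score a candidate network by the share of its compute
# that lowers to dense matrix multiplications (assumed proxy, not the paper's MEM).
def matrix_efficiency(ops):
    """ops: list of (name, total_flops, matmul_flops) for a candidate network."""
    total = sum(f for _, f, _ in ops)
    matmul = sum(m for _, _, m in ops)
    return matmul / total if total else 0.0

# Example: a convolution lowered to GEMM counts fully as matmul work,
# while an elementwise activation contributes none.
ops = [("conv3x3", 1.2e9, 1.2e9), ("relu", 5.0e6, 0.0), ("fc", 2.0e8, 2.0e8)]
print(f"matrix efficiency ~ {matrix_efficiency(ops):.3f}")
```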
no code implementations • 9 Aug 2021 • Anuar Taskynov, Vladimir Korviakov, Ivan Mazurenko, Yepan Xiong
Nowadays, Deep Learning is widely used in many economic, technical, and scientific areas of human interest.