no code implementations • 10 Sep 2020 • Akhmedkhan Shabanov, Ilya Krotov, Nikolay Chinaev, Vsevolod Poletaev, Sergei Kozlukov, Igor Pasechnik, Bulat Yakupov, Artsiom Sanakoyeu, Vadim Lebedev, Dmitry Ulyanov
Consumer-level depth cameras and depth sensors embedded in mobile devices enable numerous applications, such as AR games and face identification.
no code implementations • 23 May 2019 • Vadim Lebedev, Vladimir Ivashkin, Irina Rudenko, Alexander Ganshin, Alexander Molchanov, Sergey Ovcharenko, Ruslan Grokhovetskiy, Ivan Bushmarinov, Dmitry Solomentsev
Precipitation nowcasting is a short-range forecast of rain/snow (up to 2 hours), often displayed on top of a geographical map by weather services.
no code implementations • 28 Dec 2018 • Vladimir Ivashkin, Vadim Lebedev
Precipitation nowcasting using neural networks and ground-based radars has become one of the key components of modern weather prediction services, but it is limited to the regions covered by ground-based radars.
no code implementations • 13 Jun 2018 • Vadim Lebedev, Artem Babenko, Victor Lempitsky
In this work we introduce impostor networks, an architecture that performs fine-grained recognition with high accuracy using a lightweight convolutional network, making it particularly suitable for fine-grained recognition on low-power and non-GPU-enabled platforms.
no code implementations • NeurIPS 2016 • Oleg Grinchuk, Vadim Lebedev, Victor Lempitsky
We propose a new approach to designing visual markers (analogous to QR-codes, markers for augmented reality, and robotic fiducial tags) based on the advances in deep generative networks.
10 code implementations • 10 Mar 2016 • Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, Victor Lempitsky
Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example.
no code implementations • CVPR 2016 • Vadim Lebedev, Victor Lempitsky
We revisit the idea of brain damage, i.e., the pruning of the coefficients of a neural network, and suggest how brain damage can be modified and used to speed up convolutional layers.
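The pruning idea can be illustrated with a minimal magnitude-based sketch (a simplification for illustration only; the paper's group-wise brain-damage criterion is more involved, and `prune_weights` is a hypothetical helper, not from the paper):

```python
import numpy as np

def prune_weights(weights, sparsity):
    """Zero out the smallest-magnitude coefficients of a weight tensor.

    A plain magnitude-pruning sketch: find the threshold below which
    the requested fraction of coefficients falls, and set those to zero.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value serves as the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    pruned = weights.copy()
    pruned[np.abs(weights) <= threshold] = 0.0
    return pruned

rng = np.random.default_rng(0)
# A conv kernel laid out as (out_channels, in_channels, kH, kW)
w = rng.normal(size=(64, 3, 3, 3))
pw = prune_weights(w, sparsity=0.5)
```

Zeroed coefficients can then be skipped by a sparse convolution routine, which is where the actual speedup comes from.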
10 code implementations • 19 Dec 2014 • Vadim Lebedev, Yaroslav Ganin, Maksim Rakhuba, Ivan Oseledets, Victor Lempitsky
We propose a simple two-step approach for speeding up convolution layers within large convolutional neural networks based on tensor decomposition and discriminative fine-tuning.
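A back-of-the-envelope sketch of why the decomposition helps: a rank-R CP decomposition replaces one T×S×d×d convolution with a pipeline of four small convolutions (1×1 channel reduction, d×1 and 1×d per-rank spatial filters, 1×1 channel expansion), shrinking the parameter count accordingly. The function names below are illustrative, not from the paper:

```python
def conv_params(c_in, c_out, kh, kw):
    """Parameter count of a dense kh x kw convolution."""
    return c_in * c_out * kh * kw

def cp_params(c_in, c_out, d, rank):
    """Parameter count of the rank-R CP-decomposed pipeline:
    1x1 (c_in -> R), d x 1 per-rank, 1 x d per-rank, 1x1 (R -> c_out)."""
    return c_in * rank + rank * d + rank * d + rank * c_out

full = conv_params(256, 256, 3, 3)   # dense 3x3 layer
low = cp_params(256, 256, 3, 64)     # rank-64 decomposition
ratio = full / low                   # parameter (and roughly FLOP) reduction
```

For these example sizes the decomposed pipeline uses roughly 18× fewer parameters; the discriminative fine-tuning step then recovers the accuracy lost in the approximation.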