no code implementations • 23 Apr 2024 • Michal Nazarczuk, Jan Kristof Behrens, Karla Stepanova, Matej Hoffmann, Krystian Mikolajczyk
Embodied reasoning systems integrate robotic hardware and cognitive processes to perform complex tasks, typically in response to a natural language query about a specific physical environment.
no code implementations • 20 Dec 2023 • Richard Shaw, Jifei Song, Arthur Moreau, Michal Nazarczuk, Sibi Catley-Chandar, Helisa Dhamo, Eduardo Perez-Pellitero
We model the dynamics of a scene using a tunable MLP, which learns the deformation field from a canonical space to a set of 3D Gaussians per frame.
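The idea of an MLP mapping canonical positions to per-frame deformations can be sketched as below. This is a minimal illustration, not the paper's architecture: the layer sizes, time encoding, and random (untrained) weights are all assumptions, and only the Gaussian centres are deformed here (the paper's deformation field would also cover the other Gaussian parameters).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes -- not taken from the paper.
IN_DIM = 4   # canonical (x, y, z) plus frame time t
HIDDEN = 32
OUT_DIM = 3  # predicted displacement of each Gaussian centre

# Randomly initialised two-layer MLP; training is omitted in this sketch.
W1 = rng.normal(0.0, 0.1, (IN_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(0.0, 0.1, (HIDDEN, OUT_DIM))
b2 = np.zeros(OUT_DIM)

def deform(canonical_xyz: np.ndarray, t: float) -> np.ndarray:
    """Map canonical Gaussian centres to their positions at time t."""
    x = np.concatenate(
        [canonical_xyz, np.full((len(canonical_xyz), 1), t)], axis=1)
    h = np.tanh(x @ W1 + b1)       # hidden layer
    offset = h @ W2 + b2           # displacement from canonical space
    return canonical_xyz + offset  # deformed positions for this frame

# 100 Gaussian centres in canonical space, queried at one frame time.
centres = rng.normal(0.0, 1.0, (100, 3))
deformed = deform(centres, t=0.5)
print(deformed.shape)  # (100, 3)
```

Querying the same centres at different values of `t` yields the per-frame set of Gaussians; in the actual method the MLP weights are optimised jointly with the canonical Gaussians.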
no code implementations • 1 Jun 2022 • Michal Nazarczuk, Tony Ng, Krystian Mikolajczyk
Humans exhibit remarkably high levels of multi-modal understanding: combining visual cues with read or heard knowledge comes easily to us and allows for highly accurate interaction with the surrounding environment.
no code implementations • 23 Mar 2022 • Michal Nazarczuk, Sibi Catley-Chandar, Ales Leonardis, Eduardo Pérez Pellitero
Recent High Dynamic Range (HDR) techniques extend the capabilities of current cameras: scenes with a wide range of illumination cannot be accurately captured with a single low-dynamic-range (LDR) image.
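The classical route to HDR that such work builds on is to merge several LDR exposures of the same scene. The sketch below shows a simple weighted linear-domain merge in the spirit of Debevec-and-Malik-style exposure fusion; the triangle weighting and synthetic test scene are assumptions for illustration, not the paper's learned method.

```python
import numpy as np

def merge_ldr_exposures(images, exposure_times):
    """Merge LDR exposures (values in [0, 1]) into an HDR radiance map.

    Simple weighted average in the linear domain: each pixel's radiance
    estimate image / exposure_time is averaged across exposures, with
    under- and over-exposed pixels down-weighted.
    """
    images = [np.asarray(im, dtype=np.float64) for im in images]
    numerator = np.zeros_like(images[0])
    denominator = np.zeros_like(images[0])
    for im, t in zip(images, exposure_times):
        # Triangle weighting: trust mid-range pixels most.
        w = 1.0 - 2.0 * np.abs(im - 0.5)
        numerator += w * im / t     # divide by exposure to get radiance
        denominator += w
    return numerator / np.maximum(denominator, 1e-8)

# Three synthetic exposures of a constant-radiance scene.
radiance = np.full((4, 4), 0.2)
times = [0.5, 1.0, 2.0]
ldrs = [np.clip(radiance * t, 0.0, 1.0) for t in times]
hdr = merge_ldr_exposures(ldrs, times)
print(np.allclose(hdr, radiance))  # True
```

A learned HDR method replaces this fixed weighting and merge with a network, but the underlying problem — recovering radiance that no single LDR exposure captures — is the same.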
no code implementations • 6 Apr 2020 • Michal Nazarczuk, Krystian Mikolajczyk
In this paper, we present an approach and a benchmark for visual reasoning in robotics applications, in particular small-object grasping and manipulation.