no code implementations • 10 May 2023 • Ayush Aggarwal, Rustam Stolkin, Naresh Marturi
The proposed method is observed to classify articulated and rigid objects with good accuracy.
no code implementations • 25 Nov 2022 • Dimitris Panagopoulos, Giannis Petousakis, Aniketh Ramesh, Tianshu Ruan, Grigoris Nikolaou, Rustam Stolkin, Manolis Chiou
This paper presents a Mixed-Initiative (MI) framework for addressing the problem of control authority transfer between a remote human operator and an AI agent when cooperatively controlling a mobile robot.
1 code implementation • 4 Jul 2022 • Aniketh Ramesh, Rustam Stolkin, Manolis Chiou
This paper addresses the problem of automatically detecting and quantifying performance degradation in remote mobile robots during task execution.
1 code implementation • 26 Aug 2021 • Giannis Petousakis, Manolis Chiou, Grigoris Nikolaou, Rustam Stolkin
The controller leverages a state-of-the-art computer vision method and an off-the-shelf web camera to infer the cognitive availability of the operator and inform the AI-initiated LOA switching.
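A minimal sketch of how a vision-derived attention signal could gate AI-initiated LOA (level of autonomy) switching. The sliding window, threshold, and boolean `gaze_on_screen` input are assumptions for illustration, not the paper's actual computer-vision pipeline:

```python
from collections import deque

class AvailabilityMonitor:
    """Hypothetical sketch: treat the operator as cognitively available
    when the fraction of recent frames with a detected, screen-facing
    gaze stays above a threshold; otherwise suggest switching the LOA
    to autonomy."""

    def __init__(self, window=30, threshold=0.5):
        self.frames = deque(maxlen=window)  # recent gaze observations
        self.threshold = threshold

    def update(self, gaze_on_screen: bool) -> str:
        # Record the latest observation and compute recent attention ratio.
        self.frames.append(gaze_on_screen)
        attention = sum(self.frames) / len(self.frames)
        return "teleoperation" if attention >= self.threshold else "autonomy"
```

In practice the boolean input would come from a face/gaze detector running on the web camera stream; the sketch only shows the temporal smoothing and the switching decision.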
no code implementations • 18 Jul 2019 • Brice Denoun, Beatriz Leon, Claudio Zito, Rustam Stolkin, Lorenzo Jamone, Miles Hansard
In this work, we present a geometry-based grasping algorithm that is capable of efficiently generating both top and side grasps for unknown objects, using a single view RGB-D camera, and of selecting the most promising one.
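A toy sketch of generating both top and side grasp candidates and selecting the most promising one. The bounding-box geometry and the aperture-based score are hypothetical stand-ins for the paper's geometry-based algorithm:

```python
from dataclasses import dataclass

@dataclass
class Grasp:
    position: tuple   # (x, y, z) approach point on the object
    approach: str     # "top" or "side"
    score: float      # hypothetical quality score

def generate_candidates(centroid, height, width):
    """Hypothetical generator: one top grasp above the centroid and four
    side grasps around it, scored by a toy heuristic in which narrower
    spans score higher (mimicking gripper aperture limits)."""
    cx, cy, cz = centroid
    grasps = [Grasp((cx, cy, cz + height / 2), "top", 1.0 / (1.0 + width))]
    for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
        pos = (cx + dx * width / 2, cy + dy * width / 2, cz)
        grasps.append(Grasp(pos, "side", 1.0 / (1.0 + height)))
    return grasps

def select_best(grasps):
    # Pick the candidate with the highest score.
    return max(grasps, key=lambda g: g.score)
```

For a tall, thin object the toy score prefers the top grasp (the smaller span to close around); the real method ranks grasps from single-view RGB-D geometry rather than a bounding box.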
no code implementations • 19 Jun 2019 • Claudio Zito, Tomasz Deregowski, Rustam Stolkin
Our approach also reduces the number of dimensions the user must control by exposing only the x- and y-axes, while the orientation of the end-effector and the pose of its fingers are inferred by the system.
no code implementations • 13 May 2019 • Jochen Stüber, Claudio Zito, Rustam Stolkin
In doing so, we dedicate a separate section to deep learning approaches which have seen a recent upsurge in the literature.
no code implementations • 13 Mar 2019 • Claudio Zito, Valerio Ortenzi, Maxime Adjigble, Marek Kopicki, Rustam Stolkin, Jeremy L. Wyatt
However, this planning approach was tried successfully only on simplified control problems.
no code implementations • 1 Aug 2018 • Jose Carlos Villarreal Guerra, Zeba Khanam, Shoaib Ehsan, Rustam Stolkin, Klaus McDonald-Maier
Weather conditions often disrupt the proper functioning of transportation systems.
no code implementations • 4 Jul 2018 • Mubariz Zaffar, Shoaib Ehsan, Rustam Stolkin, Klaus McDonald-Maier
Simultaneous Localization and Mapping, commonly known as SLAM, has been an active research area in the field of Robotics over the past three decades.
no code implementations • 6 Mar 2018 • Cheng Zhao, Li Sun, Pulak Purkait, Tom Duckett, Rustam Stolkin
Dense 2D flow and a depth image are generated from monocular images by sub-networks, which are then used by a 3D flow associated layer in the L-VO network to generate dense 3D flow.
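The lifting of dense 2D flow plus depth into dense 3D flow can be sketched outside the network: under a pinhole camera model, back-project each pixel at both time steps and subtract the 3D points. The intrinsics and the nearest-neighbour depth sampling below are assumptions for illustration, not the L-VO network's learned 3D flow layer:

```python
import numpy as np

def backproject(u, v, depth, fx, fy, cx, cy):
    # Pinhole model: recover a 3D point from pixel coordinates and depth.
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)

def flow_2d_to_3d(flow, depth_t, depth_t1, fx, fy, cx, cy):
    """Lift dense 2D optical flow to 3D scene flow using two depth maps.
    flow: (H, W, 2) pixel displacements; depth_t, depth_t1: (H, W)."""
    h, w = depth_t.shape
    v, u = np.mgrid[0:h, 0:w].astype(np.float64)
    p_t = backproject(u, v, depth_t, fx, fy, cx, cy)
    # Pixel positions at t+1 after applying the flow.
    u1 = u + flow[..., 0]
    v1 = v + flow[..., 1]
    # Sample depth at the flowed positions (nearest neighbour, clipped).
    ui = np.clip(np.round(u1).astype(int), 0, w - 1)
    vi = np.clip(np.round(v1).astype(int), 0, h - 1)
    p_t1 = backproject(u1, v1, depth_t1[vi, ui], fx, fy, cx, cy)
    return p_t1 - p_t  # (H, W, 3) dense 3D flow
```

With zero 2D flow and unchanged depth, the 3D flow is zero everywhere, which makes the geometry easy to sanity-check.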
no code implementations • 30 Sep 2017 • Cheng Zhao, Li Sun, Pulak Purkait, Rustam Stolkin
For intelligent robotics applications, extending 3D mapping to 3D semantic mapping enables robots not only to localize themselves with respect to the scene's geometrical features, but also to understand the higher-level meaning of the scene context.
no code implementations • 22 Jul 2017 • Li Sun, Gerardo Aragon-Camarasa, Simon Rogers, Rustam Stolkin, J. Paul Siebert
Our visual feature is robust to deformable shapes and our approach is able to recognise the category of unknown clothing in unconstrained and random configurations.
1 code implementation • 19 Mar 2017 • Li Sun, Cheng Zhao, Rustam Stolkin
We also propose a novel way to pretrain a DCNN for the depth modality, by training on virtual depth images projected from CAD models.
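The idea of projecting CAD geometry into virtual depth images can be sketched with a simple z-buffered point projection. The point-cloud input and camera intrinsics below are assumptions; a real pipeline would rasterize full triangle meshes rather than individual sampled points:

```python
import numpy as np

def render_virtual_depth(points, fx, fy, cx, cy, h, w):
    """Project a CAD model's sampled point cloud (N, 3), expressed in the
    camera frame with z > 0, into an (h, w) depth image using a z-buffer.
    A rough stand-in for rendering virtual depth images from CAD models."""
    depth = np.full((h, w), np.inf)
    # Pinhole projection of every point into pixel coordinates.
    u = np.round(points[:, 0] * fx / points[:, 2] + cx).astype(int)
    v = np.round(points[:, 1] * fy / points[:, 2] + cy).astype(int)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h) & (points[:, 2] > 0)
    # Z-buffer: keep the nearest depth per pixel.
    for ui, vi, zi in zip(u[valid], v[valid], points[valid][:, 2]):
        if zi < depth[vi, ui]:
            depth[vi, ui] = zi
    depth[np.isinf(depth)] = 0.0  # empty pixels as zero depth
    return depth
```

Images like this can be generated in bulk from CAD models at varied poses, giving cheap labelled data for pretraining a depth-modality DCNN before fine-tuning on real sensor depth.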
no code implementations • 14 Mar 2017 • Cheng Zhao, Li Sun, Rustam Stolkin
We present the results of experiments, in which we trained our system to perform real-time 3D semantic reconstruction for 23 different materials in a real-world application.
no code implementations • CVPR 2015 • Jingjing Xiao, Rustam Stolkin, Ales Leonardis
This paper presents a method for single target tracking of arbitrary objects in challenging video sequences.