1 code implementation • 29 Mar 2018 • Dominic Jack, Jhony K. Pontes, Sridha Sridharan, Clinton Fookes, Sareh Shirazi, Frederic Maire, Anders Eriksson
Representing 3D shape in deep learning frameworks in an accurate, efficient, and compact manner remains an open challenge.
no code implementations • 22 Sep 2017 • Fahimeh Rezazadegan, Sareh Shirazi, Mahsa Baktashmotlagh, Larry S. Davis
Anticipating future actions is a key component of intelligence, especially for real-time systems such as robots or autonomous cars.
no code implementations • 18 Jan 2017 • Fahimeh Rezazadegan, Sareh Shirazi, Ben Upcroft, Michael Milford
Deep learning models have achieved state-of-the-art performance in recognizing human activities, but often rely on background cues present in typical computer vision datasets, which are predominantly captured with a stationary camera.
no code implementations • 2 Dec 2016 • Basura Fernando, Sareh Shirazi, Stephen Gould
On the MPII Cooking dataset we detect action segments with a precision of 21.6% and a recall of 11.7% over 946 long video pairs and over 5000 ground-truth action segments.
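A minimal sketch of how segment-level precision and recall of this kind can be computed. The matching rule here (a detection counts as correct when its temporal IoU with an unmatched ground-truth segment exceeds a threshold) is an illustrative assumption, not necessarily the paper's exact evaluation protocol:

```python
def temporal_iou(a, b):
    # a and b are (start, end) temporal intervals in frames or seconds.
    inter = max(0.0, min(a[1], b[1]) - max(a[0], b[0]))
    union = (a[1] - a[0]) + (b[1] - b[0]) - inter
    return inter / union if union > 0 else 0.0

def segment_precision_recall(detected, ground_truth, iou_thresh=0.5):
    # Greedy one-to-one matching: each ground-truth segment can
    # validate at most one detection (threshold is an assumption).
    matched_gt = set()
    tp = 0
    for d in detected:
        for i, g in enumerate(ground_truth):
            if i not in matched_gt and temporal_iou(d, g) >= iou_thresh:
                matched_gt.add(i)
                tp += 1
                break
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(ground_truth) if ground_truth else 0.0
    return precision, recall
```

With this convention, precision is the fraction of detected segments that match some ground-truth segment, and recall is the fraction of ground-truth segments that were found.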
no code implementations • 18 Oct 2016 • Markus Eich, Sareh Shirazi, Gordon Wyeth
Common approaches make use of high-level knowledge, such as object affordances, semantics or understanding of actions in terms of pre- and post-conditions.
no code implementations • 10 Dec 2015 • Fahimeh Rezazadegan, Sareh Shirazi, Michael Milford, Ben Upcroft
Object detection is a fundamental task in many computer vision applications; the importance of evaluating the quality of object detection is therefore well acknowledged in this domain.
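The standard building block for such evaluation is intersection-over-union (IoU) between a detected and a ground-truth bounding box; a minimal sketch (the corner-coordinate box convention is an assumption):

```python
def box_iou(a, b):
    # Boxes given as (x1, y1, x2, y2) corner coordinates.
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection is then typically scored as correct when its IoU with a ground-truth box exceeds a chosen threshold (0.5 is a common convention).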
1 code implementation • 17 Jan 2015 • Niko Sünderhauf, Feras Dayoub, Sareh Shirazi, Ben Upcroft, Michael Milford
Computer vision datasets are very different in character from robotic camera data, real-time performance is essential, and performance priorities can differ.
no code implementations • 11 Aug 2014 • Sareh Shirazi, Conrad Sanderson, Chris McCool, Mehrtash T. Harandi
We propose an adaptive tracking algorithm where the object is modelled as a continuously updated bag of affine subspaces, with each subspace constructed from the object's appearance over several consecutive frames.
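A minimal sketch of one way to construct such an affine subspace from a window of consecutive frames: the subspace is represented by the mean of vectorized appearance patches (its origin) plus an orthonormal basis from the SVD of the centred data. Function and parameter names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def affine_subspace(frames, n_basis=3):
    # frames: (n_frames, d) matrix of vectorized appearance patches
    # taken from several consecutive frames.
    X = np.asarray(frames, dtype=float)
    mean = X.mean(axis=0)               # origin of the affine subspace
    # SVD of the centred data yields an orthonormal basis for the
    # directions of largest appearance variation.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    basis = vt[:n_basis].T              # shape (d, n_basis)
    return mean, basis

def distance_to_subspace(x, mean, basis):
    # Reconstruction error of a candidate patch x: project the
    # centred patch onto the basis and measure the residual norm.
    r = np.asarray(x, dtype=float) - mean
    proj = basis @ (basis.T @ r)
    return np.linalg.norm(r - proj)
```

A tracker built this way would score candidate windows by their reconstruction error against the current bag of subspaces and keep appending freshly constructed subspaces as new frames arrive.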
no code implementations • 3 Mar 2014 • Sareh Shirazi, Mehrtash T. Harandi, Brian C. Lovell, Conrad Sanderson
A robust visual tracking system requires an object appearance model that is able to handle occlusion, pose, and illumination variations in the video stream.