no code implementations • 26 Mar 2024 • Hanz Cuevas-Velasquez, Alejandro Galán-Cuenca, Antonio Javier Gallego, Marcelo Saval-Calvo, Robert B. Fisher
In this paper, we present ReLaTo (Registration for Large Transformations), an architecture that handles cases involving large transformations while maintaining good performance on local transformations.
no code implementations • 23 Dec 2023 • Sun Zhaole, Jihong Zhu, Robert B. Fisher
We present DexDLO, a model-free framework that learns dexterous dynamic manipulation policies for deformable linear objects with a fixed-base dexterous hand in an end-to-end way.
no code implementations • 3 Nov 2023 • Longfei Chen, Robert B. Fisher
A new application for real-time monitoring of inactivity in older adults' own homes is proposed, aiming to support people's lives and independence in their later years.
1 code implementation • 10 May 2023 • Can Pu, Chuanyu Yang, Jinnian Pu, Radim Tylecek, Robert B. Fisher
Next, the refined disparity maps are converted into full-view point clouds or single-view point clouds for the pose fusion module.
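Converting a refined disparity map into a point cloud follows the standard pinhole stereo back-projection relations. A minimal sketch, assuming hypothetical intrinsics (`fx`, `fy`, `cx`, `cy`) and baseline values, not the calibration used in the paper:

```python
import numpy as np

def disparity_to_points(disparity, fx, fy, cx, cy, baseline):
    """Back-project a disparity map to 3D points in the camera frame.

    Standard stereo relations: Z = fx * baseline / d,
    X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    """
    h, w = disparity.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = disparity > 0                               # ignore invalid (zero) disparities
    z = np.where(valid, fx * baseline / np.maximum(disparity, 1e-9), 0.0)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=1)

# Tiny synthetic example: a constant disparity of 10 px
d = np.full((4, 4), 10.0)
pts = disparity_to_points(d, fx=100.0, fy=100.0, cx=2.0, cy=2.0, baseline=0.5)
# every pixel gets depth Z = 100 * 0.5 / 10 = 5.0
```

A full-view cloud keeps all pixels from the merged disparity; a single-view cloud applies the same relations per camera.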
no code implementations • 9 Feb 2023 • Can Pu, Chuanyu Yang, Jinnian Pu, Robert B. Fisher
More specifically, in the automation stage, the robot navigates to the specified location without requiring precise parking.
no code implementations • 18 Nov 2021 • Jie Zhang, Robert B. Fisher
We define a motion divergence measure using 3D lip landmarks to quantify the interframe dynamics of a 3D speaking lip.
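The abstract does not give the exact definition of the divergence measure, but a plausible proxy for interframe lip dynamics is the mean frame-to-frame displacement of the 3D landmarks. A minimal sketch under that assumption:

```python
import numpy as np

def motion_divergence(landmarks):
    """Mean per-frame 3D displacement of lip landmarks.

    landmarks: (T, N, 3) array of N 3D lip landmarks over T frames.
    Returns a (T-1,) array quantifying interframe lip dynamics.
    """
    diffs = np.diff(landmarks, axis=0)               # (T-1, N, 3) frame-to-frame motion
    return np.linalg.norm(diffs, axis=2).mean(axis=1)

# Static lips give zero divergence; moving lips do not.
static = np.zeros((3, 5, 3))
moving = np.cumsum(np.ones((3, 5, 3)), axis=0)       # shifts by (1, 1, 1) each frame
```

Here `motion_divergence` is a hypothetical name; the paper's actual measure may weight or normalize the landmarks differently.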
1 code implementation • NeurIPS 2020 • Li Nanbo, Cian Eastwood, Robert B. Fisher
In order to sidestep the main technical difficulty of the multi-object-multi-view scenario -- maintaining object correspondences across views -- MulMON iteratively updates the latent object representations for a scene over multiple views.
no code implementations • NeurIPS 2021 • Li Nanbo, Muhammad Ahmed Raza, Hu Wenbin, Zhaole Sun, Robert B. Fisher
We train DyMON on multi-view-dynamic-scene data and show that DyMON learns -- without supervision -- to factorize the entangled effects of observer motions and scene object dynamics from a sequence of observations, and constructs scene object spatial representations suitable for rendering at arbitrary times (querying across time) and from arbitrary viewpoints (querying across space).
1 code implementation • 30 Oct 2021 • Hanz Cuevas-Velasquez, Antonio Javier Gallego, Robert B. Fisher
We present an innovative two-headed attention layer that combines geometric and latent features to segment a 3D scene into semantically meaningful subsets.
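The idea of attending separately over geometric and latent features and combining the results can be illustrated with a toy two-head dot-product attention; this is a sketch of the general pattern, not the paper's layer, and all names and projection shapes here are assumptions:

```python
import numpy as np

def two_head_attention(geom, latent, wg, wl):
    """Toy sketch: one attention head over geometric features, one over
    latent features; the two head outputs are concatenated per point.
    geom: (N, Dg), latent: (N, Dl); wg, wl: projection matrices.
    """
    def head(x, w):
        q = x @ w; k = x @ w; v = x @ w                    # shared projection for brevity
        scores = q @ k.T / np.sqrt(k.shape[1])
        a = np.exp(scores - scores.max(axis=1, keepdims=True))
        a /= a.sum(axis=1, keepdims=True)                  # softmax over points
        return a @ v
    return np.concatenate([head(geom, wg), head(latent, wl)], axis=1)

rng = np.random.default_rng(0)
out = two_head_attention(rng.normal(size=(6, 3)), rng.normal(size=(6, 8)),
                         rng.normal(size=(3, 4)), rng.normal(size=(8, 4)))
```

Each of the 6 points ends up with a 4-dim geometric-head output and a 4-dim latent-head output, concatenated to 8 dims.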
no code implementations • 2 Jul 2020 • Andrés Fuster-Guilló, Jorge Azorín-López, Marcelo Saval-Calvo, Juan Miguel Castillo-Zaragoza, Nahuel Garcia-DUrso, Robert B. Fisher
The 3D body models will be used for studying the effect of visualization on adherence to obesity treatment using 2D and VR devices.
no code implementations • 6 Mar 2020 • Victor Villena-Martinez, Sergiu Oprea, Marcelo Saval-Calvo, Jorge Azorin-Lopez, Andres Fuster-Guillo, Robert B. Fisher
Recent advancements in machine learning could be a turning point in these issues, particularly with the development of deep learning (DL) techniques, which are helping to improve multiple computer vision problems through an abstract understanding of the input data.
1 code implementation • 13 Jan 2020 • Antonio-Javier Gallego, Jorge Calvo-Zaragoza, Robert B. Fisher
In the context of supervised statistical learning, it is typically assumed that the training set comes from the same distribution from which the test samples are drawn.
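The effect of violating that assumption can be shown with a tiny synthetic example (all numbers here are illustrative): a nearest-centroid classifier trained on one distribution keeps its accuracy on in-distribution test data but degrades sharply when the test inputs are shifted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two 1-D classes; training and in-distribution test share one distribution
x0 = rng.normal(0.0, 0.5, 200); x1 = rng.normal(3.0, 0.5, 200)
c0, c1 = x0.mean(), x1.mean()                    # nearest-centroid classifier

def accuracy(a, b):
    pred_a = np.abs(a - c1) < np.abs(a - c0)     # True => predicted class 1
    pred_b = np.abs(b - c1) < np.abs(b - c0)
    return np.concatenate([~pred_a, pred_b]).mean()

t0 = rng.normal(0.0, 0.5, 200); t1 = rng.normal(3.0, 0.5, 200)
acc_in = accuracy(t0, t1)
acc_shift = accuracy(t0 + 2.0, t1 + 2.0)         # covariate shift: inputs translated
```

`acc_in` stays near 1.0 while `acc_shift` collapses toward chance, which is exactly the train/test distribution mismatch the paper addresses.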
no code implementations • 26 Jun 2019 • Ammar Mahmood, Ana Giraldo Ospina, Mohammed Bennamoun, Senjian An, Ferdous Sohel, Farid Boussaid, Renae Hovey, Robert B. Fisher, Gary Kendrick
Across the globe, remote image data is rapidly being collected for the assessment of benthic communities, from shallow waters to the extremely deep continental slopes and abyssal seas.
no code implementations • 22 Apr 2019 • Can Pu, Robert B. Fisher
In this paper, a mathematical model for disparity fusion is proposed to guide an adversarial network to train effectively without ground truth disparity data.
no code implementations • 18 Mar 2018 • Can Pu, Runzi Song, Radim Tylecek, Nanbo Li, Robert B. Fisher
into a refiner network to better refine raw disparity inputs.
2 code implementations • 18 Mar 2018 • Can Pu, Nanbo Li, Radim Tylecek, Robert B. Fisher
Existing rigid registration methods fail to use the physical 3D uncertainty distribution of each point from a real sensor in the dynamic alignment process, mainly because the uncertainty model for a point is assumed to be static and invariant, and it is hard to describe how these physical uncertainty models change during registration.
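One standard way to let per-point uncertainty influence alignment is a weighted Procrustes/Kabsch step, where each correspondence is down-weighted by its sensor uncertainty (e.g. the inverse trace of its covariance). A minimal sketch of that general technique, not the paper's specific method:

```python
import numpy as np

def weighted_rigid_align(src, dst, w):
    """One weighted Kabsch step: find R, t minimizing
    sum_i w_i * ||R @ src_i + t - dst_i||^2.
    Weights w_i could come from per-point sensor uncertainty."""
    w = w / w.sum()
    mu_s = (w[:, None] * src).sum(0)
    mu_d = (w[:, None] * dst).sum(0)
    H = (src - mu_s).T @ (w[:, None] * (dst - mu_d))   # weighted cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))             # avoid reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Sanity check: recover a known rigid motion with uniform weights
rng = np.random.default_rng(0)
src = rng.normal(size=(10, 3))
th = np.pi / 6
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
dst = src @ R_true.T + t_true
R_est, t_est = weighted_rigid_align(src, dst, np.ones(10))
```

The paper's point is that these weights should not be static: a point's uncertainty model changes as the registration evolves, which this fixed-weight sketch does not capture.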
no code implementations • 6 Mar 2018 • Hanz Cuevas-Velasquez, Nanbo Li, Radim Tylecek, Marcelo Saval-Calvo, Robert B. Fisher
A Master supervisor task selects between using the EtoH or the EinH, depending on the distance between the robot and the target.
no code implementations • 5 Feb 2018 • Marcelo Saval-Calvo, Jorge Azorin-Lopez, Andres Fuster-Guillo, Victor Villena-Martinez, Robert B. Fisher
Evaluation is performed using synthetic and real data.
no code implementations • 6 Aug 2017 • Jie Zhang, Christos Maniatis, Luis Horna, Robert B. Fisher
The availability of high-speed 3D video sensors has greatly facilitated 3D shape acquisition of dynamic and deformable objects, but high frame rate 3D reconstruction is always degraded by spatial noise and temporal fluctuations.
no code implementations • 1 Aug 2017 • Christos Maniatis, Marcelo Saval-Calvo, Radim Tylecek, Robert B. Fisher
The problem of finding a next best viewpoint for 3D modeling or scene mapping has been explored in computer vision over the last decade.
no code implementations • 26 Jul 2017 • Can Pu, Nanbo Li, Robert B. Fisher
Robustly and accurately matching rigid 3D point clouds in complex environments is still a core technique in many applications.
no code implementations • 21 Oct 2016 • Erik Rodner, Marcel Simon, Robert B. Fisher, Joachim Denzler
In this paper, we study the sensitivity of CNN outputs with respect to image transformations and noise in the area of fine-grained recognition.
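Output sensitivity to input noise can be probed by measuring how much a model's output moves when Gaussian noise of a given scale is added to the input. A minimal sketch with a toy stand-in for a trained network (the linear-plus-softmax model and the `sensitivity` helper are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sensitivity(f, x, sigma, trials=100, rng=None):
    """Mean L2 change of f's output when Gaussian noise of scale sigma
    is added to input x -- a simple probe of output sensitivity."""
    rng = rng or np.random.default_rng(0)
    y = f(x)
    return np.mean([np.linalg.norm(f(x + rng.normal(0, sigma, x.shape)) - y)
                    for _ in range(trials)])

# Toy stand-in for a trained model: fixed linear layer + softmax
rng = np.random.default_rng(1)
W = rng.normal(size=(5, 16))
f = lambda v: softmax(W @ v)
x = rng.normal(size=16)
s_small = sensitivity(f, x, 0.01)
s_big = sensitivity(f, x, 0.5)
```

The same probe applied to a real CNN, with image transformations in place of additive noise, is the kind of measurement the paper studies for fine-grained recognition.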
no code implementations • 4 Aug 2016 • Han Gong, Graham D. Finlayson, Robert B. Fisher
We show that a powerful form of shading adjustment is a global shading curve, by which the same shading homography can be applied elsewhere.
no code implementations • 20 Jul 2016 • Graham D. Finlayson, Han Gong, Robert B. Fisher
Homographies -- a mathematical formalism for relating image points across different camera viewpoints -- are foundational to geometric methods in computer vision and are used in geometric camera calibration, image registration, stereo vision, and other tasks.
no code implementations • 13 May 2016 • Graham D. Finlayson, Han Gong, Robert B. Fisher
We show the surprising result that colors across a change in viewing condition (changing light color, shading and camera) are related by a homography.
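A homography relating RGB triplets (treated as homogeneous 3-vectors, each defined only up to scale) can be fitted with the standard DLT construction: each correspondence contributes two linear constraints from the cross product. A minimal sketch of that fitting step, assuming exact correspondences; function and variable names are illustrative:

```python
import numpy as np

def fit_color_homography(src, dst):
    """DLT fit of a 3x3 homography H with dst_i ~ H @ src_i (up to scale).
    src, dst: (N, 3) RGB triplets treated as homogeneous 3-vectors."""
    rows = []
    for s, d in zip(src, dst):
        # cross(d, H @ s) = 0 -> two independent linear rows in vec(H)
        rows.append(np.concatenate([np.zeros(3), -d[2] * s,  d[1] * s]))
        rows.append(np.concatenate([d[2] * s, np.zeros(3), -d[0] * s]))
    _, _, Vt = np.linalg.svd(np.array(rows))
    H = Vt[-1].reshape(3, 3)                 # null vector = vec(H), row-major
    return H / H[2, 2]                       # fix the arbitrary scale

# Sanity check: colors related by a known homography, with per-color scales
rng = np.random.default_rng(0)
H_true = np.eye(3) + 0.1 * rng.normal(size=(3, 3))
src = rng.uniform(0.1, 1.0, size=(6, 3))
dst = src @ H_true.T
dst *= rng.uniform(0.5, 2.0, size=(6, 1))    # arbitrary per-color brightness
H_est = fit_color_homography(src, dst)
```

Because the constraints are scale-invariant in `dst`, arbitrary per-color brightness changes (as with shading) do not disturb the fit, which is what makes the homography relation across viewing conditions workable in practice.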