1 code implementation • 17 Dec 2023 • Elisa Alboni, Gianluigi Grandesso, Gastone Pietro Rosati Papini, Justin Carpentier, Andrea Del Prete
Recently, we have proposed CACTO (Continuous Actor-Critic with Trajectory Optimization), an algorithm that uses TO to guide the exploration of an actor-critic RL algorithm.
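The snippet above describes the core idea: trajectory optimization (TO) supplies good rollouts that guide what an actor-critic learner explores. A minimal toy sketch of that idea, not the authors' implementation (CACTO uses deep networks and a real TO solver; every name below is illustrative, and a simple feedback law stands in for the optimizer):

```python
import numpy as np

rng = np.random.default_rng(0)

def to_solve(x0, horizon=20):
    """Stand-in 'trajectory optimizer': a proportional controller driving a
    1-D point mass toward the origin (placeholder for a real TO solver)."""
    xs, us = [x0], []
    x = x0
    for _ in range(horizon):
        u = -0.5 * x          # the 'optimized' control
        x = x + u
        us.append(u)
        xs.append(x)
    return np.array(xs), np.array(us)

def cost_to_go(xs, us):
    """Running cost x^2 + u^2 accumulated from each step to the horizon."""
    stage = xs[:-1] ** 2 + us ** 2
    return np.cumsum(stage[::-1])[::-1]

# 1) collect TO episodes from random starts; the TO cost-to-go gives
#    regression targets for the critic
X, V = [], []
for _ in range(200):
    xs, us = to_solve(rng.uniform(-2.0, 2.0))
    X.append(xs[:-1])
    V.append(cost_to_go(xs, us))
X = np.concatenate(X)
V = np.concatenate(V)

# 2) fit a quadratic critic V(x) ~= w * x^2 by least squares
w = np.linalg.lstsq(X[:, None] ** 2, V, rcond=None)[0][0]

# 3) a greedy 'actor' picks the action minimizing stage cost plus critic value
def actor(x, candidates=np.linspace(-1.5, 1.5, 61)):
    q = x ** 2 + candidates ** 2 + w * (x + candidates) ** 2
    return candidates[np.argmin(q)]
```

The TO rollouts play the role of informed exploration: the critic is anchored on their cost-to-go instead of on random-policy returns, and the actor then improves greedily against it.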
no code implementations • 27 Jun 2023 • Etienne Moullet, François Bailly, Justin Carpentier, Christine Azevedo Coste
Assistive devices for individuals with impaired upper-limb movement often lack controllability and intuitiveness, in particular for the grasping function.

no code implementations • 13 Dec 2022 • Yann Labbé, Lucas Manuelli, Arsalan Mousavian, Stephen Tyree, Stan Birchfield, Jonathan Tremblay, Justin Carpentier, Mathieu Aubry, Dieter Fox, Josef Sivic
Second, we introduce a novel approach for coarse pose estimation which leverages a network trained to classify whether the pose error between a synthetic rendering and an observed image of the same object can be corrected by the refiner.
no code implementations • 19 Sep 2022 • Quentin Le Lidec, Wilson Jallet, Ivan Laptev, Cordelia Schmid, Justin Carpentier
Reinforcement learning (RL) and trajectory optimization (TO) present strong complementary advantages.
1 code implementation • NeurIPS 2021 • Oumayma Bounou, Jean Ponce, Justin Carpentier
Identifying an effective model of a dynamical system from sensory data and using it for future state prediction and control is challenging.
no code implementations • 2 Nov 2021 • Zongmian Li, Jiri Sedlar, Justin Carpentier, Ivan Laptev, Nicolas Mansard, Josef Sivic
First, we introduce an approach to jointly estimate the motion and the actuation forces of the person on the manipulated object by modeling contacts and the dynamics of the interactions.
no code implementations • NeurIPS 2021 • Quentin Le Lidec, Ivan Laptev, Cordelia Schmid, Justin Carpentier
Notably, images depend both on the properties of observed scenes and on the process of image formation.
1 code implementation • 22 Jun 2021 • Armand Jordana, Justin Carpentier, Ludovic Righetti
In this work, we introduce a generic and scalable method based on multiple shooting to learn latent representations of indirectly observed dynamical systems.
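A toy illustration of the multiple-shooting principle mentioned above (not the paper's method or code): the trajectory is cut into short segments, each with its own free initial latent state, and a defect penalty ties each segment's end to the next segment's start. Here the latent dynamics are scalar-linear, z_{k+1} = a z_k, so each sub-problem is cheap and we can alternate between refitting segment states and the dynamics parameter:

```python
import numpy as np

rng = np.random.default_rng(1)
a_true, T, seg, mu = 0.9, 40, 10, 10.0
n_seg = T // seg

x = a_true ** np.arange(T)                 # true latent rollout from z0 = 1
y = x + 0.01 * rng.standard_normal(T)      # noisy observations

def fit_segments(a):
    """Closed-form least-squares initial state per segment (defects are only
    scored in the outer loss here; a full solver would couple them)."""
    powers = a ** np.arange(seg)
    return np.array([powers @ y[j * seg:(j + 1) * seg] / (powers @ powers)
                     for j in range(n_seg)])

def total_loss(a, s):
    """Data-fit term plus defect penalties gluing segment boundaries."""
    powers = a ** np.arange(seg)
    data = sum(np.sum((s[j] * powers - y[j * seg:(j + 1) * seg]) ** 2)
               for j in range(n_seg))
    defect = sum((a ** seg * s[j] - s[j + 1]) ** 2 for j in range(n_seg - 1))
    return data + mu * defect

# alternate: refit the segment states, then grid-search the dynamics parameter
a_hat, grid = 0.5, np.linspace(0.0, 1.2, 241)
for _ in range(5):
    s_hat = fit_segments(a_hat)
    a_hat = grid[np.argmin([total_loss(a, s_hat) for a in grid])]
```

Short segments keep each rollout well-conditioned, which is the practical advantage of multiple shooting over fitting one long rollout from a single initial state.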
no code implementations • CVPR 2021 • Yann Labbé, Justin Carpentier, Mathieu Aubry, Josef Sivic
We introduce RoboPose, a method to estimate the joint angles and the 6D camera-to-robot pose of a known articulated robot from a single RGB image.
Ranked #3 on Robot Pose Estimation on DREAM-dataset
1 code implementation • 25 Aug 2020 • Robin Strudel, Ricardo Garcia, Justin Carpentier, Jean-Paul Laumond, Ivan Laptev, Cordelia Schmid
Motion planning and obstacle avoidance are key challenges in robotics applications.
Robotics
3 code implementations • ECCV 2020 • Yann Labbé, Justin Carpentier, Mathieu Aubry, Josef Sivic
Second, we develop a robust method for matching individual 6D object pose hypotheses across different input images in order to jointly estimate camera viewpoints and 6D poses of all objects in a single consistent scene.
2 code implementations • 11 Sep 2019 • Carlos Mastalli, Rohan Budhiraja, Wolfgang Merkt, Guilhem Saurel, Bilal Hammoud, Maximilien Naveau, Justin Carpentier, Ludovic Righetti, Sethu Vijayakumar, Nicolas Mansard
Additionally, we propose a novel optimal control algorithm called Feasibility-driven Differential Dynamic Programming (FDDP).
Robotics • Optimization and Control
2 code implementations • 23 Apr 2019 • Yann Labbé, Sergey Zagoruyko, Igor Kalevatykh, Ivan Laptev, Justin Carpentier, Mathieu Aubry, Josef Sivic
We address the problem of visually guided rearrangement planning with many movable objects, i.e., finding a sequence of actions to move a set of objects from an initial arrangement to a desired one, while relying on visual inputs coming from an RGB camera.
1 code implementation • 10 Apr 2019 • Rohan Budhiraja, Justin Carpentier, Carlos Mastalli, Nicolas Mansard
A common strategy today to generate efficient locomotion movements is to split the problem into two consecutive steps: the first one generates the contact sequence together with the centroidal trajectory, while the second one computes the whole-body trajectory that follows the centroidal pattern.
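The two-step decomposition described above can be caricatured in a few lines (a toy sketch, not the paper's solver; the 'whole body' here is just a planar 2-link arm whose end-effector stands in for the center of mass): stage one plans a centroidal (CoM) trajectory over a given contact sequence, and stage two computes joint motions that track it, here via damped-least-squares inverse kinematics.

```python
import numpy as np

def centroidal_plan(footholds, steps_per_phase=10):
    """Stage 1: piecewise-linear CoM path through waypoints above each foothold
    (a stand-in for a real centroidal optimal-control problem)."""
    waypoints = [np.array([fx, 0.8]) for fx in footholds]   # CoM height 0.8 m
    path = []
    for p, q in zip(waypoints[:-1], waypoints[1:]):
        for t in np.linspace(0.0, 1.0, steps_per_phase, endpoint=False):
            path.append((1 - t) * p + t * q)
    path.append(waypoints[-1])
    return np.array(path)

def fk(q, l1=0.5, l2=0.5):
    """Forward kinematics of a planar 2-link arm with base at the origin."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jac(q, l1=0.5, l2=0.5):
    """End-effector Jacobian of the same arm."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def whole_body_track(path, q0=np.array([0.5, 0.5]), iters=50, lam=1e-2):
    """Stage 2: damped-least-squares IK tracking of the centroidal path."""
    q, qs = q0.copy(), []
    for target in path:
        for _ in range(iters):
            err = target - fk(q)
            J = jac(q)
            q = q + np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ err)
        qs.append(q.copy())
    return np.array(qs)

qs = whole_body_track(centroidal_plan([0.1, 0.3, 0.5]))
```

The point of the split is the same as in the snippet above: the centroidal stage fixes the reduced-model pattern, and the whole-body stage only has to follow it, which keeps each sub-problem tractable.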
1 code implementation • CVPR 2019 • Zongmian Li, Jiri Sedlar, Justin Carpentier, Ivan Laptev, Nicolas Mansard, Josef Sivic
First, we introduce an approach to jointly estimate the motion and the actuation forces of the person on the manipulated object by modeling contacts and the dynamics of their interactions.