no code implementations • 19 Apr 2024 • Arthur Aubret, Timothy Schaumlöffel, Gemma Roig, Jochen Triesch
To achieve this, the model exploits two distinct strategies: the visuo-language alignment ensures that different objects of the same category are represented similarly, whereas the temporal alignment leverages the fact that objects from the same context are frequently seen in succession to make their representations more similar.
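A minimal sketch of how two such alignment objectives could be combined as InfoNCE-style contrastive losses; the encoder, loss weighting, and all names are hypothetical illustrations, not the authors' code.

    import torch
    import torch.nn.functional as F

    def info_nce(anchors, positives, temperature=0.1):
        """Pull each anchor toward its positive, away from other positives in the batch."""
        anchors = F.normalize(anchors, dim=-1)
        positives = F.normalize(positives, dim=-1)
        logits = anchors @ positives.t() / temperature  # (B, B) similarity matrix
        targets = torch.arange(anchors.size(0), device=anchors.device)
        return F.cross_entropy(logits, targets)

    def joint_alignment_loss(img_t, img_t_next, label_emb, vision_enc, lambda_time=1.0):
        """img_t, img_t_next: consecutive frames; label_emb: embedding of the category label paired with img_t."""
        z_t = vision_enc(img_t)
        z_next = vision_enc(img_t_next)
        # Visuo-language alignment: objects of the same category move toward their label embedding.
        l_lang = info_nce(z_t, label_emb)
        # Temporal alignment: objects seen in succession get similar representations.
        l_time = info_nce(z_t, z_next)
        return l_lang + lambda_time * l_time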
1 code implementation • 11 Apr 2024 • Markus R. Ernst, Francisco M. López, Arthur Aubret, Roland W. Fleming, Jochen Triesch
Color constancy (CC) describes the ability of the visual system to perceive an object as having a relatively constant color despite changes in lighting conditions.
1 code implementation • 7 Dec 2023 • Dominik Mattern, Pierre Schumacher, Francisco M. López, Marcel C. Raabe, Markus R. Ernst, Arthur Aubret, Jochen Triesch
Human intelligence and human consciousness emerge gradually during the process of cognitive development.
1 code implementation • 7 Dec 2023 • Timothy Schaumlöffel, Arthur Aubret, Gemma Roig, Jochen Triesch
To this end, we propose a computational model of visual representation learning during dyadic play.
no code implementations • 19 Sep 2022 • Arthur Aubret, Laetitia Matignon, Salima Hassas
The reinforcement learning (RL) research area is very active, with a large number of new contributions, especially in the emerging field of deep RL (DRL).
no code implementations • 27 Jul 2022 • Arthur Aubret, Markus Ernst, Céline Teulière, Jochen Triesch
Specifically, our analyses reveal that: 1) 3-D object manipulations drastically improve the learning of object categories; 2) viewing objects against changing backgrounds is important for learning to discard background-related information from the latent representation.
no code implementations • 12 May 2022 • Arthur Aubret, Céline Teulière, Jochen Triesch
During each play session the agent views an object in multiple orientations before turning its body to view another object.
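An assumed sketch (not from the paper) of the play protocol described above: the agent dwells on one object over several orientations before moving to the next, so consecutive views usually share an object and form natural positive pairs for time-based learning.

    import random

    def play_session(object_ids, views_per_object=8, orientation_step=45):
        """Yield (object_id, orientation_deg) pairs in the order the agent sees them."""
        for obj in object_ids:
            orientation = random.uniform(0, 360)
            for _ in range(views_per_object):
                yield obj, orientation % 360
                orientation += orientation_step  # rotate the object before the next view

    # Example: a session over three toy objects.
    session = list(play_session(object_ids=["mug", "duck", "block"]))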
no code implementations • 6 Jun 2021 • Arthur Aubret, Laetitia Matignon, Salima Hassas
The optimal way for a deep reinforcement learning (DRL) agent to explore is to learn a set of skills that achieves a uniform distribution of states.
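For illustration only, a generic sketch of discriminator-based skill discovery (in the spirit of DIAYN-style mutual-information objectives), where skills are rewarded for visiting distinguishable regions of the state space; this is not the method proposed in the paper, and all names are hypothetical.

    import math
    import torch
    import torch.nn.functional as F

    def skill_intrinsic_reward(discriminator, state, skill_id, num_skills):
        """state: (B, D) features; skill_id: (B,) long tensor of active skill indices.
        Reward = log q(z|s) - log p(z); maximizing it spreads skills over distinct states."""
        logits = discriminator(state)                      # (B, num_skills)
        batch_idx = torch.arange(state.size(0))
        log_q = F.log_softmax(logits, dim=-1)[batch_idx, skill_id]
        log_p = math.log(num_skills)                       # uniform prior over skills
        return (log_q + log_p).detach() if False else (log_q - (-log_p)).detach()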
no code implementations • ICML Workshop LifelongML 2020 • Arthur Aubret, Laetitia Matignon, Salima Hassas
Then we show that our approach scales to more difficult MuJoCo environments, in which our agent builds a representation of skills that improves both transfer learning and exploration over a baseline when rewards are sparse.
no code implementations • 19 Aug 2019 • Arthur Aubret, Laetitia Matignon, Salima Hassas
In this article, we provide a survey on the role of intrinsic motivation in DRL.
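As a concrete illustration of intrinsic motivation (not a method from the survey itself), one classic form is curiosity as forward-model prediction error; the function and argument names below are hypothetical.

    import torch
    import torch.nn.functional as F

    def curiosity_reward(forward_model, state_feat, action, next_state_feat):
        """Intrinsic reward = error of predicting the next state features;
        the agent is drawn toward transitions it cannot yet predict."""
        pred_next = forward_model(torch.cat([state_feat, action], dim=-1))
        return F.mse_loss(pred_next, next_state_feat, reduction="none").mean(dim=-1).detach()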