Search Results for author: Hyogo Hiruma

Found 1 paper, 0 papers with code

Deep Active Visual Attention for Real-time Robot Motion Generation: Emergence of Tool-body Assimilation and Adaptive Tool-use

no code implementations · 29 Jun 2022 · Hyogo Hiruma, Hiroshi Ito, Hiroki Mori, Tetsuya OGATA

The model incorporates a state-driven, active top-down visual attention module, which acquires attention that can actively shift targets based on task states.
