Search Results for author: Yusuke Sekikawa

Found 10 papers, 3 papers with code

Event-based Camera Tracker by $\nabla$t NeRF

no code implementations • 7 Apr 2023 • Mana Masuda, Yusuke Sekikawa, Hideo Saito

To enable computation of the temporal gradient of the scene, we model NeRF's camera pose as a function of time.

Pose Estimation • Pose Tracking
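
Below is a minimal PyTorch-style sketch of the idea in this abstract, not the paper's actual implementation: the camera pose is parameterized as a differentiable function of time, so the temporal brightness gradient an event camera measures can be obtained by autograd through the renderer. The `render` interface and the linear motion model are assumptions.

```python
import torch

class TimePose(torch.nn.Module):
    """Camera pose as a differentiable function of time (linear motion here)."""
    def __init__(self):
        super().__init__()
        self.pose0 = torch.nn.Parameter(torch.zeros(6))      # initial pose, se(3)
        self.velocity = torch.nn.Parameter(torch.zeros(6))   # constant pose rate

    def forward(self, t):
        return self.pose0 + self.velocity * t                # pose at time t

def temporal_gradient(render, pose_fn, pixels, t):
    """d(brightness)/dt via autograd through time (aggregated over pixels)."""
    t = t.clone().requires_grad_(True)
    brightness = render(pose_fn(t), pixels)   # assumed NeRF renderer interface
    (grad_t,) = torch.autograd.grad(brightness.sum(), t)
    return grad_t
```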

Toward Unsupervised 3D Point Cloud Anomaly Detection using Variational Autoencoder

1 code implementation • 7 Apr 2023 • Mana Masuda, Ryo Hachiuma, Ryo Fujii, Hideo Saito, Yusuke Sekikawa

We propose a deep variational-autoencoder-based unsupervised anomaly detection network adapted to 3D point clouds, together with an anomaly score designed specifically for 3D point clouds.

Unsupervised Anomaly Detection
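
A minimal sketch of the general recipe this abstract describes, with an illustrative architecture (the paper's network and score differ in detail): a VAE reconstructs the input cloud, and the anomaly score combines a reconstruction distance (Chamfer here) with the KL term. Clouds scoring far above the training distribution would be flagged as anomalous.

```python
import torch
import torch.nn as nn

def chamfer(a, b):
    """Symmetric Chamfer distance between point sets a: (N, 3) and b: (M, 3)."""
    d = torch.cdist(a, b)                      # pairwise distances (N, M)
    return d.min(dim=1).values.mean() + d.min(dim=0).values.mean()

class PointVAE(nn.Module):
    def __init__(self, n_points=1024, latent=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(3, 256), nn.ReLU(), nn.Linear(256, 2 * latent))
        self.dec = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, n_points * 3))
        self.n_points = n_points

    def forward(self, pts):                    # pts: (N, 3)
        h = self.enc(pts).max(dim=0).values    # permutation-invariant pooling
        mu, logvar = h.chunk(2)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = self.dec(z).view(self.n_points, 3)
        return recon, mu, logvar

def anomaly_score(model, pts):
    recon, mu, logvar = model(pts)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum()
    return chamfer(pts, recon) + kl            # higher = more anomalous
```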

Tangentially Elongated Gaussian Belief Propagation for Event-Based Incremental Optical Flow Estimation

1 code implementation • CVPR 2023 • Jun Nagata, Yusuke Sekikawa

An existing method based on local plane fitting of events can exploit their sparsity to realize incremental updates for low-latency estimation; however, its output is merely the normal component of the full optical flow.

Optical Flow Estimation
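
For background on that limitation, in standard optical-flow notation rather than the paper's: local plane fitting recovers only the component of the flow along the local gradient direction, which is exactly the ambiguity the tangentially elongated Gaussian is designed to model as uncertainty.

```latex
% Aperture problem: only the normal component of the flow u is locally observable.
\[
  \mathbf{u} = \underbrace{(\mathbf{u}\cdot\mathbf{n})\,\mathbf{n}}_{\text{normal flow (observable)}}
             + \underbrace{\mathbf{u} - (\mathbf{u}\cdot\mathbf{n})\,\mathbf{n}}_{\text{tangential part (unobservable)}},
  \qquad
  \mathbf{n} = \frac{\nabla I}{\lVert \nabla I \rVert}.
\]
```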

Implicit Neural Representations for Variable Length Human Motion Generation

1 code implementation • 25 Mar 2022 • Pablo Cervantes, Yusuke Sekikawa, Ikuro Sato, Koichi Shinoda

We confirm that our method with a Transformer decoder outperforms all relevant methods on HumanAct12, NTU-RGBD, and UESTC datasets in terms of realism and diversity of generated motions.

Decoder
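
A minimal sketch of the variable-length mechanism, using a plain MLP where the paper uses a Transformer decoder (names are illustrative): the motion is an implicit function of normalized time, so sequences of any length come from evaluating it on time grids of different sizes.

```python
import torch
import torch.nn as nn

class MotionINR(nn.Module):
    """Maps a latent code and normalized times t in [0, 1] to pose vectors."""
    def __init__(self, latent=256, pose_dim=72):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent + 1, 512), nn.ReLU(),
            nn.Linear(512, pose_dim),
        )

    def forward(self, z, t):                        # z: (latent,), t: (T,)
        zt = torch.cat([z.expand(len(t), -1), t[:, None]], dim=-1)
        return self.net(zt)                         # (T, pose_dim)

model, z = MotionINR(), torch.randn(256)
short = model(z, torch.linspace(0, 1, 30))          # 30-frame motion
long = model(z, torch.linspace(0, 1, 120))          # 120-frame motion, same code
```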

Neural Implicit Event Generator for Motion Tracking

no code implementations • 6 Nov 2021 • Mana Masuda, Yusuke Sekikawa, Ryo Fujii, Hideo Saito

Our framework uses a pre-trained event-generation MLP, named the implicit event generator (IEG), and performs motion tracking by updating its state (position and velocity) based on the difference between the observed events and the events generated from the current state estimate.

Position
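
A hypothetical sketch of the update loop described above: the pre-trained IEG stays frozen and predicts events from the current state, and the state is refined to shrink the mismatch with the observed events. `ieg` is an assumed callable, and MSE stands in for the paper's actual discrepancy measure.

```python
import torch
import torch.nn.functional as F

def track(ieg, observed, state0, steps=50, lr=1e-2):
    state = state0.clone().requires_grad_(True)   # [position, velocity]
    opt = torch.optim.Adam([state], lr=lr)        # only the state is optimized
    for _ in range(steps):
        generated = ieg(state)                    # events predicted from the state
        loss = F.mse_loss(generated, observed)    # mismatch with observed events
        opt.zero_grad()
        loss.backward()
        opt.step()                                # IEG weights stay untouched
    return state.detach()
```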

Irregularly Tabulated MLP for Fast Point Feature Embedding

no code implementations • 13 Nov 2020 • Yusuke Sekikawa, Teppei Suzuki

Aiming at a drastic speedup for point-feature embedding at test time, we propose a new framework that uses a multi-layer perceptron (MLP) paired with a lookup table (LUT) to transform point-coordinate inputs into high-dimensional features.
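
A minimal NumPy sketch of the tabulation idea (names are illustrative, and the paper's irregular table layout is more sophisticated than this regular grid): evaluate the trained point-wise MLP once over a coordinate grid offline, then replace it at test time with a table lookup. `mlp` is assumed to accept an (M, 3) array and return (M, D) features.

```python
import numpy as np

def build_lut(mlp, resolution=64, lo=-1.0, hi=1.0):
    """Tabulate mlp: R^3 -> R^D over a regular grid, done once offline."""
    axis = np.linspace(lo, hi, resolution, dtype=np.float32)
    grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
    feats = mlp(grid.reshape(-1, 3))                  # (r^3, D) features
    return feats.reshape(resolution, resolution, resolution, -1)

def embed(lut, points, lo=-1.0, hi=1.0):
    """Test-time embedding: nearest-neighbor lookup instead of running the MLP."""
    r = lut.shape[0]
    idx = np.clip(((points - lo) / (hi - lo) * (r - 1)).round().astype(int), 0, r - 1)
    return lut[idx[:, 0], idx[:, 1], idx[:, 2]]       # (N, D) features
```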

Rethinking PointNet Embedding for Faster and Compact Model

no code implementations • 31 Jul 2020 • Teppei Suzuki, Keisuke Ozawa, Yusuke Sekikawa

PointNet, a widely used point-wise embedding method known to be a universal approximator for continuous set functions, can process one million points per second.
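
For reference, a minimal sketch of the PointNet-style embedding being rethought here: a shared per-point MLP followed by max-pooling, which yields a permutation-invariant set feature.

```python
import torch
import torch.nn as nn

class PointNetEmbed(nn.Module):
    """Shared per-point MLP + max-pool: a permutation-invariant set feature."""
    def __init__(self, feat=1024):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat),
        )

    def forward(self, pts):                       # pts: (N, 3)
        return self.mlp(pts).max(dim=0).values    # (feat,) global feature
```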

CorsNet: 3D Point Cloud Registration by Deep Neural Network

no code implementations • 3 Feb 2020 • Akiyoshi Kurobe, Yusuke Sekikawa, Kohta Ishikawa, Hideo Saito

For comparison, we also developed a novel deep learning approach (DirectNet) that directly regresses the pose between point clouds.

Point Cloud Registration
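
A hypothetical sketch of what "directly regresses the pose" could look like for the DirectNet baseline (the architecture shown is illustrative, not the paper's): both clouds are embedded with a shared encoder, and a head regresses a quaternion plus translation from the concatenated features.

```python
import torch
import torch.nn as nn

class DirectNet(nn.Module):
    def __init__(self, feat=512):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, feat))
        self.head = nn.Sequential(nn.Linear(2 * feat, 256), nn.ReLU(), nn.Linear(256, 7))

    def forward(self, src, tgt):               # src: (N, 3), tgt: (M, 3)
        f_src = self.encoder(src).max(dim=0).values
        f_tgt = self.encoder(tgt).max(dim=0).values
        q_t = self.head(torch.cat([f_src, f_tgt]))
        quat = q_t[:4] / q_t[:4].norm()        # unit quaternion (rotation)
        trans = q_t[4:]                        # translation
        return quat, trans
```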

Tabulated MLP for Fast Point Feature Embedding

no code implementations • 23 Nov 2019 • Yusuke Sekikawa, Teppei Suzuki

Aiming at a drastic speedup for point-data embedding at test time, we propose a new framework that uses a multi-layer perceptron (MLP) paired with a lookup table (LUT) to transform point-coordinate inputs into high-dimensional features.

EventNet: Asynchronous Recursive Event Processing

no code implementations • CVPR 2019 • Yusuke Sekikawa, Kosuke Hara, Hideo Saito

Event cameras are bio-inspired vision sensors that mimic retinas to asynchronously report per-pixel intensity changes rather than outputting an actual intensity image at regular intervals.
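
For context, a small sketch of the data an event camera produces and the per-event, recursive processing style EventNet targets (the `update` callback is illustrative): each event is a sparse tuple rather than a frame, so work is done per event instead of per image.

```python
from dataclasses import dataclass

@dataclass
class Event:
    x: int         # pixel column
    y: int         # pixel row
    t: float       # timestamp (e.g., microseconds)
    polarity: int  # +1 brightness increase, -1 decrease

def process_stream(events, state, update):
    """Recursively fold an update function over the asynchronous event stream."""
    for e in sorted(events, key=lambda e: e.t):
        state = update(state, e)   # O(1) work per event, no frame accumulation
    return state
```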
