Search Results for author: Timothée Masquelier

Found 28 papers, 19 papers with code

SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence

1 code implementation • 25 Oct 2023 • Wei Fang, Yanqi Chen, Jianhao Ding, Zhaofei Yu, Timothée Masquelier, Ding Chen, Liwei Huang, Huihui Zhou, Guoqi Li, Yonghong Tian

Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency by introducing neural dynamics and spike properties.

Code Generation

Audio classification with Dilated Convolution with Learnable Spacings

2 code implementations • 25 Sep 2023 • Ismail Khalfaoui-Hassani, Timothée Masquelier, Thomas Pellegrini

Dilated convolution with learnable spacings (DCLS) is a recent convolution method in which the positions of the kernel elements are learned throughout training by backpropagation.

Audio Classification • Audio Tagging

Dilated Convolution with Learnable Spacings: beyond bilinear interpolation

1 code implementation • 1 Jun 2023 • Ismail Khalfaoui-Hassani, Thomas Pellegrini, Timothée Masquelier

Dilated Convolution with Learnable Spacings (DCLS) is a recently proposed variation of the dilated convolution in which the spacings between the non-zero elements in the kernel, or equivalently their positions, are learnable.
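
The core construction can be sketched in a few lines: each learnable weight is scattered onto its two nearest integer taps with linear interpolation (the 1-D analogue of the bilinear case), so that gradients flow to the real-valued positions as well as the weights. A minimal NumPy sketch with hypothetical names (`dcls_kernel_1d` is not the authors' API):

```python
import numpy as np

def dcls_kernel_1d(weights, positions, size):
    """Scatter each learnable weight onto an integer grid of `size` taps
    using linear interpolation, so gradients can reach `positions` too."""
    kernel = np.zeros(size)
    for w, p in zip(weights, positions):
        lo = int(np.floor(p))
        frac = p - lo
        if 0 <= lo < size:
            kernel[lo] += w * (1.0 - frac)     # weight share on left tap
        if 0 <= lo + 1 < size:
            kernel[lo + 1] += w * frac         # weight share on right tap
    return kernel

# Three non-zero elements at learnable, fractional positions
weights = np.array([1.0, -0.5, 2.0])
positions = np.array([0.25, 3.0, 5.75])        # real-valued, trained by backprop
k = dcls_kernel_1d(weights, positions, size=7)
y = np.convolve(np.ones(16), k, mode="valid")  # used like an ordinary dense kernel
```

The resulting dense kernel is mostly zeros, so the effective receptive field can grow without adding parameters; only the interpolation step differs from a standard dilated convolution.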

Parallel Spiking Neurons with High Efficiency and Ability to Learn Long-term Dependencies

1 code implementation • NeurIPS 2023 • Wei Fang, Zhaofei Yu, Zhaokun Zhou, Ding Chen, Yanqi Chen, Zhengyu Ma, Timothée Masquelier, Yonghong Tian

Vanilla spiking neurons in Spiking Neural Networks (SNNs) use charge-fire-reset neuronal dynamics, which can only be simulated serially and can hardly learn long-term dependencies.
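
A toy illustration of why the reset forces serial simulation (this is not the paper's actual PSN formulation, just the underlying observation): with a reset, each step depends on the previous one; without it, the membrane state is a cumulative sum that a vectorized primitive computes for all time steps at once.

```python
import numpy as np

x = np.full(8, 0.4)                # constant input current over 8 time steps

# Vanilla charge-fire-reset IF neuron: the reset makes every step depend
# on the previous one, so simulation is inherently serial.
v, serial_spikes = 0.0, []
for xt in x:
    v += xt                        # charge
    if v >= 1.0:                   # fire
        serial_spikes.append(1)
        v = 0.0                    # reset -> sequential dependency
    else:
        serial_spikes.append(0)

# Drop the reset and the membrane state is just a cumulative sum, which
# can be computed for all T steps in parallel.
v_all = np.cumsum(x)
parallel_spikes = (v_all >= 1.0).astype(int)
```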

Optical flow estimation from event-based cameras and spiking neural networks

1 code implementation • 13 Feb 2023 • Javier Cuadrado, Ulysse Rançon, Benoît Cottereau, Francisco Barranco, Timothée Masquelier

Event-based sensors are an excellent fit for Spiking Neural Networks (SNNs), since the coupling of an asynchronous sensor with neuromorphic hardware can yield real-time systems with minimal power requirements.

Optical Flow Estimation

Dilated convolution with learnable spacings

2 code implementations • 7 Dec 2021 • Ismail Khalfaoui-Hassani, Thomas Pellegrini, Timothée Masquelier

We call this method "Dilated Convolution with Learnable Spacings" (DCLS) and generalize it to the n-dimensional convolution case.

Image Classification • Object Detection +1

StereoSpike: Depth Learning with a Spiking Neural Network

1 code implementation • 28 Sep 2021 • Ulysse Rançon, Javier Cuadrado-Anibarro, Benoit R. Cottereau, Timothée Masquelier

Here we solved it using an end-to-end neuromorphic approach, combining two event-based cameras and a Spiking Neural Network (SNN) with a slightly modified U-Net-like encoder-decoder architecture, which we named StereoSpike.

Autonomous Vehicles • Depth Estimation

Spiking neural networks trained via proxy

1 code implementation • 27 Sep 2021 • Saeed Reza Kheradpisheh, Maryam Mirsadeghi, Timothée Masquelier

By assuming an IF neuron with rate coding as an approximation of ReLU, we backpropagate the error of the SNN in the proxy ANN to update the shared weights, simply by replacing the ANN's final output with that of the SNN.
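
The rate-coding assumption behind the proxy can be checked numerically: a non-leaky IF neuron driven by a constant input fires at a rate that tracks ReLU for sub-threshold-per-step inputs. A minimal sketch (the function name and soft-reset choice are illustrative, not the paper's exact setup):

```python
import numpy as np

def if_rate(x, T=100, v_th=1.0):
    """Non-leaky IF neuron driven by a constant input x for T steps.
    Returns the firing rate (spike count / T), with a soft reset."""
    v, spikes = 0.0, 0
    for _ in range(T):
        v += x                     # integrate
        if v >= v_th:
            spikes += 1            # fire
            v -= v_th              # soft reset keeps the residual charge
    return spikes / T

# Over enough time steps, the rate tracks ReLU(x) for inputs in [0, 1):
for x in (-0.3, 0.0, 0.25, 0.8):
    print(f"x={x:+.2f}  rate={if_rate(x):.2f}  relu={max(0.0, x):.2f}")
```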

Spike time displacement based error backpropagation in convolutional spiking neural networks

no code implementations • 31 Aug 2021 • Maryam Mirsadeghi, Majid Shalchian, Saeed Reza Kheradpisheh, Timothée Masquelier

To do so, we consider a convolutional SNN (CSNN) with two sets of weights: real-valued weights that are updated in the backward pass and their signs, binary weights, that are employed in the feedforward process.
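
The two-weight-set idea can be sketched in a few lines: the forward pass sees only the signs of the real-valued weights, while gradient updates touch the real weights, so a binary weight flips only when its real counterpart crosses zero. A toy NumPy illustration (the gradient values here are hypothetical):

```python
import numpy as np

# Real-valued "shadow" weights: these are updated in the backward pass
W = np.array([0.3, -0.2, 0.05])

def forward(x, W):
    """Feedforward uses only the signs (binary weights) of W."""
    return x @ np.sign(W)

x = np.array([1.0, 2.0, 3.0])
y = forward(x, W)                  # signs [1, -1, 1]: 1 - 2 + 3 = 2

# A (hypothetical) gradient step updates the real weights, not the signs;
# a binary weight only flips once its real weight crosses zero.
grad = np.array([0.01, -0.02, 0.03])
W -= 0.1 * grad
```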

Image Classification

Deep Residual Learning in Spiking Neural Networks

1 code implementation • NeurIPS 2021 • Wei Fang, Zhaofei Yu, Yanqi Chen, Tiejun Huang, Timothée Masquelier, Yonghong Tian

Previous Spiking ResNet mimics the standard residual block in ANNs and simply replaces ReLU activation layers with spiking neurons, which suffers from the degradation problem and can hardly implement residual learning.

Encrypted Internet traffic classification using a supervised Spiking Neural Network

no code implementations • 24 Jan 2021 • Ali Rasteh, Florian Delpech, Carlos Aguilar-Melchor, Romain Zimmer, Saeed Bagheri Shouraki, Timothée Masquelier

Internet traffic recognition is an essential tool for access providers, since recognizing the traffic categories of the data packets transmitted on a network helps them define adapted priorities.

General Classification • Traffic Classification

Low-activity supervised convolutional spiking neural networks applied to speech commands recognition

1 code implementation • 13 Nov 2020 • Thomas Pellegrini, Romain Zimmer, Timothée Masquelier

Deep Neural Networks (DNNs) are the current state-of-the-art models in many speech related tasks.

BS4NN: Binarized Spiking Neural Networks with Temporal Coding and Learning

1 code implementation • 8 Jul 2020 • Saeed Reza Kheradpisheh, Maryam Mirsadeghi, Timothée Masquelier

We recently proposed the S4NN algorithm, essentially an adaptation of backpropagation to multilayer spiking neural networks that use simple non-leaky integrate-and-fire neurons and a form of temporal coding known as time-to-first-spike coding.

Technical report: supervised training of convolutional spiking neural networks with PyTorch

2 code implementations • 22 Nov 2019 • Romain Zimmer, Thomas Pellegrini, Srisht Fateh Singh, Timothée Masquelier

Indeed, the most commonly used spiking neuron model, the leaky integrate-and-fire neuron, obeys a differential equation which can be approximated using discrete time steps, leading to a recurrent relation for the potential.
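
The recurrent relation mentioned above can be written out directly: an Euler step of the LIF equation tau * dv/dt = -(v - x) gives v[t] = v[t-1] + (x[t] - v[t-1]) / tau. A minimal sketch, assuming a hard reset and constant drive (details such as the reset mode vary between implementations):

```python
import numpy as np

def lif_forward(inputs, tau=2.0, v_th=1.0):
    """Discrete-time LIF neuron (hard reset).
    Euler-discretizing tau * dv/dt = -(v - x) yields the recurrence
    v[t] = v[t-1] + (x[t] - v[t-1]) / tau."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v + (x - v) / tau       # recurrent relation for the potential
        if v >= v_th:
            spikes.append(1)
            v = 0.0                 # hard reset after the spike
        else:
            spikes.append(0)
    return np.array(spikes)

spikes = lif_forward(np.full(10, 1.5))   # constant drive above threshold
```

Because the potential at step t depends only on the potential at step t-1 and the current input, the whole network unrolls like a recurrent network and can be trained with backpropagation through time (with a surrogate gradient at the threshold).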

S4NN: temporal backpropagation for spiking neural networks with one spike per neuron

1 code implementation • 21 Oct 2019 • Saeed Reza Kheradpisheh, Timothée Masquelier

In particular, in the readout layer, the first neuron to fire determines the class of the stimulus.
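
The first-to-fire readout is easy to state in code: collect each readout neuron's first-spike latency and take the argmin. A minimal NumPy sketch (`first_spike_times` is an illustrative helper, not part of the authors' code):

```python
import numpy as np

def first_spike_times(spike_train):
    """spike_train: (neurons, timesteps) binary array.
    Returns each neuron's first-spike time (inf if it never fires)."""
    fired = spike_train.any(axis=1)
    return np.where(fired, spike_train.argmax(axis=1), np.inf)

# One readout neuron per class; the earliest spike decides the label.
train = np.array([[0, 0, 1, 0],    # class 0 neuron: first spike at t=2
                  [0, 1, 0, 1],    # class 1 neuron: first spike at t=1
                  [0, 0, 0, 0]])   # class 2 neuron: never fires
times = first_spike_times(train)
predicted = int(np.argmin(times))  # neuron 1 fires first -> class 1
```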

SpykeTorch: Efficient Simulation of Convolutional Spiking Neural Networks with at most one Spike per Neuron

1 code implementation • 6 Mar 2019 • Milad Mozafari, Mohammad Ganjtabesh, Abbas Nowzari-Dalini, Timothée Masquelier

Application of deep convolutional spiking neural networks (SNNs) to artificial intelligence (AI) tasks has recently gained a lot of interest since SNNs are hardware-friendly and energy-efficient.

Bio-inspired digit recognition using reward-modulated spike-timing-dependent plasticity in deep convolutional networks

1 code implementation • 31 Mar 2018 • Milad Mozafari, Mohammad Ganjtabesh, Abbas Nowzari-Dalini, Simon J. Thorpe, Timothée Masquelier

We trained it using a combination of spike-timing-dependent plasticity (STDP) for the lower layers and reward-modulated STDP (R-STDP) for the higher ones.
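
For readers unfamiliar with the rule, a generic pair-based STDP update looks like the sketch below (the exponential form is the textbook version, not necessarily the simplified rule used in the paper; the reward-modulated variant scales the same update by a reward signal):

```python
import numpy as np

def stdp(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Generic pair-based STDP. dt = t_post - t_pre (ms).
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt > 0:
        return a_plus * np.exp(-dt / tau)
    return -a_minus * np.exp(dt / tau)

w = 0.5
w += stdp(+5.0)                    # causal pairing: weight increases
w += stdp(-5.0)                    # anti-causal pairing: weight decreases

# Reward modulation (R-STDP) multiplies the same update by a reward
# signal, e.g. w += reward * stdp(dt), so that pairings which led to a
# correct decision are reinforced and the others weakened.
```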

Optimal localist and distributed coding of spatiotemporal spike patterns through STDP and coincidence detection

no code implementations • 1 Mar 2018 • Timothée Masquelier, Saeed Reza Kheradpisheh

Here we investigated how a single spiking neuron can optimally respond to one given pattern (localist coding), or to either one of several patterns (distributed coding, i.e. the neuron's response is ambiguous but the identity of the pattern could be inferred from the response of multiple neurons), but not to random inputs.

First-spike based visual categorization using reward-modulated STDP

no code implementations • 25 May 2017 • Milad Mozafari, Saeed Reza Kheradpisheh, Timothée Masquelier, Abbas Nowzari-Dalini, Mohammad Ganjtabesh

In the highest layers, each neuron was assigned to an object category, and it was assumed that the stimulus category was the category of the first neuron to fire.

Game of Go • Object Recognition +1

STDP-based spiking deep convolutional neural networks for object recognition

1 code implementation • 4 Nov 2016 • Saeed Reza Kheradpisheh, Mohammad Ganjtabesh, Simon J. Thorpe, Timothée Masquelier

Coding was very sparse, with only a few thousand spikes per image, and in some cases the object category could be reasonably well inferred from the activity of a single higher-order neuron.

Object Recognition

STDP allows close-to-optimal spatiotemporal spike pattern detection by single coincidence detector neurons

no code implementations • 24 Oct 2016 • Timothée Masquelier

Our results indicate that a relatively small $\tau$ (at most a few tens of ms) is usually optimal, even when the pattern is much longer.

Humans and deep networks largely agree on which kinds of variation make object recognition harder

no code implementations • 21 Apr 2016 • Saeed Reza Kheradpisheh, Masoud Ghodrati, Mohammad Ganjtabesh, Timothée Masquelier

This feed-forward architecture has inspired a new generation of bio-inspired computer vision systems called deep convolutional neural networks (DCNN), which are currently the best algorithms for object recognition in natural images.

Object • Object Recognition +1

Deep Networks Can Resemble Human Feed-forward Vision in Invariant Object Recognition

no code implementations • 17 Aug 2015 • Saeed Reza Kheradpisheh, Masoud Ghodrati, Mohammad Ganjtabesh, Timothée Masquelier

Deep convolutional neural networks (DCNNs) have attracted much attention recently, and have been shown to be able to recognize thousands of object categories in natural image databases.

Object Recognition

Bio-inspired Unsupervised Learning of Visual Features Leads to Robust Invariant Object Recognition

no code implementations • 15 Apr 2015 • Saeed Reza Kheradpisheh, Mohammad Ganjtabesh, Timothée Masquelier

Retinal image of surrounding objects varies tremendously due to the changes in position, size, pose, illumination condition, background context, occlusion, noise, and nonrigid deformations.

Object • Object Categorization +1
