Covert Attention with a Spiking Neural Network
Sylvain Chevallier and Philippe Tarroux
LIMSI - CNRS, Orsay, France
{sylvain.chevallier,philippe.tarroux}@limsi.fr

April 25th, 2008

Recurrent spiking neural network Visual Focus Conclusion

Introduction

Motivations
- Neural mechanism of early attentional processes
- Bio-inspired spiking neural model
- Active vision for robotics
- Towards overt and covert attention

In this work
- Design and evaluate a recurrent SNN for visual focus
- Embed this network in a visual architecture

S. Chevallier (LIMSI - CNRS)

Covert Attention

ESANN’08

2 / 20


Recurrent spiking neural network Experimental validation Discussion

Outline

1. Recurrent spiking neural network
   - Recurrent spiking neural network
   - Experimental validation
   - Discussion
2. Visual Focus
   - Visual architecture
   - Experimental validation
   - Results


Emergence of attention within a neural population

Nicolas Rougier & Julien Vitay, "Emergence of attention within a neural population", Neural Networks, 2006.


Key points
- Neural field
- Spatio-temporal continuity
- Temporal dynamics
- Attentional process


Recurrent spiking neural network

Why spiking neurons?
- Also a temporal process
- Simple equations
- Discrete events
- Increased biological realism


Connection masks

Masks
- Static weight matrices
- Gaussian and difference of Gaussians
- Only spikes are propagated
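The Gaussian and difference-of-Gaussians masks can be sketched as static weight matrices; a minimal numpy version follows (mask size, widths and amplitudes are illustrative, not the values used in the talk):

```python
import numpy as np

def gaussian_mask(size, sigma, amplitude=1.0):
    """Static weight matrix with a Gaussian profile centred on the mask."""
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    return amplitude * np.exp(-(xx**2 + yy**2) / sigma**2)

def dog_mask(size, a, b, A=1.0, B=0.5):
    """Difference of Gaussians: narrow excitation minus broader inhibition."""
    return gaussian_mask(size, a, A) - gaussian_mask(size, b, B)

# 9x9 masks with illustrative widths
excitatory = gaussian_mask(9, sigma=2.0)
lateral = dog_mask(9, a=1.5, b=4.0)
```

Because the masks are static, they are computed once at network construction and only the spikes of presynaptic neurons are propagated through them at run time.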


Experimental set

(Diagram: stimulus + noise → input map → focus map)

Error measure
- A stimulus occupies a spatial position during a time interval i
- N spatial positions
- Stimulus center x_{i,S}, centroid of activity \bar{x}_{i,A}

\[
\bar{x}_{i,A} = \frac{1}{M} \sum_{j=1}^{M} \hat{x}_{i,j}
\]

\[
E_{focus} = \frac{1}{N} \sum_{i=1}^{N} d\left(x_{i,S}, \bar{x}_{i,A}\right)
\]
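A minimal sketch of this error measure, assuming d is the Euclidean distance and that each interval provides the positions of its M active units (the variable names and test values are hypothetical):

```python
import numpy as np

def focus_error(stim_centers, activity_positions):
    """E_focus: distance between the stimulus centre x_{i,S} and the centroid
    of activity, averaged over the N stimulus positions.

    stim_centers: (N, 2) array of stimulus centres x_{i,S}
    activity_positions: list of (M_i, 2) arrays of active-unit positions
    """
    errors = []
    for x_s, xs_a in zip(stim_centers, activity_positions):
        x_bar = np.mean(xs_a, axis=0)               # centroid (1/M) sum of x^_{i,j}
        errors.append(np.linalg.norm(x_s - x_bar))  # Euclidean d(., .)
    return np.mean(errors)                          # (1/N) sum over intervals

centers = np.array([[2.0, 2.0], [5.0, 5.0]])
activity = [np.array([[2.0, 2.0], [2.0, 4.0]]),    # centroid (2, 3) -> d = 1
            np.array([[5.0, 5.0]])]                # centroid (5, 5) -> d = 0
err = focus_error(centers, activity)               # -> 0.5
```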


Experimental results
- Presence of 1, 2, 3, 5, 10 or 25 distractors
- With Gaussian noise η ∼ N(0, σ) for σ ∈ {0.0, 0.1, 0.25, 0.5, 0.75, 1.0}


Discussion

Why spiking neurons?
- Subthreshold filtering ⇒ only relevant information is propagated
- Reduced complexity ⇒ only active neurons are processed
- Unified framework ⇒ information encoded in a temporal structure
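The "only active neurons are processed" point can be illustrated with an event-driven update that touches only the outgoing synapses of neurons that actually spiked, instead of a full matrix-vector product at every step (network size, weights and threshold are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
w = rng.normal(0.0, 0.05, size=(n, n))   # weight matrix (illustrative)
v = rng.uniform(0.0, 0.5, size=n)        # membrane potentials
threshold = 1.0

# Inject current into a few neurons so that some cross the threshold
v[:10] += 1.0

spiked = np.flatnonzero(v >= threshold)  # discrete events: indices of spikes
# Event-driven update: only the columns of the spiking neurons are visited
psp = w[:, spiked].sum(axis=1)
v += psp
v[spiked] = 0.0                          # reset the neurons that fired

work_event_driven = len(spiked) * n      # synapses visited this step
work_clock_driven = n * n                # cost of a dense full update
```

With sparse activity the event-driven cost scales with the number of spikes per step rather than with the total number of synapses.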


Visual architecture Experimental validation Results

Outline

1. Recurrent spiking neural network Recurrent spiking neural network Experimental validation Discussion 2. Visual Focus Visual architecture Experimental validation Results

S. Chevallier (LIMSI - CNRS)

Covert Attention

ESANN’08

10 / 20

Recurrent spiking neural network Visual Focus Conclusion

Visual architecture Experimental validation Results

A visual architecture for saliency extraction

Spiking neural network (SNN) with LIF neurons
Clock-driven simulator


- Luminance contrasts
- Normalized oriented contours
- Color opponents
- Multi-scale analysis
- Similar to parvo- and magno-cellular pathways
- Neurons of the saliency map are synchrony detectors

Saliency: nested regions of interest in different spatial scale ranges
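One way to sketch the luminance-contrast channel is a centre-surround (difference-of-Gaussians) filter applied at several scales and summed into a saliency map. This is an illustrative rate-based approximation, not the spiking implementation used in the talk, and all scale values are assumptions:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur in pure numpy (reflect boundary)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    pad = np.pad(img, radius, mode="reflect")
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="valid"), 0, out)

def contrast_map(img, c, s):
    """Centre-surround luminance contrast: |DoG| response."""
    return np.abs(gaussian_blur(img, c) - gaussian_blur(img, s))

def saliency(img, scales=((1.0, 3.0), (2.0, 6.0))):
    """Normalised contrast maps summed over scales (luminance channel only)."""
    maps = []
    for c, s in scales:
        m = contrast_map(img, c, s)
        maps.append(m / (m.max() + 1e-12))
    return sum(maps)

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0          # bright square on a dark background
sal = saliency(img)              # strongest response around the square's edges
```

The nested regions of interest come from the different spatial scale ranges: coarse scales select a broad region, fine scales refine it.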


Are SNNs real-time?

⇒ Focus on and follow the most salient region
⇒ Are SNNs suitable for real-time processing?


How many computation steps are needed for a correct answer?
High sampling rate → small variation between frames → few computation steps needed


Experimental set
- Images acquired with a Sony EVID31 camera
- Stimulus is a Khepera robot
- Analysis of a 30-frame sequence
- Each input frame is presented during N computation steps


Experimental results

E_focus error for different numbers of integration steps:


Experimental results

Computation time per frame for different numbers of integration steps per frame:

∼ 20 frames/sec with 3 integration steps for ∼ 53,000 neurons


Discussion

Differences with first-spike computation
- Spike trains continuously feed the focus map ⇒ increased temporal continuity

Overt and covert attention
- Implementation of an attentional spotlight
- Focusing on the most salient region
- Following a moving stimulus


Conclusion
- Saliency extraction
- Visual focus
- Robust to noise and perturbations
- Suitable for a real-time framework

Perspective
- Design a neural controller ⇒ saccadic moves with a camera


The end!

Thank you for your attention!


Neural field

Continuum Neural Field:

\[
\tau \frac{\partial u(x, t)}{\partial t} = -u(x, t) + \int_{M} w_M(x - x') f[u(x', t)]\, dx' + \int_{M'} s(x, y) I(y, t)\, dy + h
\]

with the lateral and afferent kernels

\[
w_M(x - x') = A e^{-\frac{|x - x'|^2}{a^2}} - B e^{-\frac{|x - x'|^2}{b^2}}, \quad A, B, a, b \in \mathbb{R}^*_+
\]

\[
s(x, y) = C e^{-\frac{|x - y|^2}{c^2}}, \quad C, c \in \mathbb{R}^*_+
\]
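A 1-D Euler discretisation of the continuum neural field equation can be sketched as follows; all parameter values are illustrative, not those of the cited model:

```python
import numpy as np

# 1-D Euler discretisation of  tau du/dt = -u + w*f[u] + s*I + h
n = 64
xs = np.linspace(0.0, 1.0, n)
dx = xs[1] - xs[0]
A, a, B, b = 1.5, 0.1, 0.75, 0.3     # lateral DoG kernel w_M (illustrative)
C, c = 1.0, 0.05                     # afferent kernel s (illustrative)
tau, h, dt = 1.0, -0.05, 0.05

d = np.abs(xs[:, None] - xs[None, :])            # pairwise |x - x'|
w = A * np.exp(-d**2 / a**2) - B * np.exp(-d**2 / b**2)
s = C * np.exp(-d**2 / c**2)

def f(u):
    return np.clip(u, 0.0, 1.0)                  # rate transfer function

I = 5.0 * np.exp(-(xs - 0.3)**2 / 0.01)          # input bump at x = 0.3
u = np.zeros(n)
for _ in range(200):                             # explicit Euler integration
    du = -u + (w @ f(u)) * dx + (s @ I) * dx + h
    u += (dt / tau) * du

peak = xs[np.argmax(u)]                          # bump settles on the stimulus
```

The local-excitation / broad-inhibition kernel makes a single bump of activity emerge and stick to the stimulated location, which is the mechanism behind the attentional focus.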


Spiking model

Leaky Integrate-and-Fire (LIF):

\[
C \dot{V}_i = g_{leak} (V_i - E_{leak}) + PSP_i(t) + I(t), \quad \text{if } V \le \vartheta; \text{ spike and reset } V \text{ otherwise}
\]

No synaptic conductance:

\[
S_j(t) = \sum_f \delta(t - t_j^{(f)} + d_j)
\]

\[
PSP_i(t) = \sum_j w_{ij} S_j(t)
\]

\[
V_i(t) \leftarrow V_i(t) + PSP_i(t)
\]
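A minimal clock-driven LIF step can be sketched as below, with the leak written as driving V back toward E_leak and a constant input current standing in for the PSP sum; all parameter values are illustrative:

```python
import numpy as np

dt = 1.0          # duration of one computation step (ms)
C = 1.0           # membrane capacitance
g_leak = 0.05     # leak conductance: pulls V back toward E_leak
E_leak = 0.0      # resting potential
theta = 1.0       # firing threshold

n_steps = 200
V = 0.0
I_in = 0.08       # constant input current (stands in for PSP_i(t))
spikes = []

for t in range(n_steps):
    # Below threshold: integrate the leaky membrane equation (Euler step)
    V += (dt / C) * (g_leak * (E_leak - V) + I_in)
    if V > theta:             # threshold crossed: emit a spike, reset V
        spikes.append(t)
        V = E_leak

rate = len(spikes) / (n_steps * dt)   # mean firing rate (spikes per ms)
```

With these values the membrane charges toward I_in / g_leak = 1.6, crosses the threshold periodically, and fires at a regular rate.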


Input encoding: (figure)


Activity centroid

Example with background noise (σ = 0.5). (Figure panels: mean input map activity, activity centroid of input map, activity centroid of focus map.)

