Covert Attention with a Spiking Neural Network
Sylvain Chevallier and Philippe Tarroux
LIMSI - CNRS, Orsay, France
{sylvain.chevallier,philippe.tarroux}@limsi.fr
April 25th, 2008
Introduction
Motivations
- Neural mechanisms of early attentional processes
- Bio-inspired spiking neural model
- Active vision for robotics
- Towards overt and covert attention

In this work
- Design and evaluate a recurrent SNN for visual focus
- Embed this network in a visual architecture
S. Chevallier (LIMSI - CNRS), Covert Attention, ESANN'08
Outline
1. Recurrent spiking neural network
   - Recurrent spiking neural network
   - Experimental validation
   - Discussion
2. Visual Focus
   - Visual architecture
   - Experimental validation
   - Results
Emergence of attention within a neural population
Nicolas Rougier and Julien Vitay, "Emergence of attention within a neural population", Neural Networks, 2006.
Emergence of attention within a neural population
Key points
- Neural field
- Spatio-temporal continuity
- Temporal dynamics
- Attentional process
Recurrent spiking neural network

Why spiking neurons?
- Also a temporal process
- Simple equations
- Discrete events
- Increased biological realism
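The LIF dynamics behind these points can be sketched in a few lines (a clock-driven Euler update; all parameter values here are illustrative assumptions, not the network's actual settings):

```python
# Minimal clock-driven leaky integrate-and-fire neuron (a sketch; the
# parameter values are illustrative, not those used in the paper).
def simulate_lif(inputs, dt=1.0, C=1.0, g_leak=0.1,
                 E_leak=0.0, theta=1.0, v_reset=0.0):
    """Integrate C*dV/dt = -g_leak*(V - E_leak) + I(t); return spike steps."""
    v = E_leak
    spikes = []
    for step, i_t in enumerate(inputs):
        v += (-g_leak * (v - E_leak) + i_t) * dt / C
        if v >= theta:          # threshold crossing: spike and reset
            spikes.append(step)
            v = v_reset
    return spikes

# A constant supra-threshold current makes the neuron fire periodically.
spikes = simulate_lif([0.2] * 50)
```

With a constant input, the inter-spike interval is constant, which is the discrete-event regularity the slides rely on.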
Connection masks

Masks
- Static weight matrices
- Gaussian and difference-of-Gaussians profiles
- Only spikes are propagated
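A difference-of-Gaussians mask of this kind can be sketched as follows (size, amplitudes and widths are illustrative assumptions):

```python
import numpy as np

# Difference-of-Gaussians connection mask (a sketch): short-range
# excitation minus a broader inhibitory surround, as a static weight
# matrix. All parameters below are illustrative assumptions.
def dog_mask(size=9, sigma_exc=1.0, sigma_inh=3.0, a_exc=1.0, a_inh=0.5):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    d2 = x**2 + y**2
    exc = a_exc * np.exp(-d2 / (2 * sigma_exc**2))
    inh = a_inh * np.exp(-d2 / (2 * sigma_inh**2))
    return exc - inh

mask = dog_mask()
# The center is excitatory, the far surround inhibitory.
```

Because the mask is static, it can be precomputed once; at run time only the incoming spikes are propagated through it.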
Experimental setup

[Figure: a stimulus with noise is projected on the input map, which feeds the focus map]

Error measure
- The stimulus occupies one of N spatial positions during each time interval i
- x_{i,S}: stimulus center; x̄_{i,A}: centroid of the focus map activity over the M active positions x̂_{i,j}

x̄_{i,A} = (1/M) Σ_{j=1}^{M} x̂_{i,j}

E_focus = (1/N) Σ_{i=1}^{N} d(x_{i,S}, x̄_{i,A})
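The error measure above can be computed directly (a sketch with a hypothetical e_focus helper; the point coordinates are illustrative):

```python
import math

# E_focus (a sketch): for each of the N stimulus positions, compare the
# stimulus center with the centroid of the M active positions on the
# focus map, then average the distances over the N intervals.
def centroid(points):
    m = len(points)
    return (sum(p[0] for p in points) / m, sum(p[1] for p in points) / m)

def e_focus(stimulus_centers, activity_points):
    """stimulus_centers: list of (x, y); activity_points: list of point lists."""
    dists = [math.dist(c, centroid(pts))
             for c, pts in zip(stimulus_centers, activity_points)]
    return sum(dists) / len(dists)

# Activity clustered symmetrically around the stimulus gives zero error.
err = e_focus([(5.0, 5.0)], [[(4.0, 5.0), (6.0, 5.0), (5.0, 4.0), (5.0, 6.0)]])
```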
Experimental results
- 1, 2, 3, 5, 10 or 25 distractors
- Gaussian noise η ∼ N(0, σ) with σ ∈ {0.0, 0.1, 0.25, 0.5, 0.75, 1.0}
Discussion

Why spiking neurons?
- Subthreshold filtering ⇒ only relevant information is propagated
- Reduced complexity ⇒ only active neurons are processed
- Unified framework ⇒ information encoded in a temporal structure
A visual architecture for saliency extraction
- Spiking neural network (SNN) with LIF neurons
- Clock-driven simulator
- Luminance contrasts
- Normalized oriented contours
- Color opponents
- Multi-scale analysis, similar to the parvo- and magnocellular pathways
- Neurons of the saliency map are synchrony detectors
- Saliency: nested regions of interest at different spatial scales
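The center-surround contrast idea behind these feature maps can be illustrated with a minimal rate-based sketch (a box-blur stand-in for Gaussian smoothing, not the paper's spiking implementation; all sizes are illustrative):

```python
import numpy as np

# Center-surround luminance contrast (a sketch): the image is smoothed
# at two scales and their absolute difference highlights contrast
# regions, the raw material of a saliency map.
def box_blur(img, radius):
    """Separable box blur as a simple stand-in for Gaussian smoothing."""
    out = img.astype(float)
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    for axis in (0, 1):
        out = np.apply_along_axis(lambda m: np.convolve(m, kernel, 'same'),
                                  axis, out)
    return out

def contrast_map(img, r_center=1, r_surround=4):
    return np.abs(box_blur(img, r_center) - box_blur(img, r_surround))

img = np.zeros((16, 16))
img[6:10, 6:10] = 1.0               # a bright square stimulus
sal = contrast_map(img)
# Flat regions respond weakly; the stimulus region responds strongly.
```

Computing such maps at several (r_center, r_surround) pairs yields the nested, multi-scale regions of interest mentioned above.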
Are SNNs real-time?
⇒ Focus on and follow the most salient region
⇒ Are SNNs suitable for real-time processing?
How many computation steps are needed for a correct answer?
High sampling rate → small variations between frames → few computations needed
Experimental setup
- Images acquired with a Sony EVI-D31 camera
- The stimulus is a Khepera robot
- Analysis of a 30-frame sequence
- Each input frame is presented during N computation steps
Covert Attention
ESANN’08
13 / 20
Recurrent spiking neural network Visual Focus Conclusion
Visual architecture Experimental validation Results
Experimental results
E_focus error for different numbers of integration steps:
Computation time per frame for different numbers of integration steps per frame:
∼ 20 frames/s with 3 integration steps for ∼ 53,000 neurons
Covert Attention
ESANN’08
14 / 20
Recurrent spiking neural network Visual Focus Conclusion
Discussion

Differences with first-spike computation
- Spike trains continuously feed the focus map ⇒ increased temporal continuity

Overt and covert attention
- Implementation of an attentional spotlight
- Focusing on the most salient region
- Following a moving stimulus
Conclusion
- Saliency extraction
- Visual focus
- Robust to noise and perturbations
- Suitable for a real-time framework

Perspective
- Design a neural controller ⇒ saccadic moves with a camera
The end!
Thank you for your attention!
Neural field

Continuum neural field:

τ ∂u(x,t)/∂t = −u(x,t) + ∫_M w_M(x − x′) f[u(x′,t)] dx′ + ∫_{M′} s(x,y) I(y,t) dy + h

with the lateral weight kernel and input kernel

w_M(x − x′) = A e^{−|x−x′|²/a²} − B e^{−|x−x′|²/b²},  with A, B, a, b > 0
s(x, y) = C e^{−|x−y|²/c²},  with C, c > 0
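A forward-Euler discretization of this field can be sketched in one dimension (grid size, time step and the parameters standing in for A, B, a, b, C, c and h are all illustrative assumptions):

```python
import numpy as np

# Forward-Euler integration of a 1-D continuum neural field (a sketch;
# kernel amplitudes/widths and the firing-rate function f are
# illustrative assumptions, not the paper's settings).
N, dt, tau, h = 64, 0.1, 1.0, 0.0
xs = np.linspace(0.0, 1.0, N)
dx = xs[1] - xs[0]

def kernel(d2, amp, width):
    return amp * np.exp(-d2 / width**2)

d2 = (xs[:, None] - xs[None, :]) ** 2
w = kernel(d2, 2.0, 0.1) - kernel(d2, 1.0, 0.4)   # lateral DoG weights w_M
s = kernel(d2, 1.0, 0.05)                          # input kernel s(x, y)
f = lambda u: np.clip(u, 0.0, 1.0)                 # firing-rate function

I = np.exp(-((xs - 0.5) ** 2) / 0.01)              # input bump at x = 0.5
u = np.zeros(N)
for _ in range(200):
    du = -u + (w @ f(u)) * dx + (s @ I) * dx + h   # the field equation
    u += dt * du / tau

# The field settles into a localized bump of activity around the input.
```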
Spiking model

Leaky Integrate-and-Fire (LIF):

C dV_i/dt = −g_leak (V_i − E_leak) + PSP_i(t) + I(t); spike and reset V_i when V_i ≥ ϑ

No synaptic conductances (delta synapses with delays d_j):

S_j(t) = Σ_f δ(t − t_j^(f) − d_j)
PSP_i(t) = Σ_j w_ij S_j(t)
V_i(t) ← V_i(t) + PSP_i(t)
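The delta-synapse update can be sketched event by event (a hypothetical propagate helper; weights and delays are illustrative):

```python
import heapq

# Event-driven propagation sketch for delta synapses: each spike from
# neuron j is queued with its delay d_j and, on delivery, instantly adds
# w_ij to every target potential V_i. Values below are illustrative.
def propagate(spikes, weights, delays, v_init):
    """spikes: list of (t, j); returns potentials after all deliveries."""
    v = list(v_init)
    events = [(t + delays[j], j) for t, j in spikes]   # delivery times
    heapq.heapify(events)
    while events:                                      # deliver in time order
        _, j = heapq.heappop(events)
        for i, w_row in enumerate(weights):            # V_i += w_ij
            v[i] += w_row[j]
    return v

# Two presynaptic neurons, one target: each spike adds its weight.
v = propagate(spikes=[(0.0, 0), (1.0, 1)],
              weights=[[0.3, 0.2]], delays=[0.5, 0.5], v_init=[0.0])
```

Because the synapse is a delta pulse, no conductance state must be integrated between spikes, which is what keeps the per-event cost low.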
Input encoding: [figure]
Activity centroid

Example with background noise (σ = 0.5).
[Figure panels: mean input map activity; activity centroid of the input map; activity centroid of the focus map]