Computational Vision Beyond the LN Model

• Gain control / normalization

• Contextual interactions

• Pooling / invariance

RF organization in V1

Complex cell

Hubel & Wiesel ’59, ’62, ’68

RF organization in V1

ganglion cells

simple cells

complex cells

Hubel & Wiesel ’59, ’62, ’68

Hubel & Wiesel (feedforward) model

ganglion cells → simple cell (Hubel & Wiesel ’62)
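The feedforward picture can be made concrete in a few lines of code. The sketch below is an illustration, not Hubel & Wiesel's own formulation: the function names (dog, simple_cell_response) and all parameter values are assumptions. It builds an oriented simple-cell receptive field by summing the center-surround receptive fields of LGN-like units whose centers lie along a line, then applies a linear sum plus half-wave rectification to an image patch.

```python
# A minimal sketch of the Hubel & Wiesel '62 idea: a simple-cell receptive
# field built by summing the center-surround (DoG) receptive fields of
# LGN-like units whose centers fall along a line. Parameters are illustrative.
import numpy as np

def dog(size, center, sigma_c=1.0, sigma_s=2.0):
    """Difference-of-Gaussians (center-surround) receptive field."""
    y, x = np.mgrid[0:size, 0:size]
    d2 = (x - center[0]) ** 2 + (y - center[1]) ** 2
    center_g = np.exp(-d2 / (2 * sigma_c ** 2)) / (2 * np.pi * sigma_c ** 2)
    surround_g = np.exp(-d2 / (2 * sigma_s ** 2)) / (2 * np.pi * sigma_s ** 2)
    return center_g - surround_g

size = 32
# LGN centers aligned along a vertical line -> vertically oriented simple-cell RF
centers = [(16, 8 + 4 * i) for i in range(5)]
simple_rf = sum(dog(size, c) for c in centers)

def simple_cell_response(image, rf):
    """Linear sum over the RF followed by half-wave rectification (the 'LN' simple cell)."""
    return max(0.0, float(np.sum(image * rf)))

# A vertical bar over the RF center drives the cell; an orthogonal bar would not.
bar = np.zeros((size, size))
bar[:, 15:18] = 1.0
print(simple_cell_response(bar, simple_rf))
```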

Validation of the feedforward model: V1-LGN simultaneous recordings

cross-correlogram


LGN cell

V1 cell

Reid & Alonso 1995
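For readers unfamiliar with the analysis, the following minimal sketch shows how such a cross-correlogram is computed from two spike trains. The spike times, bin width, and ±25 ms window are illustrative choices, not those of Reid & Alonso.

```python
# A minimal sketch of the cross-correlogram analysis used to test LGN -> V1
# connectivity: histogram of time differences between spikes of the two cells.
# A sharp peak at a short positive lag is the signature of a monosynaptic drive.
import numpy as np

def cross_correlogram(spikes_pre, spikes_post, max_lag=0.025, bin_width=0.001):
    """Histogram of (post - pre) spike-time differences within +/- max_lag (seconds)."""
    diffs = []
    for t_pre in spikes_pre:
        nearby = spikes_post[np.abs(spikes_post - t_pre) <= max_lag]
        diffs.extend(nearby - t_pre)
    bins = np.arange(-max_lag, max_lag + bin_width, bin_width)
    counts, edges = np.histogram(diffs, bins=bins)
    return counts, edges

# Toy example: the "V1" cell tends to fire ~2-3 ms after the "LGN" cell.
rng = np.random.default_rng(0)
lgn = np.sort(rng.uniform(0, 10, 2000))                                  # seconds
v1 = np.sort(np.concatenate([lgn[::3] + 0.0025, rng.uniform(0, 10, 800)]))
counts, edges = cross_correlogram(lgn, v1)
print("peak lag (ms):", 1000 * edges[np.argmax(counts)])
```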

Alternative models

Feedforward • Inhibitory • Recurrent

Figure 1. Models of visual cortical orientation selectivity. a, In feedforward models all “first-order” cortical neurons (triangle, excitatory; hexagon, inhibitory) receive converging input (gray arrow) from a population of LGN neurons that cover a strongly oriented region of visual space. The bandwidth or sharpness of a cortical cell’s orientation tuning is determined by the aspect ratio of its LGN projection. b, Many inhibitory models employ a mild feedforward bias to establish the initial orientation preference of cortical neurons and utilize inhibitory inputs (white arrows), from cortical neurons preferring different orientations, to suppress nonpreferred responses. c, Here we present a model in which recurrent cortical excitation (black arrows) among cells preferring similar orientations, combined with iso-orientation inhibition from a broader range of orientations, integrates and amplifies a weak thalamic orientation bias, which is distributed across the cortical columnar population.

Somers et al 1995

Validation of the feedforward model

• Silencing/cooling the cortex does not seem to affect orientation tuning

• Suggests that the core computation happens outside the cortex, i.e., in the LGN input

Ferster et al 1996

How would you go about building a complex cell?

Hubel & Wiesel (feedforward) model cont’d

Hubel & Wiesel

Hubel & Wiesel (feedforward) model cont’d: squaring / “energy model”

complete model

Heeger & Carandini ’94; Lampl et al ’01; Touryan et al ’02; Rust et al ’05; Finn & Ferster ’07
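As a concrete illustration of the squaring / energy idea, the snippet below is a minimal sketch under our own parameter choices (not code from any of the papers cited above): it sums the squared outputs of two Gabor filters in quadrature and shows that shifting the phase of a grating changes the output very little.

```python
# A minimal sketch of the squaring / "energy" model of a complex cell:
# two simple-cell-like Gabor filters in quadrature (90 deg apart in phase),
# each squared, then summed, giving a largely phase-invariant response.
import numpy as np

def gabor(size, wavelength=8.0, sigma=4.0, theta=0.0, phase=0.0):
    y, x = np.mgrid[-size // 2:size // 2, -size // 2:size // 2]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * xr / wavelength + phase)
    return envelope * carrier

def complex_cell_energy(patch, theta=0.0):
    """Energy model: sum of squared responses of a quadrature Gabor pair."""
    g_even = gabor(patch.shape[0], theta=theta, phase=0.0)
    g_odd = gabor(patch.shape[0], theta=theta, phase=np.pi / 2)
    return np.sum(patch * g_even) ** 2 + np.sum(patch * g_odd) ** 2

# Shifting the phase of a matched grating changes each simple-cell response,
# but leaves the energy response nearly unchanged ("changing phase has little effect").
size = 32
y, x = np.mgrid[0:size, 0:size]
for grating_phase in (0.0, np.pi / 3, np.pi / 2):
    grating = np.cos(2 * np.pi * x / 8.0 + grating_phase)
    print(round(complex_cell_energy(grating), 2))
```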


Results: We recorded simultaneously from 141 pairs of layer IV simple cells and layer II and III complex cells in vertical alignment within cat visual cortex (misalignment, < 200 µm). This precise electrode alignment was made possible by using a multielectrode matrix of thin and independently movable probes [27].

Simple-complex connected cell pair (panels a-c; see Fig. 2 caption below)

Validation of the feedforward model

…and do not receive direct geniculate input. Here we provide evidence for the functional connectivity predicted by these anatomical studies and other theoretical models based on a feedforward hierarchical organization.

…squares [29,30]), and the field from the complex cell was mapped with a moving bar. These simultaneously recorded cells had overlapping receptive fields (Fig. 2a) and similar orientation preferences (Fig. 2b). Cross-correlation analysis showed a strong positive peak displaced from zero, indicating that the simple cell tended to fire before the complex cell (Fig. 2c). The fast rise time and peak time of this displaced positive peak is consistent with a monosynaptic excitatory connection [32] (Fig. 2c, inset). Because the complex cell was recorded outside of the range of most geniculate afferents, these two cortical cells could not share strong thalamic inputs. (Within the superficial layers, only cells near the layer III/IV border have been demonstrated to receive direct geniculate input [10,11,13].) The second example is from a layer IV simple cell simultaneously…


Fig. 2. A layer IV simple cell simultaneously recorded with a layer II complex cell. (The electrode track for the complex cell ran within the top 100 µm of layer II.) (a) Superimposed receptive fields of the simple cell, mapped with white noise [29,30], and the complex cell, mapped with a moving bar. Solid lines, ‘on’ responses; dashed lines, ‘off’ responses. The darkest gray corresponds to the strongest responses. As expected from previous studies [21], vertically aligned layer IV simple cells and layer II and III complex cells had similar receptive field sizes (complex/simple, 1.18 ± 0.41). (b) Orientation tuning curves of the simple cell and the complex cell obtained with a moving bar. Solid line, simple cell; dotted line, complex cell. (c) Cross-correlation between the firing patterns of the simple cell and the complex cell. The ordinate gives the number of simple-cell spikes that occurred before (right) or after (left) a complex-cell spike. This correlogram shows a peak displaced to the right of zero, indicating that the simple cell tended to fire before the complex cell. The characteristics of the positive peak are consistent with a monosynaptic excitatory connection. (Alonso & Martinez 1998)

Energy mechanisms and divisive normalization


Figure 3. Relationship between the Two Significant Eigenvectors (A) Significant eigenvectors of an example complex cell. Scale bar, 2°. Solid line, spatial profiles of each eigenvector along the axis perpendicular to the preferred orientation (see Figure 4). Dashed line, Gabor fit. The Gabor fits of the two eigenvectors had a phase difference of 85°.

Touryan et al ’05


Energy mechanisms and divisive normalization

Touryan et al ’05
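The analysis behind these figures is spike-triggered covariance. The sketch below is illustrative only: it simulates an energy-model complex cell driven by white noise and recovers its two subunits as the leading eigenvectors of the spike-triggered covariance matrix. The simulated cell, spike criterion, and dimensions are assumptions; Touryan et al. used natural image stimuli and a proper significance test for the eigenvectors.

```python
# A minimal sketch of spike-triggered covariance: eigenvectors of the
# spike-triggered stimulus covariance (relative to the raw stimulus covariance)
# recover the quadrature pair of subunits of an energy-model complex cell.
import numpy as np

rng = np.random.default_rng(1)
dim, n_stim = 16 * 16, 20000
stimuli = rng.standard_normal((n_stim, dim))           # white-noise "images"

# Simulated complex cell: energy of two fixed orthonormal subunits.
w1 = rng.standard_normal(dim); w1 /= np.linalg.norm(w1)
w2 = rng.standard_normal(dim); w2 -= w1 * (w1 @ w2); w2 /= np.linalg.norm(w2)
energy = (stimuli @ w1) ** 2 + (stimuli @ w2) ** 2
spikes = energy > np.percentile(energy, 90)             # top responses "spike"

# Spike-triggered covariance relative to the prior stimulus covariance.
spike_triggered = stimuli[spikes]
stc = np.cov(spike_triggered, rowvar=False) - np.cov(stimuli, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(stc)

# The two largest-eigenvalue eigenvectors should span the subunit plane.
recovered = eigvecs[:, -2:]
print(np.round(np.abs(recovered.T @ np.column_stack([w1, w2])), 2))
```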

Computational models of complex cells

Changing phase has little effect

source: Field et al 1993

Max pooling

Motivation: superposition problem and robustness to clutter

Increase in tolerance to position: C1 = local max over a pool of S1 cells

Increase in tolerance to scale: C1 = local max over a pool of S1 cells

Riesenhuber & Poggio 1999

Superposition problem

Linear pooling

Max pooling
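A toy comparison of the two pooling rules makes the superposition point explicit. The subunit responses below are made up for illustration, not data from the studies cited on these slides.

```python
# A minimal sketch contrasting linear pooling with MAX pooling over a set of
# subunit (S1-like) responses: with two bars in the receptive field, the
# MAX-pooled response stays close to the larger of the two single-bar responses.
import numpy as np

def pool(subunit_responses, mode="max"):
    return np.max(subunit_responses) if mode == "max" else np.sum(subunit_responses)

# Made-up subunit responses to bar stimuli at different positions.
bar_a = np.array([0.0, 0.9, 0.1, 0.0])       # bar at position 2
bar_b = np.array([0.0, 0.0, 0.1, 0.6])       # bar at position 4
both = bar_a + bar_b                          # two bars presented together

for mode in ("linear", "max"):
    r_a, r_b, r_ab = (pool(x, mode) for x in (bar_a, bar_b, both))
    print(f"{mode:>6}: bar A={r_a:.2f}  bar B={r_b:.2f}  both={r_ab:.2f}")
# Linear pooling predicts facilitation (both > either bar alone);
# MAX pooling predicts both ~ max(bar A, bar B), as reported for many complex cells.
```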

Computational diversity


Computational Diversity in Complex Cells of Cat Primary Visual Cortex
Ian M. Finn and David Ferster, Department of Neurobiology and Physiology, Northwestern University, Evanston, Illinois 60208

A previous study has suggested that complex cells perform a MAX-like operation on their inputs: when two bar stimuli are presented within the receptive field, regardless of their relative separation, the cell’s response is similar in amplitude to the larger of the responses elicited by the individual stimuli. This description of complex cells seems at odds with the classical energy model in which complex cells receive input from multiple simple cells with overlapping receptive fields. The energy model predicts, and experiments have confirmed, that bar stimuli should facilitate or suppress one another depending on their relative separation. We have recorded intracellularly from…

Max-like computation in the visual cortex

Max-like in V1 (Lampl et al ’04)

Max-like in V4 (Gawne & Martin ’02)

Complex cell model

(1) half-rectification and summing over phases at each location, for tolerance to contrast reversal

( · )² + ( · )² + …  (sum of squared, half-rectified subunit responses)

(2) gain control / divisive normalization

(3) selective max-like pooling over nearby positions and scales, for tolerance to 2D transformations

C1

Carandini et al. ’97, Linearity and Normalization in Simple Cells
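A minimal sketch of the divisive-normalization stage, in the spirit of Heeger & Carandini. The function name, the exponent, the semi-saturation constant sigma, and the normalization pool are illustrative assumptions, not fitted values from the paper.

```python
# A minimal sketch of divisive (gain-control) normalization: each unit's
# driving input is divided by a constant plus the pooled activity of a
# population of neighboring units. Parameter values are illustrative.
import numpy as np

def normalize(drive, sigma=0.1, exponent=2.0):
    """Divisive normalization across a population of unit responses."""
    numerator = np.maximum(drive, 0.0) ** exponent
    denominator = sigma ** exponent + np.sum(numerator)
    return numerator / denominator

# Contrast-response saturation falls out of the division: doubling contrast
# no longer doubles the normalized response at high contrast.
for contrast in (0.05, 0.1, 0.2, 0.4, 0.8):
    drive = contrast * np.array([1.0, 0.5, 0.2])   # one preferred unit + its pool
    print(contrast, np.round(normalize(drive), 3))
```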


Increase in tolerance to scale: C1 = local max over a pool of S1 cells

Figure 1. Two models of simple cell function. A, The linear model, composed of a linear stage (receptive field) and a rectification stage. The linear stage performs a weighted sum of the light intensities over local space and recent time. This sum is converted into a positive firing rate by…


RF organization in V1

Hubel & Wiesel ’59, ’62, ’68

Feedforward hierarchical models

• Earlier models: Hubel & Wiesel ’62, Fukushima ’80, Wallis & Rolls ’97, Mel ’97

• HMAX (Riesenhuber & Poggio ’99; Serre, Kouh, Cadieu, Knoblich, Kreiman & Poggio ’05, ’07; Serre, Oliva & Poggio ’07) and many extensions (e.g., Mutch & Lowe ’06; Masquelier & Thorpe ’07)

• High-throughput screening (Pinto et al ’09, Cadieu et al ’14, Yamins et al ’14)
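An HMAX-flavored sketch of the alternating S (template matching) and C (max pooling) layers follows. Layer sizes, the Gaussian-like tuning function, and pooling ranges are our own illustrative choices rather than the published parameters of any of these models.

```python
# A minimal sketch of a feedforward hierarchy in the spirit of HMAX:
# S layers build selectivity by template matching, C layers build invariance
# by MAX pooling over position (or scale). All sizes here are illustrative.
import numpy as np

def s_layer(inputs, templates, sharpness=4.0):
    """Template matching: Gaussian-like tuning of each unit to its template."""
    dists = np.array([np.linalg.norm(inputs - t, axis=-1) for t in templates])
    return np.exp(-sharpness * dists ** 2)

def c_layer(s_responses, pool_size=2):
    """Invariance: MAX over a local pool of S units (e.g., nearby positions)."""
    n = s_responses.shape[-1] // pool_size
    trimmed = s_responses[..., :n * pool_size]
    return trimmed.reshape(*s_responses.shape[:-1], n, pool_size).max(axis=-1)

# Toy input: afferent feature vectors at 8 positions, 4-dimensional each.
rng = np.random.default_rng(0)
afferents = rng.standard_normal((8, 4))
templates = rng.standard_normal((3, 4))        # 3 hand-picked/learned templates

s1 = s_layer(afferents, templates)             # shape (3 templates, 8 positions)
c1 = c_layer(s1, pool_size=2)                  # shape (3, 4): position tolerance
print(s1.shape, c1.shape)
```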

Learning invariances from temporal continuity

• Hebbian learning:

• Neurons as coincidence detectors

• ‘What fires together, wires together’

• Hypothesis:

• S cells learn corr. in space


• C cells learn corr. in time
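A minimal sketch of a Foldiak-1991-style trace rule that implements this hypothesis is shown below (see also the model diagram and captions that follow). The winner-take-all competition, learning rate, trace decay, and toy stimulus sequence are illustrative assumptions, not the exact rule used by Masquelier et al 2007.

```python
# A minimal sketch of a trace rule for learning S -> C connectivity from
# temporal continuity: the postsynaptic term is a slowly decaying trace of the
# C unit's recent activity, so S inputs active in successive frames (the same
# "orientation" at shifted positions) get strengthened onto the same C unit.
import numpy as np

rng = np.random.default_rng(0)
n_s, n_c = 16, 4
W = np.abs(rng.standard_normal((n_c, n_s))) * 0.1    # S -> C weights
trace = np.zeros(n_c)
eta, decay = 0.05, 0.8

def frame_sequence(n_frames=2000):
    """Toy stimulus: one 'orientation' (group of 4 S units) drifts across positions."""
    for _ in range(n_frames):
        group = rng.integers(n_c)                     # pick an orientation
        for pos in range(4):                          # sweep it across positions
            s = np.zeros(n_s)
            s[group * 4 + pos] = 1.0
            yield s

for s in frame_sequence():
    c = W @ s                                         # C unit activations
    winner = np.argmax(c)                             # simple competition
    y = np.zeros(n_c); y[winner] = 1.0
    trace = decay * trace + (1 - decay) * y           # temporal trace
    W += eta * np.outer(trace, s)                     # trace-rule Hebbian update
    W /= np.linalg.norm(W, axis=1, keepdims=True)     # keep weights bounded

# After learning, C units tend to concentrate their weights on one group of
# S units, i.e., one "orientation" across positions (not guaranteed; a sketch).
print(np.round(W, 2))
```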

Model diagram: LGN ON/OFF (DoG) units → 4 × 4 cortical columns of 16 S1 simple cells each (each S1 sampling a 7 × 7 LGN grid) → 4 C1 complex cells

Masquelier et al 2007 (see also Foldiak 1991)

Learning invariances from temporal continuity

4 × 4 cortical columns, 16 S1 units in each

Figure 3: Overview of the specific implementation of the Hubel & Wiesel V1 model used. LGN-like ON- and OFF-center units are modeled by Difference-of-Gaussian (DoG) filters. Simple units (denoted S1) sample their inputs from a 7 × 7 grid of LGN-type afferent units. Simple S1 units are organized in cortical hypercolumns (4 × 4 grid, 3 pixels apart, 16 S1 units per hypercolumn). At the next stage, 4 complex (C1) units receive inputs from these 4 × 4 × 16 S1 cells. This paper focuses on the learning of the S1-to-C1 connectivity.

Figure 4: Reconstructed S1 preferred stimuli for each one of the 4 × 4 cortical hypercolumns (on this figure the position of the reconstructions within a cortical column is arbitrary). Most units show a Gabor-like selectivity similar to what has been previously reported in the literature (see text).

…and complex C1 units (see Fig. 3), which constitutes a direct implementation of the Hubel and Wiesel (1962) model of striate cortex (see Box 1). The goal of a C1 unit is to pool over S1 units with the same preferred orientation, but with shifted receptive fields. In this context our hypothesis becomes: ‘in a given neighborhood, the dominant orientation is likely to be the same from one frame to another’. As our results suggest (see later), this constitutes a reasonable hypothesis, which leads to appropriate pooling.

…and Sejnowski, 1998; Stringer and Rolls, 2000; Rolls and Milward, 2000; Wiskott and Sejnowski, 2002; Einhäuser et al., 2002; Spratling, 2005). However, as pointed out by Spratling (2005), the trace rule by itself is inappropriate when multiple objects are present in a scene: it cannot distinguish which input corresponds to which object, and it may end up combining multiple objects in the same representation. Hence most trace-rule-based algorithms require stimuli to be presented in isolation (Földiák, 1991; Oram and Földiák, 1996; Wallis, 1996; Stringer and Rolls, 2000), and would fail to learn from cluttered natural input sequences. To solve this problem, Spratling made the hypothesis that the same object could not activate two distinct…

Results: We tested the proposed learning mechanisms in a 3-layer feedforward network mimicking the Lateral Geniculate Nucleus (LGN) and V1 (see Fig. 3)…

Masquelier et al 2007 (see also Foldiak 1991); movie courtesy Wolfgang Einhauser

Learning the invariance from temporal continuity

Masquelier et al 2007

(a) S1 units (n=73) that remain connected to C1 unit # 1 after learning

(b) S1 units (n=35) that remain connected to C1 unit # 2 after learning

(c) S1 units (n=59) that remain connected to C1 unit # 3 after learning

(d) S1 units (n=38) that remain connected to C1 unit # 4 after learning

…the manner in which recognition is generalized across views reflects a process by which object representations are built up, in part, from associated sequential views.

…attend to the images during training, they were told that their performance in the discrimination task would be affected by what they learned in the training phase. They were, however, not told that learning might actually lead to worse performance! The 3D head model was displayed centrally, subtending an angle of approximately… Each experiment consisted of three interleaved blocks of sequence presentation and testing. During the exposure phase, participants were presented with a total of ten heads. Each presentation consisted of the head being displayed in seven different poses at 200 ms per image (see Figure 1). By presenting the images in rapid sequential order the head appeared to undergo a smooth transformation. The sequence was played back and forth for a total of 8.4 seconds. During each exposure phase, the faces were presented in pseudo-random order twice. Each subject…

Effects of temporal associations on learning and memory

General methods

Background

The temporal association hypothesis predicts that views of objects are associated as examples of a single object

Results. Overall performance was good: on average, 74.7% of the face pairs were correctly categorized as being the same or different. In a study with the same face database, subjects with no prior exposure to the faces managed only 65.4% correct (10). This figure is lower than the worst performance of 72.6% recorded in the first block of the experiment, confirming that…

Figure 1. The faces presented during the experiment are rendered views of a three-dimensional head model. Each head consists of a) a textured surface and b) a surface mesh. c) Examples of the face pairs used in the three experiments. Each experiment used a unique set of twenty heads of this type.

Wallis & Bulthoff ’01


Effects of temporal associations on learning and memory

• Discrimination worst for prototypes that are part of the same training sequence

• Control: Performance unaffected when faces presented simultaneously rather than in sequence

• Control: Performance unaffected when faces presented during rd sequence

(A) Twelve three-dimensional head models were generated by scanning the heads of 12 female volunteers. These scans, which contained both textural and shape information, were then cropped to remove extraneous features such as hair. (B) The heads were split into three groups, each containing four individuals (1, 2, 3, and 4). The figure shows the grouping used for one of the 10 observers.

Wallis & Bulthoff ’01

Fig. 3. (A) During training, subjects were exposed to the morph sequences such that for each sequence, a single head appeared to rotate twice from left profile to right and back. Examples of the training sequences can be viewed at the following web sites: http://www.kyb.tuebingen.mpg.de/bu/people/guy/morph.html and http://www.kyb.tuebingen.mpg.de/bu/people/guy/webexpt/index.html. (B) After training, individual faces were tested in a delayed match-to-sample task, in which observers were asked to indicate whether the two faces were different views of the same head. Test images always depicted a face either directly from the front or in profile, i.e., no…

Learning in IT

Prediction for responses collected in the test phase: tolerance learning from temporal contiguity should cause incorrect grouping of two different object images (here P and N). Thus, the predicted effect is a decrease in object selectivity at the swap position that increases with increasing exposure (in the limit, reversing object preference), and little or no change in object selectivity at the non-swap position.

Li & DiCarlo ’08


