1. How sensitive are individual rods? And how do we know?

Answer: The rods contain a pigment called rhodopsin, which makes them roughly 500 times more sensitive than cones, so rods are well suited to low-level "night" (scotopic) vision. Rhodopsin absorbs green light most strongly and reflects red and blue light relatively well, which is why rods are most sensitive to green light and relatively insensitive to red. Psychologists know that we are more sensitive to green light than to red and blue light from experiments testing subjects' sensitivity to lights of different wavelengths. A related practical example: the bridges (control rooms) of ships sailing at night are lit through red filters; because the rods are less sensitive to red light, the filters preserve their rhodopsin for responding to the dim light coming from potential hazards.

On the other hand, a psychophysical experiment by Hecht et al. indicates that about 7 rod receptors must respond at roughly the same time for the subject to report "seeing." They used very dim flashes of light to determine the absolute threshold of vision.

2. What are the factors that degrade the retinal image?

Answer: Pupil size governs the main factors that degrade the retinal image: it affects the amount of diffraction, the aberrations, and therefore the point spread function (PSF). First, diffraction occurs when a wave encounters an obstacle or aperture, so the light wave is diffracted as it enters the pupil; diffraction increases as the pupil gets smaller. Second, wave aberration occurs when light coming from a single point is refracted by the lens: the refracted rays do not all converge to the same point on the retina, so the retinal image is blurred. In addition to wave aberration there is chromatic aberration in our eyes, meaning that different wavelengths are refracted by different amounts. As a result, red light is focused behind the retina, green light on the retina, and blue light in front of the retina. Both wave and chromatic aberration decrease as the pupil gets smaller. Third, the PSF is the image that an optical system forms of a point source. The point source is the most fundamental object and forms the basis for any complex object. The PSF of a perfect optical system is the Airy disc, which is the Fraunhofer diffraction pattern for a circular pupil; the Airy disc gets smaller as the pupil gets larger. To conclude, a smaller pupil gives little spreading due to defocus or aberrations but more diffraction, whereas a larger pupil gives less diffraction but more aberration and hence a more blurred image.

3. Describe and illustrate the functional benefits of having lateral inhibition in a sensory system.

Answer: Lateral inhibition occurs when the activity of one cell suppresses the activity of a nearby cell. It illustrates that vision is not a passive process of recording merely what is objectively there, and it also explains optical illusions such as Mach bands. Different photoreceptors in the eye respond to varying degrees of light. When one cell is activated in response to light, its activity impairs or prevents neighboring cells from activating. This causes the edges between light and dark areas to appear more prominent than they otherwise would. For example, without lateral inhibition the border between a black tile and a white tile would appear less distinct. Some psychologists believe that the ability to identify edges easily has survival value.
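To make the edge-enhancement point concrete, here is a minimal Python sketch (added to these notes as an illustration, not taken from the textbook). It convolves a step edge in luminance with a simple center-surround weighting, the kind of profile built from lateral inhibition. The overshoot and undershoot at the border are the same effect that makes a black/white boundary look more prominent and that produces Mach bands.

    import numpy as np

    # A 1-D luminance profile: dark region (20) next to a bright region (80).
    luminance = np.array([20.0] * 10 + [80.0] * 10)

    # A toy center-surround receptive field: excitatory center, inhibitory surround.
    # The weights sum to zero, so uniform regions give no differential response.
    center_surround = np.array([-0.25, -0.25, 1.0, -0.25, -0.25])

    # Convolve; 'same' keeps the output aligned with the input positions.
    response = np.convolve(luminance, center_surround, mode="same")

    # Away from the luminance edge the response is ~0 (ignoring the zero-padding
    # artifacts at the very ends of the array); right at the border there is a
    # negative dip on the dark side and a positive peak on the bright side,
    # i.e. the edge is exaggerated (a Mach-band-like overshoot).
    for x, (lum, r) in enumerate(zip(luminance, response)):
        print(f"position {x:2d}  luminance {lum:5.1f}  response {r:6.1f}")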

4. Why can we not perceive every photon of light that arrives at the retina in near-darkness?

Answer: We have two major classes of photoreceptor: rods and cones, with about 120 million rods and about 5 million cones. Rods are extremely sensitive and can be triggered by a very small number of photons, so rods are what operate in near-darkness, at very low light levels. Cones, on the other hand, are responsible for color vision and are much less sensitive to low light. We cannot perceive every photon because in near-darkness the cones are not in play: what usually appears in color is no longer salient, so those photons are not processed. This explains why colors cannot be seen at low light levels: only one type of photoreceptor cell is active. Studies have also shown that an individual rod can respond to a single photon of light, because rods are so sensitive; a large number of photons does not need to enter the eye for processing to occur. When we enter a dark room, the eyes first adapt by opening up the iris to allow more light in. Over a period of about 30 minutes, further chemical adaptations make the rods sensitive to light at about a 10,000th of the level needed for the cones to work. After this time we see much better in the dark, but we have very little color vision; this is known as scotopic vision. Rod sensitivity is also shifted toward shorter wavelengths compared with daylight vision, which accounts for the growing apparent brightness of green leaves in twilight.

5. Explain how the Contrast Sensitivity Function is defined, and its significance for vision.

Answer: Contrast sensitivity scores obtained for each of the sine-wave gratings examined are plotted as a function of target spatial frequency, yielding the contrast sensitivity function (CSF). The CSF is used to characterize how well a given individual can see, as opposed to less informative methods like the eye chart or more technical and expensive techniques.
• Contrast sensitivity is a measure of the ability to discern between luminances of different levels in a static image.
[Figure 1: a grid of sine-wave gratings; contrast amplitude varies on the vertical axis (less contrast toward the top, more toward the bottom) and spatial frequency on the horizontal axis (wide, medium, narrow).]
• In Figure 1, the contrast amplitude depends only on the vertical coordinate, while the spatial frequency depends on the horizontal coordinate. Observe that at medium frequencies you need less contrast than at wide or narrow frequencies to detect the grating, which traces out the CSF curve.
• Traditionally, eye charts have been used to measure visual acuity. This method, however, does not adequately describe the spatial visual abilities of a given individual.
• This is because visual spatial processing is organized as a series of parallel but independent channels in the nervous system, each "tuned" to targets of a different size.
[Figure 2: a CSF curve in which each dotted-line hump corresponds to a single channel within the overall curve.]

• Vision research has clearly demonstrated that the capacity to detect and identify spatial form varies widely as a function of target size, contrast, and spatial orientation.
• As a consequence, a simple assessment of visual acuity often does not predict an individual's ability to detect objects of larger size.
• Contrast sensitivity testing complements and extends the assessment of visual function provided by simple acuity tests. At the cost of more complex and time-consuming procedures, contrast sensitivity measurements yield information about an individual's ability to see low-contrast targets over an extended range of target sizes (and orientations).
• Contrast sensitivity tests use sine-wave gratings as targets instead of the letters typically used in tests of acuity. Sine-wave gratings possess useful mathematical properties, and researchers have discovered that early stages of visual processing are optimally "tuned" to such targets.
• A contrast sensitivity assessment procedure consists of presenting the observer with a sine-wave grating target of a given spatial frequency (i.e., the number of sinusoidal luminance cycles per degree of visual angle). The contrast of the target grating is then varied while the observer's contrast detection threshold is determined. Typically, contrast thresholds of this sort are collected using vertically oriented sine-wave gratings varying in spatial frequency from 0.5 (very wide) to 32 (very narrow) cycles per degree of visual angle.
• Because high levels of visual sensitivity for spatial form are associated with low contrast thresholds (upwards in Figure 1), a reciprocal measure (1/threshold), termed the contrast sensitivity score, is computed. The contrast sensitivity scores obtained for each of the sine-wave gratings examined are then plotted as a function of target spatial frequency, yielding the contrast sensitivity function (CSF). Some typical CSFs for different age groups are depicted in Figure 3 below. Note the characteristic inverted-U shape of the CSF and its logarithmic axes.

6. Explain, with a diagram or two, how having looked at a grating of high contrast for a long time (pattern adaptation) changes your perception of similar test gratings, or ones slightly different in frequency or orientation.

Answer: The neurons in a specific visual channel ("visual channels," Figure 3) become desensitized to the grating pattern they are being adapted to. This means that the channel (and other channels with similar spatial frequency or orientation) requires a higher contrast ("differing contrast," Figure 2) in order to be stimulated again. [Page 89, "Effect of adaptation on the contrast sensitivity function."]
• When a subject is adapted to a grating of a certain contrast for a long time, the subject's perception of other similar test gratings changes (imagine taking a square chunk out of Figure 1 from question 5; it will have its own specific spatial frequency and contrast).
• Blakemore and Campbell ('69) conducted an experiment in which they adapted subjects to particular spatial frequencies (for this example: 7 cycles/degree). The subjects' pre-adaptation contrast sensitivity functions looked something like Figure 3, with the hump that is typical of normal CSFs. The CSF has a hump because mid-range spatial frequencies can be seen at lower contrasts (like the middle of Figure 1) than the wide or narrow spatial frequencies (refer to Figure 2).

[Figure 2: sine-wave gratings with contrast amplitude on the vertical axis (less contrast toward the top, more toward the bottom) and spatial frequency on the horizontal axis (wide, medium, narrow).]
• The subjects' post-adaptation CSFs looked like Figure 4, with a slight bow (meaning lower contrast sensitivity) in the "7 cycles/degree" channel (refer to Figure 3).
[Figure 3 and Figure 4: pre-adaptation and post-adaptation CSF curves.]
• What is this channel, and why is contrast sensitivity lower in this channel now?
• The simplest way to explain the dip in this specific channel is to think of the contrast sensitivity function as an "envelope." This envelope marks the boundary of the individual sensitivities of a set of component channels (where each channel's sensitivity begins and ends, i.e., Figure 5).
[Figure 5: the component channels lying under the CSF envelope.]
• The CSF curve of Figure 3 is the envelope that constrains the individual channels (the narrower, dotted-line humps).
• Each channel is a population of visual neurons tuned to receive only a narrow range of spatial frequencies (remember, for our example we used 7 cycles/degree).
• As subjects become adapted to specific test gratings, their CSF becomes altered, as in Figure 4. The dip in the curve means that in order to see that specific spatial frequency (7 cycles/degree), that region, and a small portion of the channels near the adapted channel, now needs greater contrast to be seen accurately. This is because adaptation to the grating has made the corresponding channels less sensitive to that particular grating and to gratings closely similar to it.

7. Review the evidence that particular areas of the cortex are specialized for representing and recognizing faces.

Answer: (Note: I think the question asks you to evaluate the evidence, not just state it; below I mostly state the evidence, so you can review and evaluate it yourselves. Some of the information is from the class presentation on faces, so you can look at that for more detail, though it may be more than you need to know.) First, Charles Gross did experiments on monkeys using microelectrodes which showed that when monkeys saw human and monkey faces there was activity across the inferior temporal cortex, but this was not seen when they saw other objects. Then we have studies by Nancy Kanwisher, who was one of the first people to test the fusiform gyrus for face recognition. Using fMRI, she found that a specific region of cells in the right fusiform gyrus responds to faces, now known as the fusiform face area (FFA). She showed this by testing faces vs. objects, finding higher activity for faces than for objects in the FFA. She also tested faces vs. scrambled faces to show that the FFA was not merely responding to features but to whole faces; the results supported this, with more FFA activity for intact faces than for scrambled ones. She then tested partial faces vs. hands to check that the area was not simply responding to body parts; again there was more FFA activity for partial faces than for hands.

Other studies followed, like Gauthier et al., which challenged Kanwisher's findings: they believed the fusiform gyrus was related to expertise, not just to faces. They used "greebles," little creatures that varied along many dimensions and that people had to study until they became experts at recognizing them. They found that people who were greeble experts showed activity in the fusiform gyrus when looking at these creatures. Objections to this finding are that greebles look a lot like faces, and that the results could not be replicated.

Some neuropsychological evidence for areas of the cortex being specialized for faces: some agnosia patients who cannot recognize certain objects can still recognize faces, while prosopagnosia patients, who cannot recognize faces, can still recognize objects; prosopagnosia is associated with damage to the right fusiform gyrus. This suggests a double dissociation between object and face recognition, which in turn suggests that faces are processed by separate brain areas, and so there may be special areas for recognizing faces.

8. Discuss and evaluate the "inner screen", "human interface" and "isomorphic model" conceptions of the perceptual representation.

Answer: The inner screen theory of seeing proposes that there is a set of brain cells in the cortex whose level of activity directly represents the brightness of the points of any scene the person is looking at. The inner screen is essentially a photograph of what the observer is looking at: the neurons in the cortex form a sheet in which each cell is inactive, active, or active to some degree. Completely inactive cells signal black and completely active ones signal white; the rest signal the shades of grey in between. This sounds like a plausible idea: like a TV, dot by dot, our brain cells mirror the outside world. The image is a kind of representation of the outside world, so there would be a direct relationship between our conscious visual experience and activity in our brain cells. However, based on research we know that what comes closest to an inner screen is the topographic cortical map in the striate cortex, where the image is split in half (the left hemisphere receives input from the right visual field and the right hemisphere from the left visual field), but the maps are upside down and distorted. The foveal area (where the observer is looking directly) takes up far more cortical space on the map than peripheral areas do. The inner screen theory does not help us understand depth, motion, texture, or the spatial relationships of the objects in the image. Only points in the scene are represented (not even really shapes), but there is much more in our environment that must be encoded and processed in order to produce the subjective experience we all have.

How does the observer recognize objects? The issue is how, and what, will do that processing of analysis and recognition. The theory also has the infinite regress problem: we would need a homunculus (a little man inside our brain) to make sense of the experience, and without that addition we would not be able to have a subjective experience (Frisby & Stone).

The human interface model (Hoffman) claims that a faithful depiction of the world by perception is unlikely on evolutionary grounds. According to Hoffman, an organism's perceptions are a "user interface" between the organism and the objective world. Although Hoffman offers no physiological evidence, only some evolutionary games, he argues that perceptions are only functional representations of the world.

Just like a computer interface, perception represents the environment but does not necessarily resemble it. Like icons on a computer interface, our perceptions make the world "user friendly" for our species without burdening us with its complexities. Each species has its own interface, evolved to guide adaptive behavior specifically for that species. Hoffman denies any lawful correspondence between our perception and reality. Evolutionary competition between and within species exploits the strengths and limitations of their interfaces. Hoffman argues that supernormal stimuli, such as the beer bottle for the jewel beetle, are proof that perceptions do not depict the objective world accurately or isomorphically but in a simplified way. According to the human interface theory, fitness, and not accuracy, is the objective function optimized by evolution.

The isomorphic model essentially claims that our perception is a depiction of our environment that is not necessarily identical to it, but that there is a lawful correspondence between what we perceive and the objective world. (According to Prof. MacLeod this may be closest to the truth.) Our phenomenal perception generates a sort of model of the world; like any model, it has mappings of the environment that relate directly to what it is representing. The fact that we are able to walk around without bumping into things supports the idea of isomorphism, a systematic, lawful correspondence between our perception and the objective world.

9. Explain the packing problem created by the orientation-selective nature of single cells in primary visual cortex, and discuss how the visual system solves it.

Answer: The packing problem refers to the fact that the brain tries to represent the information extracted from the retinal image in a collection of brain maps, but the striate cortex does not have enough room to represent every color or orientation at every location. To make sense of our visual world we need to be able to see every orientation irrespective of where it falls on the retina, meaning we need to represent every possible orientation for every retinal location. Each column in the striate cortex represents a single orientation: the cells in a given column all share the same orientation preference. The cortex must therefore try to pack all orientations into each cortical location that represents a given part of the retina. One might think this could be solved by representing different orientations at different vertical depths within each cortical column, but this does not seem to be the case, because of the way single orientations are represented. The brain tries to cover every value of orientation at every cortical location, and it always tries to maximize this coverage of orientation. So what are the solutions to the packing problem? Several explanations have been put forward. One relates the problem to the different parameters: the horizontal and vertical positions x and y, and orientation. The challenge with three parameters is to fit them into the two dimensions of the cortex. A first option is to ignore one of the parameters, but then we lose that information. A second is to use each point in cortex to represent all combinations of the parameters; in this way we get maximum coverage. A third is to treat two parameters with high priority and one with lower priority.
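To picture how a third parameter such as orientation can be laid out across the two-dimensional cortical sheet, here is a toy Python sketch (added to these notes as an illustration; the layout rule is an idealized "pinwheel" arrangement, not a claim about the actual cortical wiring). Around a pinwheel singularity the preferred orientation is taken to be half the polar angle of the position, so every orientation from 0 to 180 degrees is represented within any small neighbourhood of the singularity, one way of getting good local coverage.

    import numpy as np

    def pinwheel_orientation(x, y, x0=0.0, y0=0.0, sign=+1):
        """Toy orientation-preference map: the preferred orientation (degrees,
        0-180) is half the polar angle around a singularity at (x0, y0).
        sign=+1 or -1 gives pinwheels of opposite topological index."""
        angle = np.arctan2(y - y0, x - x0)              # -pi .. pi
        return (sign * np.degrees(angle) / 2.0) % 180.0

    # Sample a small patch of "cortex" around the singularity.
    xs, ys = np.meshgrid(np.linspace(-1, 1, 9), np.linspace(-1, 1, 9))
    prefs = pinwheel_orientation(xs, ys)

    # Coverage check: even in this tiny neighbourhood, preferred orientations
    # span essentially the whole 0-180 degree range.
    print("min preference:", prefs.min(), "deg")
    print("max preference:", prefs.max(), "deg")
    print("distinct 20-degree orientation bins covered:",
          len(np.unique((prefs // 20).astype(int))))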
There is disagreement about whether the brain is actually trying to solve the packing problem, or whether the maps are simply the result of trying to minimize the wiring length of the neurons. However, the findings seem to suggest that the brain does indeed try to maximize coverage.

We know that one consequence of the packing problem is singularities with respect to orientation. The same thing happens with fingerprints, and a theory has been proposed relating the striate cortex to this fingerprint principle in terms of singularities. Tal and Schwartz found two things: first, for any pair of singularities it is possible to draw a smooth line between them along which the orientation preference is the same; second, two singularities joined in this way have topological indices with different signs. This is called the sign principle, which states that neighboring singularities must have topological indices of opposite sign. To reduce the packing problem we can ensure that related parameters are packed together in an efficient manner. Another suggestion involves offloading information to other parts of the brain: the striate cortex would offload to the extrastriate areas. This seems like a good idea, but it turns out that extrastriate cortex either represents more than two parameters or extracts new information from its inputs, causing a new packing problem of its own. Marr's account of the packing problem has to be understood at the hardware level, as well as at the other levels: the computational and algorithmic levels. Hardware problems arise in terms of how to implement seeing algorithms in ways that cope with the constraints imposed by nerve cells. The packing problem is a hardware problem which the brain solves by having brain maps organize cells so as to maximize coverage and continuity. (This is from chapter 10, Seeing with Brain Maps.)

10. What can we learn about perception from the study of adults who recover vision after prolonged blindness?

Answer: Mike May is the case we will base our knowledge on. The key point to take away from his case is that vision needs to be interpreted by the mind; it is not a simple view/perceive process. Vision needs to be learned, in a sense, by the brain (specifically the occipital lobe).
• After Mike recovered his vision after 40+ years, he was able to see the world, but could not visually comprehend it in the way that most people do:
  o Mike has a severe neural resolution loss, improving slowly if at all.
  o He doesn't see 2D subjective contours.
  o 2D geometrical illusions are present.
  o Both perspective and shape from shading are ineffective for depth perception.
  o But motion cues are effective.
  o Recognizing faces and common objects is a challenge.
  o We think of Mike as having "the eye of the artist," inhabiting a world of abstract 2-dimensional shapes and colors.
  o This may be why he now uses his vision, as he puts it, "mainly for entertainment."
• Mike May is a perfect example for this question. He was blinded in a chemical accident at the age of three; he remained sensitive to light, but could not see forms. At the age of 46, his sight was restored through a new procedure, corneal (epithelial) stem cell replacement.
• The main thing we learned from Mike's recovery is that vision is not an independent sense: we have to learn how to use our vision in relation to our world; it does not simply come naturally.

• In order to develop normal vision, a person needs almost constant visual input from the environment. This helps them learn concepts like depth perception, and how to distinguish shapes, faces, images, etc. Since Mike's impairment happened so early in his life, his brain was never trained to see the way that normally sighted people do. His occipital cortex lost sensitivity, and when his vision was restored so much later in life, that part of his cortex did not function the way it typically would in other people. This led to the interesting consequences listed above in the short answer.
• LOOK AT THE POWERPOINT ON THE COURSE WEBSITE IF ANY OF THAT WAS UNCLEAR: THE SHORT ANSWER SHOULD BE ENOUGH. The information below will help if you want to glance through the powerpoint; it is basically a summary of it.
• Mike interprets certain visual situations in his environment differently than most people. We will discuss the following categories in relation to this idea:
  1. Resolution
  2. 2D and 3D form
  3. Motion
  4. Object/face recognition
1) RESOLUTION
• Resolution limit < 2 cycles per degree (in terms of the CSF), despite good optics [see slide 13].
• No improvement over time.
2) 2D AND 3D FORM
• MM could identify simple shapes, but not shapes defined by illusory contours.
• Mike can identify simple 2D forms (100% correct), but "constructive" 2D perception is harder.
• He could NOT recognize a stationary cube from any angle: he saw a "square with lines."
• He can exploit motion to construct 3D structure.
• He is sensitive (100% correct) to occlusion, but shading gave no automatic impression of depth: the circle was seen as a flat disc with non-uniform surface lightness [see slide 19].
• Fails with:
  o Shape from shading
  o Perspective [slide 20]
3) MOTION
• Could make sense of:
  o Point-light walker
  o Rotational Glass patterns
  o Structure from motion (100% correct) [slide 25]
• Sophisticated processing of MOTION:
  o Can see form from motion (KDE cube)
  o Saw depth in face masks by rocking his head
  o Could see Johansson's walking man
  o Can play catch

  o Skiing: vision now helps!
• SB (Ackroyd et al.): "His only signs of appreciation were to moving objects, particularly the pigeons in Trafalgar Square… He clearly enjoyed … watching … the movement of other cars on the road … He spotted a speeder coming up very fast behind us."
• Virgil (Sacks): "when [the gorilla] finally came into the open he thought that, though it moved differently, it looked just like a large man."
4) OBJECT AND FACE RECOGNITION
Poor…
• Objects:
  o MM 25% correct
  o control 100%
• People:
  o Gender: MM 70% correct, control 100%
  o Expression (happy/sad/neutral): MM 61% correct, control 100%
• These dissociations between form and motion tasks were consistent with the size and activation of visual areas measured using fMRI.
• V1 and (especially) extrastriate areas in the temporal stream, thought to be responsible for form processing, were small and showed low activity levels.
• The middle temporal (MT) complex, thought to be responsible for motion processing, was normal in both size and activation.
• Mike still has no intuitive grasp of depth perception. As people walk away from him, he perceives them as literally shrinking in size.
• He has problems distinguishing male from female faces, and recognizing emotional expressions on unfamiliar faces.
"The eye of the artist" or "The Flat World"
• He matches projective shapes, not real shapes; e.g., he is NOT susceptible to Shepard's illusion: the tables have the SAME projective shape, and to MM they look the same [slide 34].
• Mike correctly sees the diamonds as similar in lightness, and responds photometrically to illumination and shadow in pictures, seeing shadows as dark things [slide 43].
Phenomenal regression to the real object
• Normally sighted subjects cannot retrieve any aspect of experience that is a function of retinal illuminance or projected size. But MM has these (and nothing else) available to undirected introspection. In this sense he is free, unfortunately, of the good 'illusions' on which normal vision is founded.
• One example of the resulting difficulties: shadows at the edges of sidewalks appeared to him as black ridges that could present a potential hazard when walking.
• Why can't we see and judge what is present at the sensory input, as MM can?
• William James wrote: "Pure sensations can only be realised in the earliest days of life. They are all but impossible to adults with memories and stores of association acquired."

For MM (though NOT necessarily for a newborn: Granrud), James may be right, perhaps because the irrepressible interpretative processes of the normally sighted brain are not involved.
• For the normally sighted, interpretation is not an integument that can be peeled away to reveal sensory bedrock: it penetrates all our consciousness, presumably thanks to the continuously bidirectional flow of information through the visual system.
• So we have no 'pure sensations'… but those are all that MM has.
• The visual process as a causal chain: in the normal visual system, each neural representation depends on the later ones [slide 46].
• MM was not sensitive to perspective cues; yet he WAS susceptible to the Müller-Lyer and related illusions [slide 48].
• Aesthetics:
  o Color: variety and vividness were new and impressive.
  o Bodies: innate sign stimuli vs. interest based on association.
  o Dust, waves, fireworks: meaning confers no aesthetic advantage.

11. Discuss how the theoretical framework of Bayes' Theorem can be applied to our selection among possible interpretations of ambiguous sensory data, with one or two specific examples.

Answer: The Bayesian framework allows us to compute the information we need based on our experience, our knowledge, and the information we are given (the retinal image, in the case of vision). Using the Bayesian framework we can make a rational adjustment to the subjective likelihood of something in light of our experience or prior knowledge of the world: we can mathematically calculate a new assessment based on the likelihood of the ambiguous information we are given and our experience of the world. In vision, the Bayesian framework identifies the most probable 3D interpretation based on the 2D retinal image and the prior probabilities of the possible 3D scenes. Bayes' framework is the best available mathematical tool for estimating what our visual system (and our perception generally) must be doing when it decides between interpretations of ambiguous incoming information.

Example: A person gets a positive test result in a cancer screening, and we want to know the probability that this person actually has cancer. We have the prior probability (the proportion of people who have this cancer in the general population), we get new evidence (a positive cancer test), and we work out the posterior probability (the chance of having cancer given the positive test and the base rate of the cancer in the population). We know that:
- only 1 in 1000 people get this type of cancer, so the prior probability P(H) = 0.001;
- the screening detects cancer when it is present 100% of the time, so the likelihood P(I|H) = 1;
- the test also has a 10% false-positive rate, so P(I|not H) = 0.1.
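As a quick cross-check of the calculation worked through next, here is a minimal Python sketch (added to these notes as an illustration) that applies Bayes' theorem to these numbers, using the law of total probability to get the overall probability of a positive test.

    # Bayes' theorem for the cancer-screening example:
    #   P(H | I) = P(I | H) * P(H) / P(I),
    # where P(I) = P(I | H) * P(H) + P(I | not H) * P(not H).

    p_h = 0.001            # prior: 1 in 1000 people have the cancer
    p_i_given_h = 1.0      # the test always detects cancer when present
    p_i_given_not_h = 0.1  # 10% false-positive rate

    # Total probability of a positive test result.
    p_i = p_i_given_h * p_h + p_i_given_not_h * (1.0 - p_h)

    # Posterior probability of cancer given a positive result.
    p_h_given_i = p_i_given_h * p_h / p_i

    print(f"P(positive test) = {p_i:.4f}")              # about 0.101
    print(f"P(cancer | positive) = {p_h_given_i:.4f}")  # about 0.0099, i.e. roughly 1%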

- Posterior probability: P(H|I) = [P(I|H) × P(H)] / P(I), where P(I) = P(I|H)P(H) + P(I|not H)P(not H) = (1 × 0.001) + (0.1 × 0.999) ≈ 0.101, so P(H|I) = 0.001 / 0.101 ≈ 0.01, i.e. about 1%. The person receiving the positive cancer test now has roughly a 1 in 100 chance of having cancer; before the positive test result the chance was only 1 in 1000.

12. Show how the perception of shape from texture can be considered at each of Marr's 3 levels of analysis.

Answer: The amount of texture in an image can vary from place to place in a picture, and how texture varies depends on the slant of the surface. Texture density increases as we move from the foreground to the background of the image, and this density is related to the surface slant. If the amount of texture is the same everywhere on a surface it is said to be homogeneous; this can serve as a constraint for solving the problem of estimating the orientation of the surface. To proceed we need perspective projection, which can be compared to a pinhole camera. Perspective projection gives us texture gradients, that is, gradual changes in texture density across the image. A surface can rotate around lines, or axes, and its orientation is specified by rotation about these axes. These axes give the two parameters p (rotation about a vertical axis) and q (rotation about a horizontal axis). Marr's computational framework can be applied to explain what happens when we see shape from texture. The computational level states the general nature of the problem to be solved: in this case, finding the three-dimensional orientation of a textured surface. At this level we also identify the constraints that might be useful for solving the problem, usually derived from examining the information available. Here the constraint is that the amount of surface texture per unit area is about the same in all surface regions (surface texture is homogeneous), and this constraint can be exploited by measuring and then comparing texture density in different image regions.

Next, the algorithmic level, the level of representation, specifies three things: the nature of the input data, a particular method or algorithm, and the nature of the output produced by the algorithm. Here the input data take the form of image texture, and the output consists of the two parameters p and q, which together specify the orientation of the surface. The algorithm developed in this case consists of measuring image texture density in two pairs of image regions, one pair around the horizontal axis and one around the vertical axis. Ratios of texture density are computed, and a graph can then be used as a look-up table.

Lastly, the hardware level describes how the algorithm is executed in hardware. This could be neurons in the brain, the electronic components of a computer, or something as simple as pencil and paper or a calculator, as in this case of finding shape from texture. An important point about this level of analysis is that there is a degree of modularity, which means that the inner workings of each level are to some extent independent of the inner workings of the other levels. (This is from chapter 2, Seeing Shape from Texture.)

13. Outline how Marr and Hildreth proposed that edges are detected in noisy images, showing how it is consistent with retinal receptive fields.

Answer: (Page 118.) The particular shape of the weighting distribution has to optimize two conflicting goals: smoothing away noise while not disturbing too greatly where the edges are to be found in the convolved image.

There is a theorem stating that the shape of the weighting distribution that achieves an optimal trade-off between these two goals is bell-shaped; the technical name for a distribution of this shape is a Gaussian. The pixels nearest the center of the receptive field have the largest weights and thus the most influence; the weights then fall off smoothly and gradually to zero at the boundary of the field. This pattern of weights matches the 3D view of a retinal receptive field, with weight strength again plotted on the vertical axis.

14. Some neurophysiological results suggest an increasing diversity and selectivity of sensory neurons at successively more central stages of processing. Do you agree with this characterization, and do you think it makes functional sense?

Answer: Striate cortex is concerned with a limited patch of retina and is the first area of cortex the signals from the retina reach. It contains simple cells with simple excitatory/inhibitory receptive fields that respond to spots of light and are typically best excited by bar-shaped patterns. Essentially, this area of cortex has cells that respond to very basic stimuli and are very specific in their firing.

V2 receives its input primarily from V1. Its cells are tuned to simple properties such as orientation, spatial frequency, and color, but the responses of many V2 neurons are also modulated by more complex properties, such as the orientation of illusory contours and whether the stimulus is part of the figure or the ground. This area shows a division of labor, with subregions specialized for color, orientation, brightness, etc. It is therefore similar to V1, with just a few slightly more complex functions.

V4 receives most of its input from V2 and is considered the "color area." However, these cells respond to simple shapes and objects in addition to color. This area marks the beginning of the shift away from retinal location as the primary index toward feature-based indices like color and shape; it is less of a retinotopic map and more of an object-based area.

V5 also receives most of its input from V2 and is considered the "motion area." However, these cells do not respond only to motion; they also encode properties such as stereo disparity. Again, this area is less tied to the image on the retina and more to non-spatial parameters like motion and stereo disparity.

Grandmother cells are hypothetical high-level cells, each of which would represent a single object (no matter how the object is viewed, the cell would fire to represent that specific object being seen). Evidence has shown that cells near a cell of this sort do not respond to the same object but do respond to objects in the same 'category'. So it does appear that as you go to higher, more central levels of processing, the cells you encounter are much more selective about what they will fire for (especially if you entertain the idea of the hierarchy culminating in a grandmother cell firing for only one object). I would argue that this makes functional sense: if higher levels of processing were not more object-selective and simply maintained the retinotopy of the lower visual areas, we would have a hard time recognizing anything, though we would still probably be able to navigate through the environment.

This is just my opinion though; you should probably use the above evidence and form an opinion of your own so we don't all sound the same!

15. Signal Detection Theory provides a theoretical framework for understanding the sensitivity of a human subject. Sensitivity is mainly related to the parameter d′; explain the role of the second parameter, the bias or criterion (beta).

Answer:
- Psychometric function
  A. Signal detection theory helps in understanding many things in our world where there is a certain amount of randomness; events are not rigidly determined, so you have to make decisions under uncertainty about how to react to any situation.
- P. 282: threshold of vision
  A. Diagram a: luminance less than 3 vs. greater than 3.
- Psychometric function
  A. P. 284: the presence of randomness or noise makes the psychometric function continuous rather than an abrupt step.
- Photoreceptors
  A. Random molecular movement can cause the photopigment molecule to undergo a chemical change:
    o a change in stereo geometry from 11-cis to all-trans,
    o which happens infrequently even in complete darkness.
  B. Dispersion in the electrical signal (figure b on page 286).
- Distributions for complete darkness (figure b on page 287):
  A. Noise alone: red graph.
  B. The criterion is set at the dashed line.
  C. Signal distribution: green graph (when a signal has been presented).
  D. Behavioral false alarms are shown in red (the region to the right of the dotted criterion line): the subject generates false alarms from noise alone.
- Flow diagram of the decision-making process: figure 12.7 on page 287.
- Figure 12.8 on page 288, first graph: the signal is very weak.
  A. The curve to the right is the green (signal) line.
  B. The criterion is the dotted line.
  C. The probability of saying "yes" is zero.
  Second graph of 12.8; third graph of 12.8:
  A. The distribution for this particular signal strength is symmetrical about the criterion.
  B. This is the realistic scenario.

16. What are the minimal requirements to make a directionally selective neuron, and what is known about how such cells can be created at the retinal level?

Answer: To understand this question it would be good to understand figures 14.2, 14.3, 14.4, and 14.7. It is covered in the text between pages 327-333 if you would like to read it in the author's words.

A. The minimal requirements for creating a directionally selective neuron involve exploiting changes within the retinal image over successive instants, as visual information reaches the brain as a continuous stream.

It appears that in order to create a directionally selective neuron you must use inputs from multiple receptors, combining inhibitory and excitatory signals systematically so as to represent changes in retinal position over time.

B. A basic model is called the Reichardt detector. Figures 14.2 and 14.3 give a basic visual representation of this method: the movement of a small disc is detected by what is called an AND gate. The AND gate is a neuron with inputs from two photoreceptors, one of which is delayed by a specific time interval (equal to the time it takes the disc to stimulate the first receptor and then the second), and it fires only when the two photoreceptors are stimulated in the correct sequence and with the correct timing. An important stipulation of this model is that it only works for one specific direction and one specific time interval between photoreceptor excitations.

C. A slightly more complicated model is the joint excitation model; figure 14.4a gives a good representation of it. The only real difference from the Reichardt detector is that it requires two simultaneous excitatory inputs to the AND gate for the gate to be activated. So you have three photoreceptors, and the correct excitation and time interval required by the basic Reichardt detector must hold for both pairs (between receptors 1 and 2, and between 2 and 3); the motion must be in the right direction with the right time interval for both pairs in order for the AND gate (which is just a way of saying a cell that receives inputs from these cells) to fire.

D. A slightly more complicated addition is the inhibitory veto method, which is now generally accepted as our best guess of what is happening; figure 14.4b is a good representation of it. This model adds what is called a NAND gate: there must be a positive input present and a negative input absent for excitation. These gates fire only if the excitatory signal is present (the motion is in the correct direction with the correct time interval) and the inhibitory signal is absent (the inhibitory signal is sent when the direction is opposite to the excitatory direction with the same time interval).

E. These directionally selective cells are thought to be implemented in the visual system using bipolar cells, retinal ganglion cells, and starburst amacrine cells that together implement a kind of inhibitory veto. Figure 14.7 is the best representation of what is thought to be going on at the level of the retina.

a. Preferred direction of the cell. What this figure shows (you should really look at it while reading this, or this will make little sense) is that the bipolar cells (which receive their input from photoreceptors and are labeled B) pass along excitatory signals (white arrows) to the retinal ganglion cell (labeled DR, which acts as the AND/NAND gate) when they receive signals from their preferred direction. The starburst cell (S) is what applies the inhibitory veto, but when the motion is in the preferred direction the inhibition is too weak and arrives too late, so the retinal ganglion cell receives an excitatory signal indicating that the motion was in the preferred direction.

b. Null direction of the cell.
This figure shows that when the motion is in the opposite direction to the one the cell prefers, the starburst cell (S) sends strong inhibitory signals (black arrows) to both the bipolar cell (B) and the retinal ganglion cell (DR), preventing excitation from passing from the bipolar cells to the retinal ganglion cell.

17. What is the "aperture problem" in motion perception, and how can it be solved?

Answer: In a visual scene, movement is seen by many small center-surround receptive fields. These receptive fields work as apertures through which visual cells see movement. The problem is that the movement seen through each aperture is ambiguous: objects moving in different directions at different speeds may look the same through the aperture (figures 14.18 and 14.19 on page 342 of the book).

Vectors are representations of velocity, and velocity is the direction and speed of an object's movement. So in solving the aperture problem we are finding the real velocity (the global motion vector) of the object in view by using the several measured velocities at the apertures (the local motion vectors).

The solution is the intersection of constraints: combining the perceived movement through multiple apertures (local motion vectors) in order to find the global motion vector (the actual movement).

A diamond moving across a receptive field in either the rightward or the downward direction would appear, through that aperture, to move down and to the right, perpendicular to the edge of the diamond. This apparent motion defines a local motion vector for that receptive field. A line perpendicular to this local motion vector (the constraint line) connects the ends of the various possible global motion vectors (figure 14.20 on page 343 of the book).

This information from a single receptive field narrows down the possible global motion vectors, but not enough to find the actual motion. To do that we need to combine the inputs of multiple closely spaced receptive fields.

We use the local motion vectors from two apertures to find the constraint lines of these vectors. By putting the local motion vectors together, we find where the constraint lines intersect. The global motion vector that ends at the intersection of these constraint lines is then taken as the global motion vector of the object in our visual field (figure 14.21 on page 344 of the book).

The intersection-of-constraints solution to the aperture problem therefore takes the perceived movement through multiple closely spaced receptive fields (apertures) and puts them together to find the most likely global movement of the object given all of the smaller perceived movements through each receptive field. Through this process we find both the direction and the speed of movement of the object in question (a short numerical sketch of this construction is given a little further below).

18. How is the encoding of contrast by the retina helpful for lightness constancy? And in what way is the encoding of contrast an incomplete account of lightness perception?

Answer: According to the Land-Horn theory of lightness perception, the encoding of contrast by the retina is helpful for lightness constancy because it encodes ratios of lightness in the visual image rather than absolute lightness.

In the process of convolution (chapter 6, and chapter 16, page 379, figure 16.7), the retina is not so much concerned with the actual illumination of any part of a visual scene as with the sharp differences in lightness caused by edges. Thus the retina is occupied with the lightness of one object relative to others, rather than the absolute lightness of that object.
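Returning briefly to the intersection-of-constraints solution from question 17: the sketch below (added to these notes as an illustration, with made-up numbers) treats each aperture measurement as a constraint line n · v = s, where n is the unit vector perpendicular to the edge seen in that aperture and s is the speed measured along that normal. Two such constraints from differently oriented edges pin down the 2-D global velocity v.

    import numpy as np

    def global_velocity(normals, normal_speeds):
        """Solve the intersection-of-constraints problem.

        Each aperture contributes one equation  n_i . v = s_i , where n_i is the
        unit normal of the edge seen through aperture i and s_i is the speed
        measured along that normal.  Two independent constraints determine the
        2-D global velocity v.
        """
        A = np.asarray(normals, dtype=float)        # one unit normal per row
        b = np.asarray(normal_speeds, dtype=float)  # measured normal speeds
        return np.linalg.solve(A, b)

    # Hypothetical example: the true global motion is 3 units/s to the right.
    true_v = np.array([3.0, 0.0])

    # Two edges at different orientations; each aperture only reports the
    # component of motion along its own edge normal (the aperture problem).
    n1 = np.array([np.cos(np.radians(30)), np.sin(np.radians(30))])
    n2 = np.array([np.cos(np.radians(120)), np.sin(np.radians(120))])
    s1, s2 = n1 @ true_v, n2 @ true_v   # what each aperture would measure

    recovered = global_velocity([n1, n2], [s1, s2])
    print("normal speeds seen through the apertures:", round(s1, 3), round(s2, 3))
    print("recovered global velocity:", recovered)   # approximately [3, 0]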

In essence, the retina is able to discount the illuminant of a visual scene and pay attention only to contrast changes, which are not affected very much by changes in illumination. This allows us to perceive the lightness of an object as unchanging, because the retina attends mainly to the relative lightness produced by contrast changes within and between objects, rather than to changes in the overall illumination of the scene.

The encoding of contrast is nevertheless an incomplete account of lightness perception because of the scaling problem. Although the retina can support lightness constancy through the relative lightness of each object versus the others, this does not fully explain why lightness is constant: since everything is relative, there is no starting point against which to compare the contrast changes.
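Here is a minimal Python sketch of these two points (added to these notes as an illustration, with made-up reflectance and illumination values): luminance ratios across edges are unchanged when the illumination changes, which is what makes a ratio code useful for constancy, but the ratios alone leave an overall scale factor undetermined, which is the scaling problem just described.

    import numpy as np

    # Reflectances of three adjacent surface patches (near-black, grey, near-white).
    reflectance = np.array([0.05, 0.20, 0.80])

    for illumination in (100.0, 1000.0):      # dim vs. bright illuminant
        luminance = reflectance * illumination
        # Ratios across the two edges between neighbouring patches.
        edge_ratios = luminance[1:] / luminance[:-1]
        print(f"illumination {illumination:6.0f}: luminances {luminance}, "
              f"edge ratios {edge_ratios}")

    # The edge ratios (4.0 and 4.0) are identical under both illuminants, so a
    # ratio code "discounts the illuminant".  But the ratios alone cannot say
    # whether the brightest patch is white paper in dim light or grey paper in
    # bright light: some anchoring rule (for example, calling the maximum
    # luminance "white") is needed to fix the overall scale.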

Though contrast differences stay the same, there is no area of the visual scene against which to compare the lightness of the rest of the scene. In other words, if all the retina cared about were ratios of lightness differences in an image, it could not achieve lightness constancy, because although the ratios would not change, there is nothing with which to scale those ratios. The proposed solution to this problem is the maximum luminance rule or the average luminance rule. The maximum luminance rule finds the part of the scene with the highest luminance, assumes that this region is white, and scales the rest of the scene relative to it. The average luminance rule finds the average luminance of all parts of the scene, sets this as a middle grey, and bases all lightness contrasts on this middle lightness.

19. Compare the explanation of Mach bands in terms of firing of retinal cells with explanations in terms of high level 3D representations in perception.

Answer:
a. Page 391: Ernst Mach was an Austrian sensory physiologist whose investigation of various visual phenomena led him to conclude that there must be processes of lateral inhibition operating within the visual system (center-surround units are built on the basis of lateral inhibition: an off surround can be said to laterally inhibit an on-center unit).
b. Possible explanations for Mach bands in terms of retinal cells:
  i. The Land-Horn type of lightness computation.
  ii. The bands may represent a failure of the lightness computation which is not intrinsic to the Land-Horn approach, but is instead the product of a rather poor implementation of this stratagem in the visual system. For example, it may be that the reconstitution network is not well designed to reconstitute from the edge information it receives at the boundaries of the grey stripes. That is, one side of the grey strip is associated with only on-center units having high outputs, whereas the other side is associated with only off-center units having high outputs. This may cause problems with reconstituting from the edges, with the blackness and whiteness arrays getting into a competitive struggle which ends with Mach bands being seen.
  iii. Bayes' Theorem: Jaramillo and Pearlmutter argued in 2006 that these bands are a necessary consequence of optimal inference. By this it is meant that the brain is constantly trying to guess or infer what is out there in the world, but it is hamstrung by the ubiquitous effects of noise. Fortunately, there are principled methods for dealing with noise, which gives rise to the idea of optimal inference.

c. Possible explanations for Mach bands in terms of high-level 3D representations in perception:
  i. Page 394: Yang and Purves have shown that various 2D lightness illusions can be explained in terms of the statistics of 2D grey-level images. Purves extended this general line of reasoning to argue that Mach bands may result from the statistics of 3D surfaces.
  ii. An object under natural lighting gives rise to peaks and troughs in image luminance that are a physical consequence of the object's shape. It has been proposed that these non-illusory bands underpin the illusory bands seen in Mach bands.
  iii. Unlike the 2D lightness illusions, no formal statistical analysis of the 3D effects exists, and these arguments remain speculative.

20. Describe the correspondence problem that arises in stereo vision, and explain Marr and Poggio's suggestion about how the visual system solves it.

Answer: Having two eyes that see slightly different images of a scene (stereo vision) helps our depth perception and 3-D vision. However, it raises the question of how we match these two slightly different images together in the brain to create a coherent image containing depth information. In other words, how can corresponding points in the left and right eyes' images be matched to create one coherent visual scene?

Marr and Poggio's system for solving the correspondence problem is based on a series of constraints that eventually eliminate all false matches between the images in the two eyes, leaving only the true matches.

First, two constraints, the compatibility constraint and the epipolar constraint, are used to establish all possible matches.
Compatibility constraint: initial matches need to have similar figural properties (i.e. contrast, color, etc.).
Epipolar constraint: this is based on the geometric possibility of the two images being matched. All possible matches for a point in one image fall on a line in the other image called the epipolar line.
After these two constraints are used to find all possible matches, the continuity and uniqueness constraints are used to narrow down the possibilities and find the depth of the actual image.
Continuity constraint: surfaces are generally smooth and continuous.
Uniqueness constraint: objects are generally opaque, and thus each point is usually found at only one depth at a time.
In the array of cells which codes for depth, all cells coding for the same depth in visual space are mutually excitatory, while all cells along each epipolar line are mutually inhibitory (figure 18.22 on page 439 and figure 18.23 on page 440).
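Below is a small Python sketch of this cooperative scheme (added to these notes as an illustration, in a simplified 1-D form with made-up images; it is a stand-in for the idea, not the textbook's exact iterative network). Candidate matches pair left-image and right-image features; matches at the same disparity support each other (continuity), while rival matches that share a left or right position inhibit each other (uniqueness), so the false matches are weeded out.

    import numpy as np

    # Toy 1-D "images": 1 marks a feature (a dot), 0 marks background.
    # The right image is the left image shifted one position to the right,
    # so the correct disparity for every dot is +1.
    left  = np.array([0, 1, 0, 1, 1, 0, 1, 0])
    right = np.array([0, 0, 1, 0, 1, 1, 0, 1])
    n = len(left)

    # Compatibility: a candidate match (i, j) pairs a feature at left
    # position i with a feature at right position j.
    candidates = [(i, j) for i in range(n) for j in range(n)
                  if left[i] == 1 and right[j] == 1]
    candidate_set = set(candidates)

    def continuity_support(i, j):
        """Excitatory support: nearby candidate matches with the same disparity."""
        return sum((i + k, j + k) in candidate_set for k in (-2, -1, 1, 2))

    # Cooperative selection (a simplified stand-in for the iterative network):
    # repeatedly accept the best-supported match, then let it "inhibit" every
    # rival match that shares its left or right position (uniqueness).
    remaining = list(candidates)
    accepted = []
    while remaining:
        best = max(remaining, key=lambda m: continuity_support(*m))
        accepted.append(best)
        remaining = [(i, j) for (i, j) in remaining
                     if i != best[0] and j != best[1]]

    for i, j in sorted(accepted):
        print(f"left {i} <-> right {j}   disparity {j - i}")
    # All surviving matches have disparity +1; the false matches had less
    # same-disparity support and lost out to inhibition from better-supported
    # rivals.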

From these constraints, inhibitory connections cancel out false matches, and excitatory connections strengthen the correct correspondences between the two images (figure 18.24 on page 441). This happens because each cell starts with a certain amount of activity according to whether or not it was activated under the initial constraints.

Then inhibitory units subtract some of this activity, while excitatory connections add to it, and any activity not above a threshold is cancelled. Eventually all the false matches are cancelled, leaving only the correct depth interpretation of the image.

21. Defend the statement that we are all partially color blind.

Answer:
a. Page 402: Edwin Land demonstrated that our ability to see color in a Mondrian figure is almost constant irrespective of the color of the light used to illuminate the figure (when a patch is illuminated with red light, its color does not appear to change much when the patch is instead illuminated with green light). However, if a patch is shown in isolation from the rest of the figure, then a change in illumination color does alter the perceived color of that patch. The perceived color of a patch therefore depends on its context: the visual system takes into account the relative intensities of light of different wavelengths reflected from different surfaces.
b. Page 405: the retina has three cone types (L: 584 nm, M: 533 nm, S: 420 nm), making humans trichromats. Since cone outputs are measured in physical units and color perceptions are not, equating the two is an example of a category error. It is thus important to bear in mind that the cones provide just the starting point for color vision and that their outputs are not simple correlates of our perception of color.
c. The difference between the peaks of the spectral luminous efficiency functions for scotopic and photopic vision causes a change in the appearance of colored surfaces at different illumination levels.
  i. Reds look much darker than greens at night because the rods, the receptors that mediate scotopic vision, are insensitive to long wavelengths and much more responsive to green light. The change in peak wavelength sensitivity resulting from adaptation to low light levels is called the Purkinje shift.
d. Page 406: some women have four cone types instead of three (tetrachromats). These women have two different types of long-wavelength cone because each of their two X chromosomes codes for a slightly different photopigment.
  i. Many birds have more than two cone types. The increased color perception implied by such a large number of cone types may reflect their need to discriminate food against cluttered backgrounds, which would effectively act as camouflage without very good color perception. Increasing the number of receptor types reduces visual acuity, because the extra cones have to be fitted into each image patch dedicated to each surface patch.
e. Page 408: The reason that metamers exist (page 407 discusses metamers) is that each cone has a broad tuning width. This implies that the output of any single cone cannot specify the wavelength of the light stimulating it, simply because many different wavelengths can activate it. In other words, once a photon of a particular wavelength is absorbed by a cone, the cone's response is identical regardless of the wavelength of the absorbed photon. This is called the principle of univariance.
  i. Example: the tuning curve of the L-cone. If we shine red light at a single wavelength of 600 nm on this cone, it produces some output. But we can obtain exactly the same output using another wavelength: there is another point along the cone's tuning curve (at about 525 nm) where the cone's output matches its output at 600 nm.

So the brain cannot tell, simply from the cone's output, whether the light is at 600 nm or at the shorter wavelength, because when both lights have the same intensity they give the same receptor output.

22. What are the functional benefits of encoding of color by color-opponent cells, and what is the physiological evidence for them?

Answer:
a. Page 409: One functional consequence of encoding color with color-opponent cells is that it is possible to null off the perception of red from a red light by mixing it with a green light, and vice versa. However, when red and green are mixed in suitable proportions to eliminate the perception of either red or green, the result is not the perception of grey but of yellow. Even when the outputs of the L- and M-cones have been nulled off within the red-green channel, their combined activities are still "in force" to create a signal that leads to the perception of yellow, mediated through the blue-yellow channel. These opponent channels are optimal for transmitting as much information as possible.
b. Page 411: Physiological support for color opponency comes from color-sensitive retinal ganglion cells. Retinal bistratified ganglion cells display blue-yellow opponency. The response to yellow light is obtained by combining the outputs of cones sensitive to red and green light. Cells of this type have been shown to be excited by blue and inhibited by yellow, and this is true over all parts of the receptive field, so such cells display only spectral opponency, not spatial opponency. This blue-yellow channel seems to be associated with the konio cells of the lateral geniculate nucleus.
  i. Retinal midget ganglion cells have an output which is the difference between the outputs of cones sensitive to long and medium wavelengths, but only in the center of the retina. These cells have a center-surround receptive field and display spatial opponency (e.g., red light may excite the central region and inhibit the surround of one cell's receptive field, whereas another cell may show the same behavior for green light). These cells project to the parvocellular layers of the LGN.

23. Review the costs and benefits of some different eye designs, including ours.

Answer:
Human eyes
Cost: 1. I asked Professor MacLeod about the cost of the human eye after class, because from what I understood the human eye has close to the ideal anatomy. He told me that the eye could be bigger, to reduce diffraction, but that would require more space in the skull, which would mean reducing brain size. So there really isn't a clear cost; if anyone can think of one, please let me know! Thanks!
Benefits: 1. Page 406: Having three cone types with broadly tuned and overlapping wavelength sensitivities provides measurements of the spectral luminance distribution at each location in the retinal image. 2. The human eye is large enough to capture fine detail in the surroundings without being limited by diffraction. If human eyes were compound eyes, each facet would have to be large enough to fight diffraction, and many facets would be needed to capture the detail that a regular human eye already captures.

23. Review the costs and benefits of some different eye designs, including ours. Answer:!
Human eyes
Cost: 1. I asked Professor MacLeod about the cost of the human eye after class, because from what I understood the human eye seems close to ideal. He said the eye could be bigger, to reduce diffraction, but that would require more space in the skull and therefore a smaller brain. So there really isn't an obvious cost, but if anyone can think of one please let me know! Thanks!
Benefits: 1. Page 406: having three cone types with broadly tuned and overlapping wavelength sensitivities provides measurements of the light spectrum at each location in the retinal image. 2. The human eye is large enough to capture detail in the surroundings without being limited by diffraction. If the human eye were a compound eye, each facet would have to be large enough to fight diffraction, and many facets would be needed to capture the detail that a single human eye already captures.

Birds' eyes
Cost: 1. Page 406: having many receptor types comes at the price of reduced visual acuity, because the extra cones have to be fitted into each image patch dedicated to each surface patch if the color of that image patch is to be sampled by every cone type. Color discrimination ability may reflect an evolutionary trade-off with visual acuity.
Benefits: 1. Page 406: birds have more than three cone types, with the pigeon having four and the chicken having five, giving them increased color perception.

Squid and octopus eyes
Cost: 1. Page 134: evolution discovered that having the receptors at the back of the retina provides a way of giving mammalian eyes built-in "sunblock" protection. Squid and octopus do not have this sunlight problem in their aquatic world.
Benefits: 1. Page 134: they have evolved eyes similar to our own, except that their receptor layer is on the surface of the retina, where light can strike the receptors directly. 2. https://darchive.mblwhoilibrary.org/bitstream/handle/1912/224/chapter... They can quickly reconvert all-trans retinal back to 11-cis retinal, whereas in human eyes this takes longer because a substantial blood supply is needed to convert all-trans back to 11-cis retinal. (From class: I'm sure this was the point he was getting at, but I can't remember everything he said. If anyone wrote more of it down please let me know! Thanks.)

Mantis shrimp eyes
Cost: 1. Increased color perception at the price of decreased visual acuity.
Benefits: 1. Page 406: mantis shrimp have the largest known number of photopigments. 2. Packing in so many cone types suggests that this shrimp may recognize significant entities in its visual world in terms of color rather than shape.

Jumping spider eyes
Cost: I wasn't able to come up with one.
Benefits: 1. Page 56: they scan the scene using body movements and by moving the retinae of their forward-pointing eyes beneath lenses that are fixed in position in the spider's carapace (a rigid material that acts as an external skeleton). Insect prey can be detected from about 30-40 cm; if prey is detected the spider advances slowly and, when near enough, jumps and grabs the prey with its jaws. 2. Jumping spiders are experts at recognizing prey using vision. 3. They have eight eyes, each similar in design to the human eye.
a. Six of these eyes have fish-like lenses, giving them a wide field of view; this allows the spider to detect the presence of moving objects and prey and then quickly orient its body to point its high-resolution eyes toward the object of interest.

b. The two forward-looking eyes have very high resolution.

24. What is the role of "top-down" processes in vision, and how would you expect them to be implemented physiologically? Answer:!
I believe this refers to Chapter 13 and the Bayesian framework: the information we have (from the retina) is combined with extra knowledge (our prior experience) in order to obtain the information we want (the object in view). Example: the snake and the elephant. What you see depends on your experience; if you had never seen either before, you would have no idea whether the image showed a snake or an elephant. The likelihood is combined with the prior probability of each object to obtain an informed guess as to which object is most likely to have generated the observed image:

Posterior probability = (Likelihood × Prior probability) / Constant

The object with the highest posterior probability is taken to be the most likely interpretation of the image: a high value (e.g. 0.99) makes that object a far more likely interpretation than one with a low value (e.g. 0.01). Example:

Object   Posterior probability
Box      0.01
Cup      0.60
Bowl     0.99

The highest value, 0.99, means the brain is most likely to interpret the object as a bowl, because more neurons fire for that interpretation. However, strong prior knowledge can override this process.
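A minimal sketch of the posterior computation in Python. The likelihood and prior numbers and the posteriors helper are made up for illustration (the 0.01 / 0.60 / 0.99 values above are treated as unnormalized scores); the constant in the formula is just the sum of likelihood × prior over all candidate objects, which makes the posteriors sum to 1.

```python
def posteriors(likelihoods, priors):
    """Posterior = (likelihood x prior) / constant, where the constant
    normalizes the scores so that they sum to 1 across all candidates."""
    scores = {obj: likelihoods[obj] * priors[obj] for obj in likelihoods}
    constant = sum(scores.values())
    return {obj: score / constant for obj, score in scores.items()}

# Hypothetical numbers: how well the retinal image fits each object (likelihood)
# and how often each object is encountered (prior).
likelihoods = {"box": 0.02, "cup": 0.55, "bowl": 0.90}
priors      = {"box": 0.30, "cup": 0.40, "bowl": 0.30}

for obj, p in posteriors(likelihoods, priors).items():
    print(obj, round(p, 2))
# bowl comes out highest, so it is the brain's best guess; a very strong
# prior in favour of 'cup' could still override that choice.
```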

25. Show for the various types of retinal cell how their interconnections with sign-inverting or sign-conserving synapses account for the polarity (hyperpolarization vs. depolarization), or increased vs. decreased firing, of each cell's response to a small spot centered in the cell's receptive field. Answer:!
There are on-center/off-surround and off-center/on-surround cells that feed into bipolar cells, which in turn feed into the retinal ganglion cells and the convolution array. When a stimulus hits the center, an on-center cell sends excitatory signals and an off-center cell sends inhibitory signals. Excitatory signals correspond to depolarization of the cell (making its membrane potential more positive) and inhibitory signals to hyperpolarization (making it more negative). When the stimulus hits the center of such a cell its firing rate increases; when the stimulus falls on the outer edge of the receptive field the firing rate decreases because of the inhibitory surround, which reduces the cell's influence on the bipolar cells. If instead the stimulus misses the surround and the cell fires excitatory signals, it influences the bipolar cells more.
These signals are sent to the bipolar cells in combination with those of other on- and off-center cells. Whichever signal dominates, excitatory or inhibitory, is passed on through the convolution array, where further inputs are combined. The bipolar cells effectively add up the total positive and negative input: the on-center cell can be modelled with a weight of +1 at the center and -1/6 at each of the six surrounding positions, so the surround weights total -1 and exactly balance the center. If more of the stimulus falls on the outer -1/6 positions, the cell sends inhibitory signals to the convolution array.
In the convolution array the inputs of multiple ganglion cells are combined. With a stronger stimulus on one half of the array and a weaker stimulus on the other half, an edge appears in the array's output. Worked example from the notes, with each row of the image on the left and the convolution output on the right:

60  60  54  54        0  -2  +2  0
60  60  54  54        0  -2  +2  0
60  60  54  54        0  -2  +2  0
60  60  54  54        0  -2  +2  0

The output sums to zero wherever the image is uniform, leaving non-zero values only where the two sides differ, so the -2/+2 pair marks the edge of the stimulus.
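A minimal sketch of the convolution step in Python with NumPy. The receptive-field weights of +1/3, -2/3, +1/3 are an assumption, chosen only because they reproduce the 0, -2, +2, 0 output of the worked example above; the actual weights used in the course text may differ.

```python
import numpy as np

# One row of the image from the notes: a region of 60s meeting a region of 54s.
row = np.array([60, 60, 60, 60, 54, 54, 54, 54], dtype=float)

# Assumed 1-D receptive-field profile: -2/3 at the center, +1/3 on each flank.
kernel = np.array([1.0, -2.0, 1.0]) / 3.0

response = np.convolve(row, kernel, mode="same")
# Drop the two end values, which are artifacts of zero-padding at the borders.
print(response[1:-1])   # approximately [ 0.  0. -2.  2.  0.  0.]
# The response is zero wherever the luminance is uniform and non-zero only at
# the step from 60 to 54, so the -2/+2 pair marks the edge.
```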

26. What causes myopia, and how does it develop? Answer:!
Myopia occurs when the images of distant objects are formed in front of the retina, making them blurry, because the eye is too long from front to back. The eye cannot focus the image properly, which reduces the clarity and acuity of distant objects. It can be caused by hereditary factors, with refractive errors passed from parents to children to grandchildren. More recent studies have shown that the environment can also account for these refractive errors: looking at objects too closely during early developmental stages can cause abnormal focusing, which sends signals to the body to grow the eye to compensate, even though there is nothing actually wrong with the size of the eye. These growth signals enlarge the eye so that near images focus better on the retina, and the enlarged eye is then unable to focus properly on distant objects.

27. What do we know about the physiological basis of the ability to recognize faces and other objects visually? Answer:!
Structural descriptions: objects are described in terms of parts. Take the letter T: all T's share a common structure, a horizontal bar and a vertical bar. The visual system first breaks any pattern of the T down and separates it into parts; this is called the grouping process. Parameters then describe how the parts join, for example the 90-degree angle where the two bars of the T connect. This information is combined with prior information (the Bayesian approach) and the brain finds a match between the input and a stored model. Recognition is thus a process of both bottom-up and top-down processing.
The criteria for a good shape representation are:
- Accessibility.
- Scope and uniqueness: these matter for storing and matching object models because they reduce the possibilities that have to be considered (for example, if something has four legs it is probably not human, which narrows down the candidates).
- Classification and identification (stability and sensitivity): shared common properties (four legs, fur, two eyes, and so on) group objects such as dogs and cats. This works somewhat like the COHORT model: if a word starts with H there are many candidates; adding A narrows the set to words starting with HA; adding T narrows it to words starting with HAT, and if the word is "hat" it is selected. Here it is legs, arms, or other features that narrow the classification down to the correct or most likely object (a toy sketch of this narrowing follows below).
There is also the notion of geons, a limited set of basic shapes that, combined a few at a time, make up all other shapes.
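A toy sketch of the feature-by-feature narrowing described above, in Python. The stored models, their features, and the narrow helper are all made up for illustration; they are not a model taken from the course.

```python
# Made-up object models with made-up features.
models = {
    "dog":   {"legs": 4, "fur": True,  "barks": True},
    "cat":   {"legs": 4, "fur": True,  "barks": False},
    "human": {"legs": 2, "fur": False, "barks": False},
}

def narrow(candidates, feature, value):
    """Keep only the models consistent with the newly observed feature."""
    return {name: feats for name, feats in candidates.items()
            if feats.get(feature) == value}

candidates = dict(models)
for feature, value in [("legs", 4), ("fur", True), ("barks", False)]:
    candidates = narrow(candidates, feature, value)
    print(feature, "->", sorted(candidates))
# legs -> ['cat', 'dog'];  fur -> ['cat', 'dog'];  barks -> ['cat']
```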

Grandmother cell: this theory holds that a few selected cells are responsible for knowing the features of a face and thereby recognizing the person; such neurons would be experts in face/object recognition, responding to very particular things such as your grandmother or Jennifer Aniston. However, it is highly unlikely that single neurons carry these associations, since we can fit only so many neurons in our brains; it would be impossible to have a neuron for every object or face we recognize.

28. In what ways does perception resemble the generation and testing of scientific hypotheses? Discuss with reference to physiology/anatomy and phenomena such as ambiguous or impossible figures. Answer:!
Perception can be viewed as a construction, or hypothesis, about what is out there. The constructive process is like a hallucination, but one that is abandoned or revised as data arrive. An enormous amount of information is handled by top-down processing: there are connections from low to high processing levels as well as from high to low. According to Professor MacLeod, important things happen at high levels that influence the incoming data from the retina. Bottom-up processing (direct theory) and top-down processing (indirect theory) both influence the perception of ambiguous and impossible figures, although in these cases the effect is attributed mostly to top-down processing. Top-down perception involves making inferences about what we see and trying to make a best guess; prior knowledge and past experience are crucial. When we look at something, we develop a perceptual hypothesis based on prior knowledge. The hypotheses we develop are nearly always correct, but on rare occasions a perceptual hypothesis is disconfirmed by the data we perceive. In summary, a lot of information reaches the eye, but much is lost by the time it reaches the brain, so the brain has to guess what is being seen on the basis of past experience: we actively construct our perception of reality. This is how perception resembles scientific hypothesis generation and testing: it involves constantly proposing and testing hypotheses to make sense of the information presented to the sense organs. Our perceptions of the world are hypotheses based on past experience and stored information. Sensory receptors receive information from the environment, which is then combined with previously stored knowledge about the world built up through experience.
