Thursday, April 28, 2011
Probabilistic knowledge and uncertain input in rational human sentence comprehension
Prof. Roger Levy
Abstract -- Considering the adversity of the conditions under which linguistic communication takes place in everyday life -- ambiguity of the signal, environmental competition for our attention, speaker error, our limited memory, and so forth -- it is perhaps remarkable that we are as successful at it as we are. Perhaps the leading explanation of this success is that (a) the linguistic signal is redundant, (b) diverse information sources are generally available that can help us infer something close to the intended message when comprehending an utterance, and (c) we use these diverse information sources very quickly and to the fullest extent possible. This explanation suggests a theory of language comprehension as a rational, evidential process. In this talk, I describe recent research on how we can use the tools of computational linguistics to formalize and implement such a theory, and to apply it to a variety of problems in human sentence comprehension, including classic cases of processing difficulty both when structural ambiguity is involved and when it is absent. Using a combination of methods from rational analysis, statistics, and computational linguistics, we are able to derive and empirically verify a law-like relationship between a word's probability and its processing time in the reading of naturalistic texts. In addition, I address a number of phenomena that remain clear puzzles for the rational approach, due to an apparent failure to use information available in a sentence appropriately in global or incremental inferences about the correct interpretation of a sentence. I argue that the apparent puzzle posed by these phenomena for models of rational sentence comprehension may derive from the failure of existing models to appropriately account for the environmental and cognitive constraints -- in this case, the inherent uncertainty of perceptual input, and humans' ability to compensate for it -- under which comprehension takes place.
I present a new probabilistic model of language comprehension under uncertain input and show that this model leads to solutions to the above puzzles, and new behavioral data in support of novel predictions made by the model. Finally, I touch on some of our recent work using reinforcement learning to study optimal eye-movement control policies for reading. More generally, I suggest that appropriately accounting for environmental and cognitive constraints in probabilistic models can lead to a more nuanced and ultimately more satisfactory picture of key aspects of human cognition.
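The law-like relationship between a word's probability and its processing time is usually stated in terms of surprisal: reading time grows with the word's negative log probability in context. A minimal sketch of the computation, with hypothetical bigram counts standing in for a trained language model:

```python
import math

# Hypothetical bigram counts, purely for illustration.
bigram_counts = {
    ("the", "dog"): 70,
    ("the", "horse"): 20,
    ("the", "barn"): 10,
}
context_total = sum(n for (ctx, _), n in bigram_counts.items() if ctx == "the")

def surprisal(context, word):
    """Surprisal in bits: -log2 P(word | context)."""
    p = bigram_counts[(context, word)] / context_total
    return -math.log2(p)

# Less probable continuations carry higher surprisal,
# predicting longer reading times.
print(surprisal("the", "dog"))   # common continuation: low surprisal
print(surprisal("the", "barn"))  # rare continuation: high surprisal
```

The point of the sketch is only the shape of the relationship: halving a word's conditional probability adds a constant increment (one bit) to its surprisal, and hence, on this account, a roughly constant increment to its processing time.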
Tuesday, April 26, 2011
Held et al. Nature Neuroscience
Would a blind subject, on regaining sight, be able to immediately visually recognize an object previously known only by touch? We addressed this question, first formulated by Molyneux three centuries ago, by working with treatable, congenitally blind individuals. We tested their ability to visually match an object to a haptically sensed sample after sight restoration. We found a lack of immediate transfer, but such cross-modal mappings developed rapidly.
There is also a cool paper from Alex Pouget's lab in this month's NN:
Perceptual learning as improved probabilistic inference in early sensory areas
Monday, April 25, 2011
Sunday, April 24, 2011
Pearson, Heilbronner, Barack, Hayden & Platt
When has the world changed enough to warrant a new approach? The answer depends on current needs, behavioral flexibility and prior knowledge about the environment. Formal approaches solve the problem by integrating the recent history of rewards, errors, uncertainty and context via Bayesian inference to detect changes in the world and alter behavioral policy. Neuronal activity in posterior cingulate cortex – a key node in the default network – is known to vary with learning, memory, reward and task engagement. We propose that these modulations reflect the underlying process of change detection and motivate subsequent shifts in behavior.
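The change-detection computation described above can be caricatured as a likelihood-ratio test on recent outcomes. In this toy sketch (the Bernoulli reward assumption, baseline rate, and window size are all illustrative, not taken from the paper), positive evidence means the recent reward history fits a new rate better than the old one:

```python
import math

def change_evidence(rewards, window=5, p_old=0.8):
    """Log-likelihood ratio that the recent reward rate differs from
    an assumed baseline rate p_old, for 0/1 (Bernoulli) rewards."""
    recent = rewards[-window:]
    p_new = max(min(sum(recent) / len(recent), 0.99), 0.01)  # recent empirical rate
    ll_new = sum(math.log(p_new if r else 1 - p_new) for r in recent)
    ll_old = sum(math.log(p_old if r else 1 - p_old) for r in recent)
    return ll_new - ll_old

history = [1, 1, 0, 1, 1, 0, 0, 0, 0, 0]  # reward rate drops midway
print(change_evidence(history))  # positive: evidence the world has changed
```

A full Bayesian change-point model would integrate over possible change times and new rates; the ratio above is just the simplest version of "has the world changed enough to warrant a new approach?".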
Friday, April 22, 2011
Saturday, May 14, 2011
9:00 am - 4:00 pm
Continental Breakfast at 8:30 am
University of California, San Diego
10100 John Jay Hopkins Drive, San Diego Supercomputer Center (Auditorium, Room B211)
San Diego, CA 92093-0523
Online registration and directions
Thursday, April 21, 2011
Why We Care: The Biology of Social Preferences
Michael L. Platt
Director, Center for Cognitive Neuroscience
Professor of Neurobiology and Evolutionary Anthropology
Abstract: Humans are, perhaps, unique amongst animals in the quality and intensity of other-regarding preferences (ORPs), and indeed ORPs are a prerequisite for the institutions that form the core of human society. As any parent can attest, ORPs like fairness and envy appear early in ontogeny, and, as any psychiatrist can attest, their derangement in mental disorders like psychopathy can have devastating consequences. Clearly, ORPs are high on the list of any putative uniquely human cognitive and emotional states. In this talk, I will present evidence that demonstrates, for the very first time, that rhesus macaques share some of the basic mental processes that motivate human ORPs. These findings suggest that ORPs in humans are rooted in fundamental cognitive mechanisms, such as vicarious reward, fictive learning, and attention to others, that evolved early in the primate clade to navigate complex social groups. My talk will focus on the biological mechanisms mediating simple social reward processing, fictive learning, and vigilance to the internal states of others—core systems that may support ORP.
Monday, April 18, 2011
Friday, April 15, 2011
Remembering events past: Parietal lobe contributions to episodic memory
Abstract -- Functional neuroimaging studies of episodic retrieval have revealed a surprisingly consistent pattern of retrieval-related activity in lateral posterior parietal cortex (PPC). Initial accounts of these PPC effects include reference to parietal contributions to attention and decision-making. In this talk, I will first evaluate the anatomical overlap of retrieval and attention effects in human lateral PPC. Review of the literature suggests that predominantly divergent subregions of lateral PPC are engaged during acts of episodic retrieval and during goal-directed and stimulus-driven attention, suggesting that lateral PPC retrieval effects reflect functionally distinct mechanisms from these forms of attention. Second, I will discuss data from a source memory paradigm that reveal four functionally distinct patterns of parietal activation. Consistent with the review of the attention and memory literatures, effects of graded memory strength (intraparietal sulcus, IPS) and of graded recollection (angular gyrus, AnG) appear anatomically distinct from lateral PPC regions previously implicated in attention, including topographically organized attention areas in IPS and the superior parietal lobule (SPL). By contrast, other retrieval effects in dorsal (SPL) and ventral (temporo-parietal junction, TPJ) PPC appear to anatomically converge with neural correlates of attention, suggesting an interaction between attention and episodic retrieval demands. Finally, I will discuss data documenting a relationship between mnemonic evidence – quantified as the degree to which cortical encoding patterns were reinstated at retrieval – and IPS activation during source memory decisions, suggesting a link between graded memory strength effects in IPS and the translation of mnemonic evidence to decision-making and action. Collectively, these data highlight the multi-component nature of episodic retrieval and memory-guided decision-making.
Thursday, April 14, 2011
Lennert & Martinez-Trujillo
ABSTRACT Neurons in the primate dorsolateral prefrontal cortex (dlPFC) filter attended targets from distracters through their response rates. The extent to which this ability correlates with the organism's performance, and the neural processes underlying it, remain unclear. We trained monkeys to attend to a visual target that differed in rank along a color-ordinal scale from that of a distracter. The animals' performance at focusing attention on the target and filtering out the distracter improved as ordinal distance between the stimuli increased. Importantly, dlPFC neurons also improved their filtering performance with increasing ordinal target-distracter distance; they built up their response rate in anticipation of the target-distracter onset, and then units encoding target representations increased their firing rate by similar amounts, whereas units encoding distracter representations gradually suppressed their rates as the interstimulus ordinal distance increased. These results suggest that attentional-filtering performance in primates relies upon dlPFC neurons' ability to suppress distracter representations.
Tuesday, April 12, 2011
This is the breakdown via CNN.
Monday, April 11, 2011
Salk Trustees Room
Nonlinear spatial integration in the retina
Professor of Physiology & Biophysics
University of Washington
Integration of signals originating at different times and/or spatial locations defines the stimulus features extracted and represented by a sensory system. Sensory integration is often assumed to be linear, as summarized by a cell’s receptive field. Yet some retinal ganglion cells violate this assumption of linearity, causing light inputs from different spatial locations to interact. Such nonlinear spatial integration, together with heterogeneity in the synaptic inputs from different spatial locations, dramatically enhances ganglion cell sensitivity to a variety of spatial features of the light inputs. I will describe work linking such feature selectivity to anatomical measures of the distribution of synaptic weights across a ganglion cell’s dendrites and physiological measures of nonlinear integration.
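The difference between linear and nonlinear spatial integration is easy to see in a toy model. In this sketch (weights and stimulus values are illustrative, not measured), a contrast-reversing pattern cancels completely under linear summation, but a cell that rectifies each subunit before summing, as in bipolar-cell subunit models of ganglion cells, responds robustly:

```python
import numpy as np

# Light input at four spatial locations within a receptive field:
# half the field brightens, half darkens (a contrast-reversing pattern).
stimulus = np.array([1.0, 1.0, -1.0, -1.0])
weights = np.array([0.25, 0.25, 0.25, 0.25])  # subunit weights (illustrative)

def linear_rf(stim):
    """Linear spatial integration: a single weighted sum over locations."""
    return float(weights @ stim)

def subunit_rf(stim):
    """Nonlinear integration: each subunit is rectified before summation,
    so opposite-sign inputs no longer cancel."""
    return float(np.sum(weights * np.maximum(stim, 0.0)))

print(linear_rf(stimulus))   # 0.0: the pattern cancels in a linear cell
print(subunit_rf(stimulus))  # 0.5: rectification reveals the spatial structure
```

Heterogeneity in the weights across subunits, as measured anatomically along a dendrite, would further shape which spatial patterns the rectified sum is selective for.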
Sunday, April 10, 2011
Translational Studies of Exploratory Behavior: From Rodents to Psychiatric Patients.
Mark Geyer, UCSD
The validation of animal models relevant to psychiatric disorders is accomplished (or not) by assessing their ability to predict features of the disorder(s) of interest. Such validation is constrained to aspects of disorders that can be assessed using non-verbal approaches. Hence, objective behavioral and biological measures that characterize psychiatric disorders and/or their treatments are essential to the validation of putative animal models of psychiatric phenomena. As one approach to establishing behavioral profiles that might provide criteria for validating animal models differentiating schizophrenia from bipolar mania, we developed a human “Behavioral Pattern Monitor” (BPM) analogous to the rodent BPM that has been well studied as an elaboration of the classic Open Field test. Patients in a Psychiatric Inpatient Unit are asked to wait in a room and are monitored via video for 15 minutes. The resulting profiles of behavior readily distinguish acutely ill schizophrenia and manic patients using measures that appear to parallel similar measures of mouse behavior. Mice in which the dopamine transporter is impaired pharmacologically or genetically exhibit profiles of behavior that are similar to those of manic bipolar patients and different from either schizophrenia patients or healthy comparison subjects. These and related approaches to developing cross-species translational tools for psychiatric drug discovery and validation will be discussed.
Friday, April 8, 2011
Mysore, Asadollahi & Knudsen
Essential to the selection of the next target for gaze or attention is the ability to compare the strengths of multiple competing stimuli (bottom-up information) and to signal the strongest one. Although the optic tectum (OT) has been causally implicated in stimulus selection, how it computes the strongest stimulus is unknown. Here, we demonstrate that OT neurons in the barn owl systematically encode the relative strengths of simultaneously occurring stimuli independently of sensory modality. Moreover, special “switch-like” responses of a subset of neurons abruptly increase when the stimulus inside their receptive field becomes the strongest one. Such responses are not predicted by responses to single stimuli and, indeed, are eliminated in the absence of competitive interactions. We demonstrate that this sensory transformation substantially boosts the representation of the strongest stimulus by creating a binary discrimination signal, thereby setting the stage for potential winner-take-all target selection for gaze and attention.
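The "switch-like" responses described above amount to a steep function of relative, not absolute, stimulus strength. A minimal sketch (the sigmoid form and gain are illustrative stand-ins, not the authors' fitted model):

```python
import math

def switch_response(s_in, s_out, gain=10.0):
    """Steep sigmoid of the relative strength of the stimulus inside the
    receptive field (s_in) versus the competitor outside it (s_out):
    the response jumps once s_in becomes the strongest."""
    return 1.0 / (1.0 + math.exp(-gain * (s_in - s_out)))

# Near-binary discrimination of "strongest stimulus", regardless of
# the absolute strengths involved.
print(switch_response(0.3, 0.7))  # competitor dominates: low response
print(switch_response(0.7, 0.3))  # in-RF stimulus strongest: high response
```

Because the output depends only on the difference in strengths, the same binary signal emerges for weak or strong stimulus pairs, which is what makes it a useful substrate for winner-take-all selection.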
Tuesday, April 5, 2011
How the Brain invents the Mind
When we look at other people, the features visible on the outside are only a small part of what we see. We are much more interested in seeing, or inferring, what's going on inside: other people's thoughts, beliefs and desires. If a person checks her watch, is she uncertain about the time, late for an appointment, or bored with the conversation? If a person shoots his friend on a hunting trip, did he intend revenge or just mistake his friend for a partridge? One of the most amazing discoveries of recent human cognitive neuroscience is that humans use a specific group of brain regions for thinking about thoughts. These brain regions are intrinsically interesting, and also provide a case study in the deeper and broader question: how does the brain - an electrical and biological machine - construct abstract thoughts?
Monday, April 4, 2011
Speech experts say that the video captures the twin boys on the cusp of language development, but that it’s more babble and mimicry than real conversation.
Friday, April 1, 2011
Merck Neurosciences Seminar Series
UCSD Neurosciences Graduate Program
Center for Neural Circuits and Behavior Large Conference Room (formerly CMG)
Dr. Xiaoqin Wang
Department of Biomedical Engineering
Johns Hopkins University
Information Processing in Primate Auditory Cortex
Xiaoqin Wang is a Professor of Biomedical Engineering, Neuroscience and Otolaryngology at Johns Hopkins and the Director of the Tsinghua-Johns Hopkins Joint Center for BME Research. His lab aims to understand brain mechanisms responsible for auditory perception and vocal communication in a naturalistic environment. They are interested in revealing neural mechanisms operating in the cerebral cortex and how cortical representations of biologically important sounds emerge through development and learning. Xiaoqin's lab uses a combination of state-of-the-art neurophysiological techniques and sophisticated computational and engineering tools to tackle research questions, using a highly vocal primate (common marmoset) as the model system.
Reinforcement learning and beyond: neural systems for valuation and learning in the human brain.
John P. O’Doherty
Division of Humanities and Social Sciences
Computation and Neural Systems program
California Institute of Technology
Interest in the computational and neural underpinnings of learned valuation and choice has surged in recent years. This interest can be attributed in large part to the observation that the phasic activity of dopamine neurons bears a remarkable similarity to prediction error learning signals derived from a family of abstract computational models collectively known as reinforcement learning (RL). In RL, prediction error signals are used to update predictions of future reward for different actions. These values are then compared in order to implement action selection. In this presentation I will outline evidence from functional neuroimaging studies in humans for the existence of computational learning signals such as prediction errors in the human brain. In particular, I will show that RL-related activity in the human dorsal striatum appears to be strongly linked to behavioral expression of instrumental learning and choice. I will then consider situations under which simple RL is unlikely to be sufficient to account for choice behavior. One form of learning incompatible with such a framework is observational learning, whereby the values of choices are acquired through observing the experiences of others as opposed to through direct trial-and-error experience. Another form of learning that cannot be accounted for by simple RL is “latent-learning”, whereby an animal can learn to take actions for reward on the basis of acquired “latent” knowledge about the structure of the environment even in the absence of explicit reinforcement during training. Using fMRI, I will provide evidence for the existence of distinct computational signals in the brain that could underpin each of these RL-independent types of learning. Finally, I will review the implications of these findings for our current understanding of the neural and computational basis of valuation and choice.
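The prediction-error signal at the heart of the abstract's first point can be sketched with the simplest delta-rule update, where the learning rate and reward values below are illustrative:

```python
alpha = 0.1  # learning rate (illustrative)
V = 0.0      # learned value of a reward-predicting cue

errors = []
for trial in range(30):
    reward = 1.0
    delta = reward - V   # prediction error: obtained minus expected reward
    V += alpha * delta   # value update driven by the error
    errors.append(delta)

print(errors[0])   # 1.0: the first reward is fully unexpected
print(errors[-1])  # near 0: the reward is now well predicted
```

This decline of the error signal across repeated cue-reward pairings is the pattern phasic dopamine activity famously resembles: large responses to unexpected rewards, shrinking as the cue comes to predict them.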