Posterside Hangouts is a new Hangouts On Air series hosted by the Science on Google+ Community (http://goo.gl/uhJCN). The main goal of this HOA series is to recreate a poster-session-like atmosphere here on G+, where researchers can present their recent findings. Presentations will be grouped by discipline, and each presentation will last approximately 10–15 minutes.
Do you have a recent conference presentation, manuscript, or book that you would like to share with the Google+ community? Do you want to give your undergraduate or graduate students practice presenting their research? If yes, then let us know by filling out this short form: http://goo.gl/e0KPhE
================================
Psychology Talks for Posterside Hangouts #1, Authors (Affiliations)

When audition dominates vision: Evidence from cross-modal statistical learning
+Chris Robinson (The Ohio State University at Newark)

Automatic selection of eye tracking variables in visual categorization for adults and infants
+Samuel Rivera (The Ohio State University at Columbus)

Foreign accent does not influence cognitive judgments
+Andre L. Souza (Concordia University) and +Art Markman (The University of Texas at Austin)

Positive mood may enhance cognitive flexibility: Evidence from category learning
+Paul Minda (The University of Western Ontario) and +Ruby Nadler (The University of Western Ontario)
================================
Abstracts and Links

When audition dominates vision: Evidence from cross-modal statistical learning
Presenting information to multiple sensory modalities sometimes facilitates and sometimes interferes with processing of this information. Research examining interference effects shows that auditory input often interferes with processing of visual input in young children (i.e., auditory dominance effect), whereas visual input often interferes with auditory processing in adults (i.e., visual dominance effect). The current study used a cross-modal statistical learning task to examine modality dominance in adults. Participants ably learned auditory and visual statistics when auditory and visual sequences were presented unimodally and when auditory and visual sequences were correlated during training. However, increasing task demands resulted in an important asymmetry: Increased task demands attenuated visual statistical learning, while having no effect on auditory statistical learning. These findings are consistent with auditory dominance effects reported in young children and have important implications for our understanding of how sensory modalities interact while learning the structure of cross-modal information.
Link to Manuscript: http://goo.gl/VFBVkD
Personal Website: http://goo.gl/glUXv2

Automatic selection of eye tracking variables in visual categorization for adults and infants
We present a computational approach for the selection of diagnostic eye tracking variables. Previous methods for the selection of eye tracking variables have been ad hoc or hypothesis driven. In the absence of a good hypothesis, researchers are left to experiment with many alternatives. To resolve this problem, we use feature extraction and classification algorithms from machine learning to automatically identify the eye tracking variables that best correlate within sample eye tracking sequences belonging to the same category yet discriminate between categories. This approach allows us to extract the few (i.e., two to four) most diagnostic features from a pool of dozens. While previous work required the testing of a large number of hypotheses, we demonstrate how the proposed methodology yields the same result without the need to test a large number of alternative hypotheses. Instead, our method is data driven, i.e., the resulting model is obtained from the data. The proposed methodology was verified in a visual categorization task with adults and infants. Here, we presented infants and adults with a category learning task and tracked their eye movements. We extracted an over-complete set of eye tracking variables encompassing durations, probabilities, latencies, and the order of fixations and saccadic eye movements. The method identified a small set of variables that allows us to predict category learning among adults and 6- to 8-month-old infants and suggests that the looking strategies of adults and infants are distinct.
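The selection criterion described above (variables that vary little within a category yet differ between categories) can be sketched with a simple Fisher-ratio score. This is only an illustrative sketch, not the authors' actual pipeline (see the source code link below for that); the function names and toy data here are hypothetical.

```python
from statistics import mean

def fisher_scores(samples, labels):
    """Score each eye-tracking variable by between-class vs. within-class variance.

    samples: list of feature vectors, one per trial/participant
             (e.g., fixation durations, latencies, saccade counts)
    labels:  list of category labels, same length as samples
    Returns (feature_index, score) pairs, highest score = most diagnostic.
    """
    n_features = len(samples[0])
    classes = sorted(set(labels))
    scores = []
    for j in range(n_features):
        column = [s[j] for s in samples]
        overall = mean(column)
        between = 0.0  # spread of class means around the overall mean
        within = 0.0   # spread of values around their own class mean
        for c in classes:
            vals = [s[j] for s, y in zip(samples, labels) if y == c]
            m = mean(vals)
            between += len(vals) * (m - overall) ** 2
            within += sum((v - m) ** 2 for v in vals)
        scores.append((j, between / within if within > 0 else float("inf")))
    return sorted(scores, key=lambda t: t[1], reverse=True)

def select_top(samples, labels, k=3):
    """Keep the few (e.g., two to four) most diagnostic variables."""
    return [j for j, _ in fisher_scores(samples, labels)[:k]]
```

For example, with toy data in which only variable 0 separates "learners" from "non-learners", `select_top` recovers that variable. In practice the manuscript's method works from a much larger, over-complete pool of variables and uses classification accuracy rather than a raw variance ratio.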
Source Code: http://goo.gl/bcVeOy
Link to Poster: http://goo.gl/U9WnbO
Link to Manuscript: http://goo.gl/b1xqfp
Personal website: http://goo.gl/M73p6B

Foreign accent does not influence cognitive judgments
A recent paper by Lev-Ari and Keysar (2010) reported that the processing fluency associated with non-native speech causes non-native speakers to sound less credible. The authors found that the same trivia statements were rated as less truthful when spoken by a non-native speaker of English. The present paper reports the results of three studies that attempted to replicate the findings of Lev-Ari and Keysar (2010) by focusing on processing fluency manipulations other than accent. Although we used virtually the same methodology as Lev-Ari and Keysar (2010), we failed to replicate the key finding that foreign-accented speech is less credible than native-accented speech. The implications of this finding are discussed.
Link to Manuscript: http://goo.gl/5hJFdR
Personal Website: http://goo.gl/EA3tEq

Positive mood may enhance cognitive flexibility: Evidence from category learning
Theories of mood and its effects on cognition suggest that positive mood may increase cognitive flexibility. This increased flexibility is associated with areas in the prefrontal cortex and the anterior cingulate cortex, both of which play crucial roles in hypothesis testing and rule selection. As such, cognitive tasks that rely on these behaviors may benefit from positive mood, whereas tasks that do not rely on these behaviors should not benefit from cognitive flexibility and/or positive mood. We explored this idea within a category-learning framework. Positive, neutral, and negative moods were induced in our subjects, and they learned either a rule-described or a non-rule-described category set. Subjects in the positive mood condition performed significantly better than subjects in the neutral or negative mood conditions when learning the rule-described categories. Mood had a less obvious effect on the learning of non-rule-described categories, but computational modelling suggested that subjects who learned in a positive mood were more likely to use the optimal learning strategy. These results have implications for theories of category learning, and also for understanding the effects of local, environmental factors like mood on performance.
Link to Manuscript: http://goo.gl/RhfXAL
Lab Website: http://goo.gl/oMHGmx
Background image source: http://goo.gl/6vJ0sH