who: Ehimwenma Nosakhare
(Microsoft New England Research and Development Center)
when: Sept 12 (Thursday) 11:45a – 1:15p
where: Computer Science Building Rm 150
food: Athena’s Pizza
“Probabilistic Latent Variable Modeling for Predicting Future Well-Being and Assessing Behavioral Influences on Stress”
Health research has an increasing focus on promoting well-being and positive mental health, both to prevent disease and to treat disorders more effectively. The availability of rich multi-modal datasets and advances in machine learning methods are now enabling data science research to begin objectively assessing well-being. However, most existing studies focus on detecting the current state, or predicting the future state, of well-being using stand-alone health behaviors. There is a need for methods that can handle the complex combinations of health behaviors that arise in real-world data.
Building on our previous work predicting future well-being, in this talk I’ll present a framework to 1) map messy multi-modal data collected in the “wild” to meaningful feature representations of health behavior, 2) uncover latent patterns comprising multiple health behaviors that best predict well-being, and 3) propose how these patterns may be used to recommend healthy behaviors to participants. We show how to use supervised latent Dirichlet allocation (sLDA) to model the observed behaviors, and we apply variational inference to uncover the latent patterns. Implementing and evaluating the model on 5,397 days of data from a group of 244 college students, we find that these latent patterns are indeed predictive of self-reported stress, one of the largest components affecting well-being. We investigate the modifiable behaviors present in these patterns and uncover some of the ways in which they work together to influence well-being.
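The topic-model-plus-response pipeline described in the abstract can be sketched in a few lines. This is an illustrative stand-in only: true sLDA learns the topics and the response jointly, which scikit-learn does not provide, so the sketch uses unsupervised LDA (fit by variational inference, scikit-learn's default) followed by a separate regression, all on synthetic data.

```python
# Illustrative sketch only: sLDA is not available in scikit-learn, so this
# approximates the pipeline with unsupervised LDA features feeding a
# regression on a well-being outcome. All data here is synthetic.
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical inputs: each row is one participant-day; each column counts
# how often a discretized health behavior (e.g., "slept 7-8h", "exercised")
# was observed that day.
n_days, n_behaviors, n_patterns = 500, 20, 5
X = rng.poisson(1.0, size=(n_days, n_behaviors))
stress = rng.normal(size=n_days)          # self-reported stress (synthetic)

# Step 1: uncover latent behavior patterns (topics). scikit-learn's LDA is
# fit with variational Bayes by default.
lda = LatentDirichletAllocation(n_components=n_patterns, random_state=0)
theta = lda.fit_transform(X)              # per-day mixture over patterns

# Step 2: relate the latent patterns to the outcome. (sLDA would learn the
# patterns and this mapping jointly; the two-stage fit is only a stand-in.)
reg = LinearRegression().fit(theta, stress)
print("pattern weights on stress:", reg.coef_)
```

Each row of `theta` is a probability distribution over the latent patterns for one participant-day, so the regression coefficients indicate which behavior patterns move with stress in this toy setup.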
This work contributes a new method using objective data analysis to help individuals monitor their well-being using real-world measurements. Insights from this study advance scientific knowledge on how combinations of daily modifiable human behaviors relate to human well-being.
Ehi Nosakhare is an AI Data Scientist at Microsoft’s New England Research and Development Center (NERD). She designs, develops, and leads the implementation of machine learning solutions in application projects for Microsoft’s products and services. In August 2018, she earned her Ph.D. in Electrical Engineering and Computer Science (EECS) from the Massachusetts Institute of Technology (MIT), Cambridge, MA. Her Ph.D. research focused on probabilistic latent variable models and applying them to understand subjective well-being. She is generally interested in developing interpretable ML models and using these models to solve real-world problems; as a result, she is curious about the ethical implications of AI/ML. Ehi got her S.M. in EECS from MIT and graduated with a B.Sc. in Electrical Engineering, summa cum laude, from Howard University, Washington, DC. As a student, she completed internships at Microsoft and IBM T. J. Watson Research Center. She is a recipient of a best paper award at the NeurIPS ML for Healthcare Workshop. In 2017, she was an organizer of the Women in Machine Learning (WiML) workshop, co-located with NeurIPS. Ehi has been honored as a Tau Beta Pi Scholar and Fellow. In her spare time, she enjoys reading and re-learning to play the cello.
Lisa Sanders (UMass PBS) will present “A Potential Measure of Phonological Processing During Natural Speech Comprehension: One Year Later” in the Cognitive Brown Bag meeting at noon Weds. Sept. 11. These meetings are held in Tobin 521B and all are welcome. The schedule for the rest of the semester follows the abstract.
Abstract. Last year I told the cognitive brown bag group about a line of research that I wasn’t quite sure what to do with. Feedback during that talk led to a follow-up study with promising preliminary results and an upcoming R01 submission. This year I can tell you about two distinct measures of phonetic and phonological processing during natural speech comprehension.
9/18 – p value discussion
10/2 – Ethics presentation (second year students)
10/9 – Erika Mayer (UMass Linguistics)
10/16 – Stacey Wetmore (Roanoke College)
10/23 – Glenn Baker (Mt. Holyoke)
10/30 – Shota Momma (UMass Linguistics)
11/6 – Julian Jara-Ettinger (Yale)
11/20 – Xingshan Li (Chinese Academy of Sciences)
12/4 – Guillaume Pagnier (Brown Ph.D. student)
12/11 – Ibrahim Dahlstrom-Hakki (TERC)
The cognitive brown bag speaker on Wednesday, April 24 (12:00, Tobin 521B) will be Merika Wilson (https://www.umass.edu/pbs/people/merika-wilson). Title and abstract are below. All are welcome.
The Role of Conjunctive Representations in Memory
Evidence suggests that structural or functional changes in the medial temporal lobe (MTL) impair long-term declarative memory, yet the reason why this specific region of the brain is critical for memory is not fully understood. One theory, the Representational-Hierarchical account, proposes that some memory deficits may reflect impairments in the representations that underlie memory processes. This theory makes two specific predictions. First, recognition memory performance in participants with compromised MTL structures should be impaired by feature-level interference, in which studied items contain many shared, and thus repeatedly appearing, perceptual features. Second, if the interference in a recognition memory task (i.e., the information that repeats across items) resides at a higher level of complexity than simple perceptual features, such as semantic gist, participants with compromised MTL structures should be less impacted by such interference than participants with intact MTL structures. We tested these predictions using the Deese-Roediger-McDermott paradigm, creating feature-level (i.e., perceptual) interference with phonemically/orthographically related word categories and higher-level associative interference with semantically related word categories. The current study extends previous findings from this paradigm with older adults, who are thought to have age-related changes to MTL structures, to two individuals with more extensive MTL damage.
Veneeta Dayal of Yale University will present “The Fine Structure of the Interrogative Left Periphery” in the GLSA Linguistics colloquium series Friday April 12th at 3:30. All are welcome!
Abstract: In this talk I explore the possibility that there are three points on the left periphery where interrogative meaning is built up: CP+WH, Force-P+Q, and SAP+ASK:
[SAP SA⁰+ASK [Force-P Force⁰+Q [CP C⁰+WH [TP]]]]
At CP, the +WH specification takes the TP denotation and creates a set of propositions, the semantic type for questions. At SAP, the question is anchored to the context of utterance via speaker and addressee co-ordinates. CPs are canonically what we find in complement positions, SAPs what we find in matrix questions and quotations. This two-way distinction, I would venture to say, is relatively uncontroversial or at least less radical sounding than the postulation of a three-way distinction.
I argue for a third structural position, in between CP and SAP, with a distinct semantic profile. I call this position Force-P+Q. While the term Force-P is familiar from Rizzi (1997), the characterization of this position is likely different from what has so far been assumed in the literature. I argue that Force0+Q takes a set of propositions (a question denotation) and turns it into a centered question, a question that is crucially active for someone. This allows Force-P to either feed into SAP, and be linked to a contextually provided anchor, or enter into a complementation relation with a predicate and be linked to an argument of that predicate.
The empirical justification for the three-way distinction in interrogative syntax-semantics comes from the following inter-related phenomena, which will be discussed in some detail in the course of the talk: embedding predicates, subject-aux inversion, biased questions, (polar) question particles, intonational contours, alternative vs. polar questions. In doing so, I draw on earlier collaborative work with Jane Grimshaw (Dayal and Grimshaw 2009) and Rajesh Bhatt (Bhatt and Dayal 2014 and subsequent versions), while absolving them of all responsibility for anything in this proposal that they may not have signed on to.
The cognitive brown bag this week on Wednesday April 10 at 12:00 in Tobin 521B will be given by Sandarsh Pandey (https://www.umass.edu/pbs/people/sandarsh-pandey). Title and abstract are below. All are welcome.
Investigating the properties of hierarchical ensemble representations
An ensemble representation contains the global summary statistic (mean, variance, etc.) of a group of individual representations. Individual ensemble representations of different features (orientation, size, facial expression) have been extensively studied, but little is known about how multiple ensemble representations interact with one another. In our study, we generate a hierarchical stimulus by spatially arranging small circles of varying sizes to form several bigger circles of varying sizes. Participants either estimate the mean size of the smaller circles (lower-level ensemble representation) or the mean size of the bigger circles (higher-level ensemble representation). We were interested in answering two questions: whether the two representations are independent of one another, and whether there is a cost involved in holding both representations simultaneously (compared to holding just a single lower- or higher-level representation). Results indicated that the lower-level ensemble representation biased the estimation of the higher-level ensemble representation, though the reverse was not true. We also found that, despite ensemble encoding being fast and automatic, there was an increased cost involved in holding both ensemble representations simultaneously.
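The two levels of summary statistic in this design can be illustrated with a toy sketch. The sizes below are synthetic placeholders, not the study's actual stimuli; the point is only that the hierarchical display supports a distinct mean at each level.

```python
# Toy illustration (synthetic sizes, not the actual stimuli): small circles
# are grouped to form larger circles; the two ensemble representations are
# the mean size at each level of the hierarchy.
import numpy as np

rng = np.random.default_rng(1)

n_big, n_small_per_big = 4, 6
# One row of small-circle sizes per big circle they compose.
small_sizes = rng.uniform(0.5, 1.5, size=(n_big, n_small_per_big))
# Sizes of the big circles formed by those arrangements.
big_sizes = rng.uniform(4.0, 8.0, size=n_big)

lower_level_mean = small_sizes.mean()    # lower-level ensemble: small circles
higher_level_mean = big_sizes.mean()     # higher-level ensemble: big circles

print(lower_level_mean, higher_level_mean)
```

An observer in the study reports an estimate of one of these two means; the bias result above corresponds to the lower-level mean leaking into estimates of the higher-level one.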
Rethinking Social Networks in the Era of Computational Social Science
James Kitts, Professor, Department of Sociology, University of Massachusetts Amherst
Friday, April 5, 2019 • 12:30-2:00PM (lunch served at 12:15)
Computer Science Building • Room 150/151
Abstract — Social network analysis has proliferated throughout the social sciences over the past 50 years. In a recent paper I have argued that this work has conceptualized ‘social ties’ in four fundamentally different ways – as socially constructed role relations such as friendship or co-authorship; interpersonal sentiments such as liking or hatred; behavioral interactions such as communication or scholarly citations; or access to information or other resources. In this presentation I will discuss the interplay of these concepts, consider where ties (and non-ties) are likely to match across these four domains, and thus assess where we may apply theories based on one network concept (e.g., sentiment ties of liking and disliking) to data representing another (e.g., interaction as logs of e-mails sent). I will then discuss some empirical lenses emerging from computational social science, such as wearable sensors, location-aware devices, online calendars, logs of phone calls, e-mails, or online transactions. I hope to inspire an interdisciplinary conversation about how these time-stamped event series correspond to the social science concepts of social networks above. The associated paper is available at this link.
Bio — James Kitts is a professor of sociology and was a founding co-director of the Computational Social Science Institute at UMass. He earned his Ph.D. from Cornell University in 2001 and previously held faculty appointments at Columbia University, Dartmouth College, and the University of Washington. Bridging computational social science, sociology, and public health, James has worked on methods for detecting networks of social interaction using wearable sensors, analyzed the network dynamics of adolescent friendships and inter-hospital patient transfers, modeled opinion polarization on influence networks, and conducted field research on dietary norms in networks of militant vegans. He is Principal Investigator on an NIH R01 grant investigating peer influence in health behavior on adolescent social networks in four urban middle schools, a collaboration with CSSI colleagues John Sirard of Kinesiology (Co-PI), Mark Pachucki of Sociology, and Krista Gile of Mathematics & Statistics, along with Lindiwe Sibeko of Nutrition.
Sabine Iatridou of MIT will present “Negation Licensed Comments” in the GLSA colloquium series Friday April 5th at 3:30 in ILC N400. All are welcome, and a reception will follow.
The cognitive brown bag on Weds. April 3 will feature Junha Chang (https://www.umass.edu/pbs/people/junha-chang). As always, the talk will be at noon in Tobin 521B. Title and abstract are below. All are welcome.
Do Observers Integrate Separate Features to Make An Integrated Target for Better Search Guidance?
In most search tasks, observers are given target information as a cue before a search array appears. It is well established that observers actively use the form of the target cue that is given, and that an exact target cue benefits search performance. However, it is unclear whether observers can voluntarily integrate individual target feature information into the predicted target form and earn the same benefits. To test this, we compared behavioral data and the amplitude of the Contralateral Delay Activity (CDA), which indexes the number of representations held in visual working memory (VWM), between two cue conditions. In a split cue condition, participants viewed two separate cues, one for each target feature (i.e., a colored rectangle and an orientation bar), and were instructed to look for a target defined by the conjunction of the two cued features in the following search array. In an integrated cue condition, participants viewed two identical conjunction targets as cues (i.e., two colored orientation bars). If participants integrate the two target features into an object in the split cue condition, we predict similar RTs and CDA amplitudes between the two cue conditions. In contrast, if participants maintain the two target features separately in the split cue condition, we predict longer RTs and larger CDA amplitudes in the split cue condition than in the integrated cue condition. So far, we have found mixed results: longer RTs in the split cue condition, along with a numerically larger but not statistically significant CDA amplitude. This pattern might suggest that participants maintained an integrated target representation in VWM but guided attention by each feature.