Monthly Archives: January 2016

Gross in CSSI seminar Fri. Jan. 29 12:30-2

The UMass Computational Social Science Institute invites you to the CSSI seminar, cosponsored this week by the Department of Political Science:

Justin Gross

Department of Political Science, University of Massachusetts Amherst
Connected by Comments: The Evolving Affect Network of Presidential Candidates on Twitter During the 2016 GOP Nomination Contest  

Friday, January 29, 2016 • 12:30 p.m.-2:00 p.m.
Computer Science Building, Room 150/151
Lunch begins at 12:00; talk begins at 12:30

Abstract:  The unprecedented number of serious candidates in the Republican Party’s 2016 nomination contest for the U.S. Presidency provides a rare opportunity to examine the changing nature of affective relationships among candidates. All seventeen major candidates have Twitter accounts and have tweeted comments about their opponents both before the campaigning began and over the course of the campaign for the GOP nomination. Political scientists, writing on the phenomenon of negative campaigning, have made a number of predictions about the conditions under which a campaign is likely to “go negative.” These researchers have concentrated on advertisements, but the number and variety of advertisements produced are highly dependent on a campaign’s resources. By contrast, candidates’ use of social media allows us to observe candidate interaction directly, in real time. Furthermore, the nature of “going negative” is more complicated in a crowded field and would seem to benefit from a network-analytic approach. Structural balance theory, in particular, provides some guidance in thinking about the dynamics of positive and negative affect, operationalized as a signed network. However, peer-group and organizational behavior, which commonly drive theories of structural balance, are rather distinct from the behavior of electoral competitors. How likely is it that, in an environment tending toward mutual animosity, a taste for structural balance will somehow prevail? I address this question, examine clustering patterns, and describe some highlights of the online conflicts and collaborations that have played out this election season.
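For readers new to signed networks: structural balance is assessed triad by triad, with a triad counted as balanced when the product of its three edge signs is positive. Below is a toy Python sketch (hypothetical candidates A–D and made-up signs, not data from the talk) of how such counts can be tallied.

    # Toy structural-balance check on a signed network
    # (hypothetical candidates and signs, for illustration only)
    from itertools import combinations
    import networkx as nx

    G = nx.Graph()
    # +1 = mutually positive mentions, -1 = mutually negative mentions (made up)
    edges = [("A", "B", -1), ("A", "C", -1), ("B", "C", +1),
             ("D", "A", -1), ("D", "B", -1), ("D", "C", -1)]
    for u, v, sign in edges:
        G.add_edge(u, v, sign=sign)

    balanced = unbalanced = 0
    for a, b, c in combinations(G.nodes, 3):
        # Only fully connected triads have a defined balance status
        if G.has_edge(a, b) and G.has_edge(b, c) and G.has_edge(a, c):
            product = G[a][b]["sign"] * G[b][c]["sign"] * G[a][c]["sign"]
            if product > 0:
                balanced += 1
            else:
                unbalanced += 1

    print(f"balanced triads: {balanced}, unbalanced triads: {unbalanced}")

A strong “taste for balance” would show up as relatively few unbalanced triads over time; the talk asks whether that pattern can survive an environment of mutual animosity.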

Bio:  Justin H. Gross is Assistant Professor of Political Science at UMass Amherst. He holds a Ph.D. in Statistics and Public Policy from Carnegie Mellon University. His applied research interests are in mass media and political communication, public opinion, and public policy. He works on methodological problems in measurement, text analysis, and network analysis, and is especially interested in methods that put statistical and computational tools to use in service of our ability to achieve rich qualitative insights. Recently, he has been collaborating with a cross-disciplinary team of political scientists and computational linguists, developing tools for the detection of issue frames and publicly expressed ideologies in text.

Mazumdar in CS Thursday 1/28 at 1 pm

Neural Auto-associative Memory Via Sparse Recovery

who: Arya Mazumdar, UMass CICS

when: 1:00pm Thursday, 1/28

where: cs151
pizza: Antonio’s
generous sponsor: Yahoo!

***In general, MLFL will be at 12pm this semester. However, this week it is at 1pm***

Abstract:

An associative memory is a structure learned from a dataset M of vectors (signals) in such a way that, given a noisy version of one of the vectors as input, the nearest valid vector from M (nearest neighbor) is provided as output, preferably via a fast iterative algorithm. Traditionally, neural networks are used to model the above structure. In this talk we propose a model of associative memory based on sparse recovery of signals. Our basic premise is simple. Given a dataset, we learn a set of linear constraints that every vector in the dataset must satisfy. Provided these linear constraints possess some special properties, it is possible to cast the task of finding the nearest neighbor as a sparse recovery problem. Assuming generic random models for the dataset, we show that it is possible to store an exponential number of n-length vectors in a neural network of size O(n). Furthermore, given a noisy version of one of the stored vectors corrupted in a linear number of coordinates, the vector can be correctly recalled using a neurally feasible algorithm.
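As a rough sketch of that premise (assuming the generic random subspace model above, not the speakers’ actual implementation): the learned linear constraints act like a parity-check matrix H with Hx = 0 for every stored x, so for a corrupted input y = x + e the product Hy = He depends only on the sparse corruption, which can then be recovered with a standard sparse-recovery solver.

    # Minimal sketch: associative recall cast as sparse recovery
    # (illustrative only; assumes the subspace model from the abstract)
    import numpy as np
    from sklearn.linear_model import OrthogonalMatchingPursuit

    rng = np.random.default_rng(0)
    n, k, N = 200, 40, 1000            # signal length, subspace dim., dataset size

    # Dataset: vectors lying in an (unknown) k-dimensional subspace
    basis = rng.standard_normal((n, k))
    data = basis @ rng.standard_normal((k, N))

    # "Learn" linear constraints H with H @ x ~ 0 for every stored vector:
    # left singular vectors of the data matrix with (near-)zero singular values
    U, s, _ = np.linalg.svd(data, full_matrices=True)
    rank = int(np.sum(s > 1e-8 * s[0]))
    H = U[:, rank:].T                  # (n - k) x n constraint matrix

    # Corrupt a stored vector in a few coordinates (sparse noise e)
    x = data[:, 0]
    e = np.zeros(n)
    support = rng.choice(n, size=10, replace=False)
    e[support] = rng.standard_normal(10)
    y = x + e

    # Since H @ x = 0, the "syndrome" H @ y equals H @ e; recover e sparsely
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False)
    omp.fit(H, H @ y)
    x_hat = y - omp.coef_

    print("recall error:", np.linalg.norm(x_hat - x))

The sketch only illustrates the reduction from nearest-neighbor recall to sparse recovery; the storage-capacity guarantees and the neurally feasible recall algorithm are what the talk develops.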

Instead of assuming the above subspace model for the dataset, we might assume that the data is a sparse linear combination of vectors from a dictionary (sparse coding). This very relevant model poses a significant challenge in designing associative memory and is one of the main problems we will describe. (This is joint work with Ankit Singh Rawat (CMU) and was presented in part at NIPS’15.)

Bio:

Arya Mazumdar is an assistant professor in the College of Information and Computer Sciences at the University of Massachusetts Amherst. From January 2013 until recently, Arya was an assistant professor at the University of Minnesota–Twin Cities, and from August 2011 to December 2012 he was a postdoctoral scholar at the Massachusetts Institute of Technology. He received his Ph.D. from the University of Maryland, College Park, in 2011. Arya is a recipient of the 2014-15 NSF CAREER award and the 2010 IEEE ISIT Student Paper Award. He is also the recipient of the 2011 Distinguished Dissertation Award at the University of Maryland. He spent the summers of 2008 and 2010 at Hewlett-Packard Laboratories, Palo Alto, CA, and IBM Almaden Research Center, San Jose, CA, respectively. Arya’s research interests include information and coding theory and their applications to networked systems and learning.

UMass CogSci Workshop Fri. Jan. 29th

The 2nd Annual UMass Cognitive Science Workshop will be held this Friday, Jan. 29th, from 2:30 – 5, in the Department of Linguistics on the 4th floor of the Integrative Learning Center. All are welcome, regardless of whether you have RSVP’d. Please circulate this announcement to departmental and other mailing lists.

We’ll have talks from (relatively) new additions to our CogSci faculty, in room N400:

2:30 Gaja Jarosz, Linguistics, “Nature vs. Nurture in Phonological Acquisition: Sonority Sequencing in Polish”
3:00 Brendan O’Connor, Computer Science, “Linguistic discovery in social media corpora: natural language processing meets sociolinguistics”
3:30 Christopher White, Music, “What are words and parts of speech in music?”

From 4 – 5, we’ll have a poster session outside N400, accompanied by snacks (we won’t be supplying drinks, but water will be available, and we can also brew tea or coffee for those who want it – ask Joe Pater). We have plenty of space for more posters. If you’d like to present one (previously presented work is the norm!), please fill out this Google form by the end of the day Wednesday:

http://goo.gl/forms/j85vUkrOu5

Stratos in CS on word learning Thursday, 1/21

who: Karl Stratos, Columbia

when: 12:00pm Thursday, 1/21

where: cs151
pizza: Antonio’s
generous sponsor: Yahoo!

***Note that this semester’s MLFL talks will be at a different time than last semester***

Abstract:

There has recently been much success in deriving rich, distributional representations of words from large quantities of unlabeled text. They include discrete representations such as agglomerative clusters (e.g., Brown clusters) and real-valued vectors such as word embeddings (e.g., word2vec). These lexical representations can be deployed off-the-shelf in a wide range of language processing tasks to help the model generalize at the word level.

In this talk, I will present a simple and efficient algorithm for learning such representations. The algorithm is spectral—i.e., it involves the use of singular value decomposition (SVD), and it comes with a theoretical guarantee of recovering the underlying model given enough data from the model. In addition, we find that our algorithm can be much more scalable than other methods in practice. For example, it can be up to 10x faster than the Brown clustering algorithm in wall-clock time while delivering competitive lexical representations.
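For those who haven’t seen spectral word representations before, here is a generic sketch (a standard PPMI-weighted co-occurrence matrix factored by SVD, not the specific algorithm of the talk) showing how word vectors can fall out of a factorization of co-occurrence statistics.

    # Generic spectral word vectors: PPMI co-occurrence matrix + truncated SVD
    # (illustrative toy example, not the algorithm presented in the talk)
    import numpy as np

    corpus = [
        "the cat sat on the mat".split(),
        "the dog sat on the rug".split(),
        "a cat and a dog played".split(),
    ]
    vocab = sorted({w for sent in corpus for w in sent})
    idx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)

    # Symmetric co-occurrence counts within a +/-2-word window
    C = np.zeros((V, V))
    window = 2
    for sent in corpus:
        for i, w in enumerate(sent):
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if j != i:
                    C[idx[w], idx[sent[j]]] += 1

    # Positive pointwise mutual information weighting (a common preprocessing step)
    total = C.sum()
    p_w = C.sum(axis=1, keepdims=True) / total
    p_c = C.sum(axis=0, keepdims=True) / total
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((C / total) / (p_w * p_c))
    ppmi = np.where(np.isfinite(pmi), np.maximum(pmi, 0.0), 0.0)

    # Rank-d truncated SVD yields d-dimensional word vectors
    d = 5
    U, s, _ = np.linalg.svd(ppmi)
    word_vectors = U[:, :d] * np.sqrt(s[:d])

    cat, dog = word_vectors[idx["cat"]], word_vectors[idx["dog"]]
    sim = cat @ dog / (np.linalg.norm(cat) * np.linalg.norm(dog))
    print("cosine(cat, dog) =", round(float(sim), 3))

On a real corpus the co-occurrence matrix is large and sparse, which is where scalability (and the wall-clock comparisons mentioned above) becomes the interesting question.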

Bio:

Karl Stratos is a PhD student at Columbia University, advised by Michael Collins. He is broadly interested in machine learning techniques and their applications in natural language processing. His recent focus has been on spectral algorithms, representation learning, and structured prediction. One of his research aims is to develop practical approaches to leveraging enormous amounts of unlabeled data.

Brain Wars and “Integrative Cognitive Science”

Dear all,

I’m working on a new project called “Brain Wars” http://websites.umass.edu/brain-wars/. This is my working title for a book directed at the general public about the debates over “Associative” vs. “Algebraic” models of cognition, and related battles. I’ll likely be starting with Minsky vs. Rosenblatt, a precursor to the battles that occurred after the publication of Rumelhart and McClelland 1986. I’m also planning to advance the thesis that these paradigm wars have largely subsided in the 21st century, with most people assuming some sort of hybrid theory or approach (though I’m of course aware that there is still ink being spilled over these and other issues); I’m labeling this “Integrative Cognitive Science”. I’ve set up a website at the above link to store references, citations, and archival material. There’s not much there yet, though there are audio recordings of Smolensky’s 1988 debates with Fodor and Pylyshyn (under “Debates”), and I’ve also got a link to a video of Adrian Staub and me speaking in Paris at a roundtable on “Empirical Foundations of Linguistics” (under “Peace in Our Time? Integrative Cognitive Science”). Incidentally, Adrian’s talk has a nice bit of publicity for CogSci at UMass, mentioning the long history of productive collaboration between Linguistics and Cognitive Psychology. I would very much welcome your thoughts or contributions.

Best,

Joe (Pater).

pater@linguist.umass.edu

Simon Kirby April 22nd: Save the date and time poll

This is a reminder that Simon Kirby of Edinburgh will be visiting us on Friday, April 22nd. There are two possibilities for the time. One is to hold the talk at 12:30 – 2, which is the Computational Social Science Seminar time, and we’d be able to take advantage of their generous offer to host the talk in the CS space and provide catering. This would also help with our ongoing efforts to build connections with our colleagues in CSSI (a number of us have spoken in their seminar series, and one of our workshop speakers, Brendan O’Connor, is an active member of both groups). The other is to hold the talk later, which may turn out to be more compatible with people’s teaching schedules. There is no Linguistics colloquium that day, so that time (3:30) may turn out to be particularly convenient for many. Can those of you who are interested in hearing this talk please indicate your preferred time in this poll?

RSVP: 2nd annual UMass CogSci Workshop Fri. Jan. 29th

We will be holding the 2nd annual UMass Cognitive Science Workshop on Friday, January 29th, from 2 – 5 pm. Our speakers will be Gaja Jarosz of Linguistics, Brendan O’Connor of Computer Science, and one more to be confirmed. The talks will start at 2:30, and we’ll have time before and after for informal discussion and posters. A more precise schedule and location will be announced soon. The location will depend on how many posters and attendees we have, so could you please fill out this quick RSVP by noon this Monday the 18th? There is a space to indicate your poster title and author list. A reminder: posters that have been presented at other conferences are more than welcome (this is the usual case), and they are a great way of sharing your work with other people in the cognitive science community.