Categories
Uncategorized

Linguists at Data Science Tea 4 pm Tues. 4/26

What: tea, refreshments, presentations and conversations about topics in data science
Where: Computer Science Building Rooms 150, 151
When: 4-5:30pm Tuesday April 26
Who: You! Especially PhD & MS students, and faculty interested in data science.

Professor Kristine Yu – The learnability of tones from the speech signal

Many of the world’s languages are tone languages, meaning that a change in pitch (how high or how low the voice is) causes a change in word meaning. In Mandarin, for example, “ma” uttered with a rising pitch, like an English question (Did you go to class today?), means “hemp”, while “ma” uttered with a falling pitch, like an English declarative (Yes!), means “to scold”. This talk discusses initial steps in using machine learning to find the best way to parametrize tones in an acoustic space, in order to set up the learning problem for studying how tone categories could be learned. I look forward to your comments and suggestions!
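
To make the learning setup concrete, here is a rough, hypothetical sketch (not the method from the talk) of one way the problem could be posed: represent each tone as its pitch (f0) contour sampled at fixed points, and ask whether an unsupervised learner can recover the categories. The synthetic data, the whole-contour parametrization, and the use of k-means are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_contour(kind, n=20):
    """Toy f0 contour (arbitrary units) for one hypothetical tone category."""
    t = np.linspace(0, 1, n)
    if kind == "rising":
        base = 4 * t            # pitch rises, as in Mandarin tone 2
    elif kind == "falling":
        base = 4 * (1 - t)      # pitch falls, as in Mandarin tone 4
    else:                       # "level"
        base = np.full(n, 2.0)
    return base + rng.normal(0, 0.3, n)   # speaker/recording noise

# Unlabeled corpus: the learner never sees the category names.
kinds = ("rising", "falling", "level") * 50
corpus = np.array([synth_contour(k) for k in kinds])

def kmeans(X, k, iters=50):
    """Minimal k-means over whole contours, with farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):      # pick starting points far from existing centers
        d = np.min([((X - c) ** 2).sum(-1) for c in centers], axis=0)
        centers.append(X[int(d.argmax())])
    centers = np.array(centers)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels

labels = kmeans(corpus, k=3)
```

With this toy data the three categories separate cleanly; the open question the talk raises is which parametrization of the acoustic space makes real tones separable in this way.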

Professor Gaja Jarosz – Computational Models of Language Development: Nature vs. Nurture

Recent work on phonological learning has used computational modeling to investigate the role of universal biases in language development. In this talk I review the latest findings and controversies regarding the status of a particular language universal, the Sonority Sequencing Principle, traditionally argued to constrain the sound structure of all human languages. I argue that explicit computational and statistical models of the language development process, when tested across languages (English, Mandarin, Korean, and Polish), allow us to disentangle the often correlated predictions of competing hypotheses, and they suggest a crucial role for this universal principle in language learning.

Professor Brian Dillon – Serial vs. parallel structure-building in syntactic comprehension

In this talk I give a brief overview of theories of human syntactic comprehension. An important theoretical question in this area is whether the human sentence processor creates and maintains a single syntactic description of a sentence, or whether it instead maintains multiple, parallel parses of the input. This question is of wide-ranging importance for theories of human syntactic processing, but the empirical data that distinguish serial from parallel parsing behavior are unclear at best (Gibson & Pearlmutter, 2000; Lewis, 2000). In this talk I reexamine this question and present in-progress work with Matt Wagers (Linguistics, UC Santa Cruz) that uses tools from mathematical psychology (Signal Detection Theory) to derive novel empirical predictions distinguishing serial from parallel processing, a first step on the road to reevaluating this old, but perpetually important, theoretical question.

Speaker Bios

Kristine Yu received a B.S. in Chemistry from Stanford University and her M.A. and Ph.D. in linguistics from the University of California, Los Angeles. Her research focuses on tone and intonation, from the speech signal on up to grammar and human language processing.

Gaja Jarosz is an associate professor in the UMass Linguistics department, as of this academic year. She works in the areas of phonological theory, language development, and computational linguistics. Prior to joining the UMass Linguistics Department, she was an assistant (and then associate) professor in the Linguistics Department at Yale University (2007-2015). She received her PhD in Cognitive Science at the Johns Hopkins University in 2006.

Brian Dillon is an assistant professor in Linguistics at the University of Massachusetts, Amherst. He runs the Cognitive Science of Language lab at UMass Amherst, which focuses on psycholinguistics, the study of how children and adults acquire and understand natural language. Prior to coming to UMass, he studied at the University of Maryland’s CNL lab with Professor Colin Phillips and Professor William Idsardi. Before that, he worked with Professor Robert Van Valin on the morphosyntax of Modern Irish.

 


New grant proposal travel grants available

Meeting with a funding agency program officer can be a useful part of the grant preparation process. To facilitate these meetings, the Initiative in Cognitive Science offers grants of up to $400 to faculty wishing to travel to meet with a program officer in order to discuss a grant proposal in cognitive science. These grants are subject to the condition that matching funds are provided by the faculty member’s department or college/school. For more information, contact one of the Initiative co-directors, Joe Pater or Lisa Sanders.


Pesetsky in Linguistics Friday April 15th at 3:30

David Pesetsky (MIT) will give the Linguistics department colloquium on Friday, April 15, at 3:30 in ILC N400. The title of his talk is “Exfoliation: towards a derivational theory of clause size.” An abstract follows.

We too easily become used to facts about language that should strike us as strange. One of these is the menagerie of clause-types and clause-sizes in the world’s languages categorized with ill-understood labels such as finite, non-finite, full, reduced, defective, and worse. For almost a half-century, the standard approach to these distinctions has treated them as a consequence of lexical choice — a legacy of arguments by Kiparsky & Kiparsky (1970) and Bresnan (1972), who showed (1) that verbs that select a clausal complement select for the complementizer and finiteness of that complement, and (2) that finiteness and complementizer choice have semantic implications. In an early-1970s model of grammar in which selection and semantic interpretation were properties of Deep Structure, these discoveries directly entailed the lexicalist view of clause type that is still the standard view today. So compelling was this argument at the time that its 1960s predecessor (Rosenbaum 1967) was all but forgotten — the idea that these distinctions arise derivationally, as the by-product of processes such as Raising. As a consequence, it has gone unnoticed that in a modern model of grammar, where structure is built by Merge (and both selection and semantic interpretation are interspersed with syntactic operations), the arguments against the derivational theory no longer go through.

In this talk, I present a series of arguments for a modernized return to a derivational theory. I argue that a reduced clause is the response to a specific situation: a clause-external probe that has located a goal such as the subject in the upper phase of its CP-complement, when that goal does not occupy the edge of its CP. Since anti-locality prevents that goal from moving to the clausal edge (Erlewine 2015 and predecessors), a last-resort operation called Exfoliation deletes outer layers of the clause until the goal occupies the edge without movement. If the goal was a subject occupying a low enough position, the result is an infinitive. If the goal occupied a higher position, the result is a finite clause missing its complementizer. My starting point is the paradigm in (a)-(d). Because a standard approach assumes that every infinitive is born infinitival, the contrast between (a) and (b) is usually treated as a puzzle of case theory: why does moving the subject in (b) eliminate its case problem visible in (a)? The derivational approach invites an entirely different question: why should the embedded clause in (a) be infinitival in the first place? Since no probe targets the embedded subject in (a), Exfoliation should not have taken place, and the clause should have remained finite (I assure you that Mary is the best candidate). Only in (b), where an Ā-probe has targeted the embedded subject, is Exfoliation justified, hence the possibility of an infinitive. Example (c) also shows Exfoliation, deleting only the complementizer because the subject is higher than in (b), and (d) is impossible because no Exfoliation took place — thus explaining the that-trace effect as part of the same paradigm.

a. *I assure you Mary to be the best candidate.

b. Mary, who I assure you __ to be the best candidate. (Kayne 1983)

c. Mary, who I assure you __ is the best candidate.

d. *Mary, who I assure you that __ is the best candidate.

Similar effects with A-movement arise in the behavior of English wager-class predicates and raising in Lusaamia (Carstens & Diercks 2014), as well as with other Ā-phenomena such as anti-agreement (Baier 2015). Finally, I provide an independent argument for the last-resort nature of Exfoliation from Zulu Hyper-Raising, based on a simplified version of a proposal by Halpert (2015).


Simon Kirby events next Friday April 22

We will meet with Simon Kirby of the University of Edinburgh on April 22 from 10-11 am in Integrative Learning Center N400 to discuss his 2015 Cognition paper “Compression and communication in the cultural evolution of linguistic structure”. There will also be a prior meeting to prepare for that discussion on Wednesday April 20th from 11-12 in the same location. All are welcome to attend either or both of these meetings. If you would like to meet with Simon at some other time during his visit, please e-mail Joe Pater.

As previously announced, his talk “The Evolution of Linguistic Structure: where learning, culture and biology meet”, jointly sponsored by the Initiative in Cognitive Science and the 5 Colleges Cognitive Science Seminar, will take place at 3:30 in ILC N101. The abstract is below.

Abstract. Language is striking in its systematic structure at all levels of description. By exhibiting combinatoriality and compositionality, each utterance in a language does not stand alone, but rather exhibits a network of dependencies on the other utterances in that language. Where does this structure come from? Why is language systematic, and where else might we expect to find this kind of systematicity in nature? In this talk, I will propose a simple hypothesis that systematic structure is the inevitable result of a suite of behaviours being transmitted by iterated learning. Iterated learning is a mechanism of cultural evolution in which behaviours persist by being learned through observation of that behaviour in another individual who acquired it in the same way. I will survey a wide range of lab studies of iterated learning, in which the cultural evolution of sets of behaviours is experimentally recreated. These studies include everything from artificial language learning tasks and sign language experiments, to more abstract behaviours like sequence learning, and have recently even been extended to other species. I will conclude by suggesting that these cultural evolution experiments provide clear predictions about where we should expect to see structure in behaviour, and what form that structure might take.
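
Kirby’s experiments involve human participants, but the core mechanism the abstract describes (behaviours persisting by being learned from observation of someone who acquired them the same way) can be caricatured in a few lines of code. Everything below is an invented toy, not the model from the paper: a “language” maps nine meanings (colour, shape pairs) to two-syllable signals; each generation observes only six meaning-signal pairs (the transmission bottleneck) and generalizes to the rest by majority vote over prefixes and suffixes.

```python
import random
from collections import Counter

random.seed(1)
COLORS, SHAPES = range(3), range(3)
MEANINGS = [(c, s) for c in COLORS for s in SHAPES]
SYLLS = ["ba", "du", "ki", "mo"]   # building blocks for 2-syllable signals

def random_language():
    """Generation zero: a holistic language with no internal structure."""
    return {m: random.choice(SYLLS) + random.choice(SYLLS) for m in MEANINGS}

def learn_and_produce(observed):
    """One learner: memorize what was seen, generalize the rest by
    reusing the majority prefix for the colour and suffix for the shape."""
    lang = dict(observed)
    for m in MEANINGS:
        if m in lang:
            continue
        c, s = m
        prefixes = [sig[:2] for (oc, _), sig in observed.items() if oc == c]
        suffixes = [sig[2:] for (_, os), sig in observed.items() if os == s]
        prefix = Counter(prefixes).most_common(1)[0][0] if prefixes else random.choice(SYLLS)
        suffix = Counter(suffixes).most_common(1)[0][0] if suffixes else random.choice(SYLLS)
        lang[m] = prefix + suffix
    return lang

def systematicity(lang):
    """0 (holistic) to 6 (fully compositional): colours sharing one
    prefix plus shapes sharing one suffix."""
    score = sum(len({lang[(c, s)][:2] for s in SHAPES}) == 1 for c in COLORS)
    score += sum(len({lang[(c, s)][2:] for c in COLORS}) == 1 for s in SHAPES)
    return score

language = random_language()
for generation in range(30):                 # iterated learning chain
    shown = dict(random.sample(sorted(language.items()), 6))   # bottleneck
    language = learn_and_produce(shown)
```

Run for a few dozen generations, the random holistic language tends to drift toward a compositional one, with a fixed prefix per colour and suffix per shape, because only generalizable signals survive the bottleneck.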

Zinkov in MLFL noon Thursday 4/7

what: Composing Inference Methods For Probabilistic Models

who: Rob Zinkov, Indiana University

when: 12:00pm Thursday, 4/7

where: cs151
pizza: Antonio’s
generous sponsor: Yahoo!

Abstract:

Probabilistic inference procedures are usually coded painstakingly from scratch, for each target model and each inference algorithm. In this talk, I will show how inference procedures can be posed as program transformations which transform one probabilistic model into another probabilistic model. These transformations allow us to generate programs which express exact and approximate inference, and allow us to compose multiple inference procedures over a single model. The resulting inference procedure runs in time comparable to a handwritten procedure.
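
The abstract’s key idea, that an inference procedure can be a transformation from one probabilistic model to another, can be illustrated with a deliberately tiny sketch. This is not Zinkov’s system (real implementations transform programs, not enumerated tables): here a discrete model is plain data, and conditioning and marginalization are composable functions from models to models.

```python
from itertools import product

# A discrete "model" is just data: a list of (probability, assignment) pairs.
def joint():
    """Toy model: P(rain) and P(wet | rain), enumerated exhaustively."""
    dist = []
    for rain, wet in product([True, False], repeat=2):
        p_rain = 0.2 if rain else 0.8
        p_wet = (0.9 if wet else 0.1) if rain else (0.3 if wet else 0.7)
        dist.append((p_rain * p_wet, {"rain": rain, "wet": wet}))
    return dist

def condition(model, var, value):
    """Transformation: observing var == value yields a new, normalized model."""
    kept = [(p, a) for p, a in model if a[var] == value]
    z = sum(p for p, _ in kept)
    return [(p / z, a) for p, a in kept]

def marginalize(model, var):
    """Transformation: sum out var, yielding a model over the remaining vars."""
    acc = {}
    for p, a in model:
        key = tuple(sorted((k, v) for k, v in a.items() if k != var))
        acc[key] = acc.get(key, 0.0) + p
    return [(p, dict(key)) for key, p in acc.items()]

# Compose transformations: P(rain | wet) by conditioning, then marginalizing.
posterior = marginalize(condition(joint(), "wet", True), "wet")
p_rain_given_wet = next(p for p, a in posterior if a["rain"])
```

Because each transformation returns an ordinary model, they compose freely, which is the point the talk generalizes to approximate inference as well.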

Bio:

Rob Zinkov is a Research Scientist at Indiana University working with Chung-chieh Shan on probabilistic programming and Bayesian inference.


CSSI/ISSR “Examining the Replication Crisis” 12:30 Friday 4/8

Friday, April 8, 2016 – 12:30pm to 2:00pm
107 Bartlett Hall

The ability to replicate research findings is an essential component of the scientific process. However, the scientific process itself has come under considerable scrutiny due to recent evidence that the results of many studies and experiments are difficult, if not impossible, to replicate.[1] Many trace the roots of the problem to an academic merit system that rewards original research and positive results with publication, while discounting studies that involve replication or show insignificant findings. Others argue that conflicting and contradictory findings are a natural part of the process of discovery, in which knowledge does not hinge on a single experiment but rather evolves over the course of multiple studies. The inability to replicate may simply show that context matters – results found in one setting may very well differ in another.

This panel discussion, featuring methodologists and researchers from across the social and computational sciences, will explore the causes, controversies, and consequences of the “Replication Debate”. We will also explore the pros and cons of several proposed remedies, such as requiring more stringent statistical tests, abandoning threshold p-values, requiring larger samples, and requiring publishing scholars to provide open access to their datasets and code. The event is hosted by ISSR and the Computational Social Science Institute.
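
One mechanism behind low replication rates, selecting studies for publication by a significance threshold, is easy to caricature in a quick simulation. The numbers below (10% true hypotheses, effect size 0.3, n = 30 per study) are invented for illustration only; with them, a literature built from “significant” results replicates well under half the time.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STUDIES = 20_000   # hypothetical literature
N = 30               # sample size per study
EFFECT = 0.3         # true effect size when an effect exists
P_REAL = 0.1         # fraction of tested hypotheses that are true

def run_study(real):
    """One study: z-test of a sample mean against 0 (known sigma = 1)."""
    mu = EFFECT if real else 0.0
    sample = rng.normal(mu, 1.0, N)
    z = sample.mean() * np.sqrt(N)
    return abs(z) > 1.96          # "significant at p < .05"

real = rng.random(N_STUDIES) < P_REAL
published = np.array([run_study(r) for r in real])    # only positives published
replicated = np.array([run_study(r) if pub else False
                       for r, pub in zip(real, published)])

replication_rate = replicated[published].mean()
```

The low rate arises because the published set mixes underpowered true effects with false positives that, on a rerun, mostly fail to reach the threshold again.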

Featured Speakers

  • Emery Berger (Information and Computer Science)
  • Thomas Herndon (Economics)
  • David Jensen (Information and Computer Science)
  • Caren Rotello (Psychology)
  • Adrian Staub (Psychology)

[1] Some recent examples include:

Chang and Li (2015). “Is Economics Research Replicable? Sixty Published Papers from Thirteen Journals Say ‘Usually Not’.” Finance and Economics Discussion Series 2015-083. Washington: Board of Governors of the Federal Reserve System.

Herndon, Ash, and Pollin (2013). “Does high public debt consistently stifle economic growth? A critique of Reinhart and Rogoff.” Cambridge Journal of Economics.

Open Science Collaboration (2015). “Estimating the reproducibility of psychological science.” Science 349(6251).


Davidson in Linguistics 3:30 Friday 4/8

Kathryn Davidson (Harvard) will give a Linguistics colloquium on Friday, April 8 at 3:30 in ILC N400. A title and abstract of her talk follow.

Title:
Combining imagistic and discrete components in a single proposition: The case of sign language classifier predicates

Abstract:
Classifier predicates in sign languages (also known as “depicting verbs”) have both discrete and imagistic components: they participate fully in the grammar as verbs and involve categorical handshapes that agree with the subject, but also have an obligatory “gestural” component that psycholinguistic experiments have shown are interpreted in an analog and iconic way. Understanding how to treat these verbs in a formal semantic system is therefore a challenge. In this talk I will draw parallels with work on quotation and attitude reports to introduce an analysis of classifier predicates involving the notion of an iconic “demonstration of events”. I will also present corpus data from bimodal (sign/speech) bilingual blended utterances that sheds light on the syntax/semantics of classifier predicates. Finally, I will discuss extensions of this analysis of classifier predicates to formal semantic analyses of gesture.


Mai in Cognitive Brown Bag Weds. 4/6

Qui Mai will be presenting in the Cognitive Brown Bag series today Weds. April 6th from 12-1:20, in Tobin 521B. Next week’s speaker is Ben Zobel, who will present “Spatial release from informational masking by the precedence effect”. All are welcome!


Data science research symposium April 22

This e-mail comes from Marla Michel (marla@umass.edu), the director of strategic programs at the newly formed UMass Center for Data Science, which we expect to create considerable synergy with our own nascent Institute. Note the registration deadline of April 13th. The event takes place on the same date as our own talk from Simon Kirby, but the presentations are in the morning, so they don’t conflict with Kirby’s 3:30 talk.
This is a second notice regarding the upcoming DATA SCIENCE RESEARCH SYMPOSIUM on Friday, April 22, 8:30 – 5:00, hosted by the UMass Center for Data Science in the Computer Science Building. Please share this with your centers, institutes, and colleagues.
Please register by April 13 to help us with our planning. There is no registration fee.
The theme for the day will be Collaborating for Impact.  Here is a sampling of the partner pairs who will speak about their data science challenges and collaborative research approaches:
  • Improving Manufacturing Supply Chains: Colleagues from Pratt & Whitney will share their firm’s foray into using data science, specifically complex probabilistic models, to enhance their supply chain management. CICS Professor David Jensen will co-present with PW’s Jeremy Summers or Jay Goya.
  • Big Data in Municipalities: The public sector, as represented by municipalities, is incredibly challenged by the large quantities of data it is dealing with. Professor Henry Renski, Landscape Architecture and Regional Planning, will share some of his insights regarding opportunities for impact alongside a partner from one of the state’s largest cities. We’re still awaiting final confirmation on the speaker.
  • Better Infrastructure Decision Making: Reflecting on a current collaboration, Massachusetts Department of Transportation’s Katie McArthur and CICS Professor Dan Sheldon will present on how data science is helping the agency make better decisions. This work also involves colleagues from CNS and COE.
  • Career Pathing, Laddering and Education: There’s a real opportunity for data science to be used to help advise and predict career paths and ladders for workforce planners and people in the data science field. CICS Professor and Center Director Andrew McCallum will co-present with Burning Glass chief scientist, Dan Restuccia, on the exciting potential of what happens when you marry natural language processing research with thousands of resumes and position descriptions.
Other topics include health IT, security and privacy, reinforcement learning, and natural language understanding. Companies participating include Oracle, Microsoft Research, Amazon, MITRE, McKesson, MassMutual, State Street, and more.
For a complete list of pairings and current list of breakout sessions, please visit the symposium’s webpage.
NOTE: Graduate students wanting to attend must commit to preparing and sharing a poster on their research when registering here.
Questions to me.  Thank you!
Marla

Simon Kirby CogSci talk Friday April 22nd, 3:30

Simon Kirby of the University of Edinburgh will be giving a talk “The Evolution of Linguistic Structure: where learning, culture and biology meet” jointly sponsored by the Initiative in Cognitive Science and the 5 Colleges Cognitive Science Seminar on Friday April 22nd at 3:30 in ILC N101. The abstract is below, as is a small .pdf version of the poster suitable for e-mailing. All help publicizing this event is greatly appreciated. Physical posters will begin appearing around campus shortly – e-mail cogsci@umass.edu if you haven’t seen one by April 15th and have a suggested location.

Abstract. Language is striking in its systematic structure at all levels of description. By exhibiting combinatoriality and compositionality, each utterance in a language does not stand alone, but rather exhibits a network of dependencies on the other utterances in that language. Where does this structure come from? Why is language systematic, and where else might we expect to find this kind of systematicity in nature? In this talk, I will propose a simple hypothesis that systematic structure is the inevitable result of a suite of behaviours being transmitted by iterated learning. Iterated learning is a mechanism of cultural evolution in which behaviours persist by being learned through observation of that behaviour in another individual who acquired it in the same way. I will survey a wide range of lab studies of iterated learning, in which the cultural evolution of sets of behaviours is experimentally recreated. These studies include everything from artificial language learning tasks and sign language experiments, to more abstract behaviours like sequence learning, and have recently even been extended to other species. I will conclude by suggesting that these cultural evolution experiments provide clear predictions about where we should expect to see structure in behaviour, and what form that structure might take.