From Cori Bargmann’s autobiography
Source: The Rockefeller University
“Human biology, especially human neurobiology, is very complex, and our view of the human brain is fragmentary. However, the genomes of humans and worms share more genes than any of us expected, including most classes of genes that are important in the nervous system. (The complexity of the human nervous system comes from regulating the genes in different ways, and from deploying them in vastly larger numbers of neurons.) The basic functions of those genes are similar in all animals, so if we view one goal of biology as building a “dictionary” containing the meaning of each gene, we can assemble definitions in that dictionary from any animal, with a good chance that the definitions and grammar will apply across all animals and humans. Those of us who study worms hope to meet those who study human brains in the middle, using the universality of biology to translate understanding across organisms.”
Cori Bargmann’s 2013 Breakthrough Prize talk: Using fixed circuits to generate flexible behavior.
I am intrigued by the notion of compositionality displayed by the ‘grammar of genes’. A particular gene invariably makes the same contribution in every animal that has it, but this invariable contribution is altered through predictable contextual interactions so that the same set of genes can lead to very different outcomes. The issue is relevant to the old debate about meaning composition for conditionals. In my paper for the Edgington volume, for example, I showed that embedded conditionals interact with surrounding quantifiers in ways that are not completely ‘algorithmic’. Does this mean that we should just give up on the idea of a compositional semantics for conditionals? Or should we rethink our ideas about compositionality in natural language semantics? Non-compositionality is a fact of life for content words (cat, blue, sing …), which are part of the non-logical vocabulary of natural languages. Nouns, adjectives, and verbs can change their meanings in seemingly unpredictable ways, depending on the linguistic and non-linguistic environments they find themselves in. But the semantic contribution of function words (if, and, every, …), which are part of the logical vocabulary of natural languages, seems to be invariant and resistant to uncontrolled contextual interference. Context seems to be able to affect the interpretation of function words only through certain grammatically determined ‘gates’ or ‘channels’, like those responsible for domain restrictions.
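The contrast between invariant function words and context-dependent content words can be made concrete with a toy sketch (my illustration, with made-up denotations, not the author’s): ‘every’ denotes one fixed relation between sets, while the content-word denotations that feed it shift with context.

```python
# Minimal sketch of the compositional picture: the function word "every"
# contributes one fixed relation between sets, no matter which
# context-dependent content-word denotations it combines with.

def every(restrictor, scope):
    """Generalized-quantifier meaning of 'every': true iff every
    individual in the restrictor set is also in the scope set."""
    return restrictor <= scope  # subset relation on sets of individuals

# Hypothetical context-dependent denotations for content words:
cats = {"felix", "tom"}
singers = {"felix", "tom", "maria"}

assert every(cats, singers) is True   # "Every cat sings"
assert every(singers, cats) is False  # "Every singer is a cat"
```

However the denotations of cat or sing shift, the contribution of ‘every’ itself stays fixed; context can only feed it different arguments, the ‘gates’ of the main text.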
Connections: Oxford Handbook of Compositionality.
Connections: NIH BRAIN Working Group.
Semantics, the investigation of linguistic meaning, draws on the traditions, methodologies, and superegos of the humanities and the social and natural sciences. As semanticists, we work in libraries and in the field, in armchairs and in labs, with grammars, corpora, consultants, and experimental data. We use logical and statistical methods. We are in a good place to connect the humanities and the social and natural sciences. We should do more to build those bridges.
Source: Hannah Arendt Center
The humanities are in a crisis. Enrollments in humanities classes have dropped dramatically in recent years. Here is a commentary by Deborah K. Fitzgerald, the Dean of MIT’s School of Humanities, Arts, and Social Sciences.
“As educators, we know we cannot anticipate all the forms our students’ future challenges will take, but we can provide them with some fundamentals that will be guides for the ongoing process of exploration and discovery. We can help shape their resilience, and prepare them to analyze and problem-solve in both familiar and unfamiliar situations. Calling on both STEM and humanities disciplines — as mutually informing modes of knowledge — we aim to give students a toolbox brimming over with mental and experiential levers to support them throughout their careers and lives.”
Inspiration for how to design imaginative interdisciplinary undergraduate classes bridging the sciences and the humanities might come from the new magazine Nautilus, ‘a New Yorker version of Scientific American’: “Each month we choose a single topic. And each Thursday we publish a new chapter on that topic online. Each issue combines the sciences, culture and philosophy into a single story told by the world’s leading thinkers and writers.” The magazine and its website have essays, investigative reports, blogs, fiction, games, videos, and graphic stories. Another source of inspiration could be the Mapping Ignorance initiative of the Chair of Scientific Culture of the University of the Basque Country.
From Kurzweil Accelerating Intelligence on Patrick Tucker’s The Naked Future: “Computer scientist Stephen Wolfram and futurist Ray Kurzweil have famously painstakingly recorded every minute detail of their lives, from their diets to their keystrokes, in order to quantify and better their lives. Now, technology has made self-quantification easier than ever, allowing the “everyman” to record and study their habits just as Wolfram and Kurzweil have done, but with less hassle… So what happens in a future that anticipates your every move? The machines may have a better handle on us than ever, but we’ll live better as a result. The naked future is upon us, and the implications for how we live and work are staggering.”
Source: The Modern Word. Borges.
“In all fictional works, each time a man is confronted with several alternatives, he chooses one and eliminates the others; in the fiction of Ts’ui Pên, he chooses— simultaneously—all of them. He creates, in this way, diverse futures, diverse times which themselves also proliferate and fork.” The Garden of Forking Paths by Jorge Luis Borges.
From the Stanford Encyclopedia: Branching time semantics: “As an explicit (or formalised) idea, branching time was first suggested to Prior in a letter from Saul Kripke in September 1958. This letter contains an initial version of the idea and a system of branching time, although it was not worked out in details.”
More on branching time semantics: Around the Tree: Semantic and Metaphysical Issues Concerning Branching and the Open Future.
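As a rough illustration of what a branching-time model looks like (my sketch, with invented moment names, not taken from the SEP entry or the volume above): moments form a tree, a history is a maximal path through it, and the future operator is evaluated at a moment relative to a chosen history, so ‘it will rain’ can be true on one forking future and false on another.

```python
# Toy branching-time model: a tree of moments, histories as maximal
# paths, and an Ockhamist future operator F(p) evaluated at a moment
# relative to a history.

tree = {            # moment -> its immediate successor moments
    "m0": ["m1", "m2"],
    "m1": ["m3"],
    "m2": [],
    "m3": [],
}
rain = {"m3"}       # moments at which "it rains" is true

def histories(m):
    """All maximal paths through the tree starting at moment m."""
    if not tree[m]:
        return [[m]]
    return [[m] + h for s in tree[m] for h in histories(s)]

def future(p, m, h):
    """Ockhamist F(p): true at moment m on history h iff some
    moment later than m on h is a p-moment."""
    later = h[h.index(m) + 1:]
    return any(x in p for x in later)

h1, h2 = histories("m0")   # the two forking futures from m0
assert future(rain, "m0", h1) != future(rain, "m0", h2)
```

The final assertion is the Borgesian point: at the branching moment m0, ‘it will rain’ comes out true relative to one history and false relative to the other.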
Sula bird (© Lynn Weinert)
I just came back from SULA 8. There is no other conference where long-held beliefs about semantics get challenged in just about every talk. This is the conference where you see where our field is moving. Semantics of Under-Represented Languages in the Americas (SULA) 8 was held at the University of British Columbia this year. The conference was founded in 2001 to bring together researchers working on (spoken or signed) languages or dialects of the Americas which do not have an established tradition of formal semantic work. It solicits work that involves primary fieldwork or experimentation as well as formal analysis. SULA has several features that make it unique. There is always a session with members of the communities whose languages are being investigated. This is why SULA is usually held in the Americas. At SULA 1, one of the community representatives was Jessie Little Doe Baird from the Wôpanâak Language Reclamation Project. This year’s representative was Peter Jacobs of the Skwxwu7mesh Nation. There are always graduate students among the invited speakers of SULA. There are also invited commentators (like me) who are not themselves fieldworkers, and there often are invited speakers who are not primarily semanticists: SULA 1 had Ken Hale as one of the invited speakers, for example, and SULA 8 featured Keren Rice.
Here is the website for SULA 8, and here are links to the programs and proceedings of earlier SULAs, along with the upcoming SULA 9: SULA 1 at the University of Massachusetts at Amherst (2001). SULA 2 at the University of British Columbia (2003). SULA 3 at the University at Buffalo (2005). SULA 4 at the Universidade de São Paulo (2007). SULA 5 at Harvard/MIT (2009). SULA 6 at the University of Manchester (2011). SULA 7 at Cornell University (2012). SULA 9 at UC Santa Cruz (2016).
There have been a couple of descendants of SULA: SULA-bar at the University of Manchester (2011). TripleA 1 (The Semantics of African, Asian, and Austronesian Languages), Tübingen (2014).
Suzi Oliveira de Lima: The grammar of individuation and counting. 2014 UMass dissertation.
Suzi Oliveira de Lima: From personal website
Are there languages that do not draw a grammatical distinction between count nouns and mass nouns? Some scholars have said there aren’t. Others have claimed that there are languages where all non-referential nouns are mass nouns. Henry Davis and Lisa Matthewson (1999) argued that in the Salish language St’át’imcets, all non-referential nouns are count nouns. Suzi Lima has been investigating another language with this property: the Tupi language Yudja (Juruna family). Lima’s dissertation is a game changer in fieldwork methodology: her findings do not rely just on the by-now-standard elicitation tasks of semantic fieldwork, but draw on a wider range of experimental techniques, including quantity-judgment tasks and production and comprehension studies with children and adults.
Related work from SULA 8: Andrea Wilhelm made a case that in Dëne Sųłiné (Chipewyan) all nouns are referential. Nouns denote either individuals or kinds; they do not have predicative denotations at all. Amy Rose Deal suggested that in Nez Perce (Niimiipuutímt, Sahaptian), all notional mass nouns can have both count and mass denotations. What is emerging from this cross-linguistic work, then, is that languages have options for construing noun denotations. The possible options seem to be: reference to individuals, reference to kinds, singular, plural, or number-neutral atomic properties, and non-atomic properties. Whatever option a language chooses has repercussions. A language that has no mass nouns should not have measure phrases, for example, and this is so for Yudja, as Lima shows. A language where all nouns are referential should not have intersective adjectives or restrictive relative clauses, and this is so for Dëne Sųłiné, as Wilhelm shows.
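To make one of these denotational options concrete, here is a toy sketch (my illustration, with invented individuals, not from the talks): a count noun can denote a set of atoms, and a number-neutral denotation can be modeled as the closure of that set under sum, so that a counting expression like ‘three cats’ picks out the sums with exactly three atomic parts.

```python
# Toy model of a number-neutral atomic noun denotation, with sums of
# individuals represented as frozensets of their atomic parts.
from itertools import combinations

atoms = {"a", "b", "c"}          # three atomic individuals (cats)

def plural_closure(atomset):
    """Number-neutral denotation: all non-empty sums of atoms."""
    return {frozenset(c)
            for n in range(1, len(atomset) + 1)
            for c in combinations(atomset, n)}

cat = plural_closure(atoms)      # singular and plural cat-sums alike
three_cats = {x for x in cat if len(x) == 3}   # count via atomic parts
assert three_cats == {frozenset({"a", "b", "c"})}
```

The point of the sketch is that counting is only defined because the denotation is built from designated atoms; a non-atomic (mass-style) denotation would supply no such parts to count, which is why the choice of option has the grammatical repercussions noted above.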
Most recent work on the count-mass distinction in natural languages responds in one way or another to Gennaro Chierchia’s influential Reference to Kinds Across Languages, which is one of the most downloaded papers of Natural Language Semantics. With over 1400 citations, it is also one of the most cited papers in semantics.
Here is an essay on this question by Barbara Partee.
Barbara Partee giving the Whatmough Lecture
“There have been centuries of study of logic and of language. Some philosophers and logicians have argued that natural language is logically deficient, or even that “natural language has no logic.” And before the birth of formal semantics in the late 1960’s, most linguists and philosophers were agreed that there was a considerable mismatch between the syntactic structure of natural language sentences and their “logical form.” This essay briefly sketches the history of arguments about the relation between natural language syntax and logical structure, concentrating on the period from Frege to Montague, roughly 1880 to 1970, illustrating the issues with sentences containing quantifiers.”
Here is a video of Barbara Partee’s 2014 Whatmough Lecture at Harvard University. The History of Formal Semantics: Changing Notions of Semantic Competence.
Source: Dick Daniels (http://carolinabirds.org/)
From Science. Flower, Gribble & Ridley. 2014. Deception by Flexible Alarm Mimicry in an African Bird.
“Forked-tailed drongos are a particularly intelligent type of bird found in Africa. Drongos associate with many other bird and mammal species, which can learn to respond to drongo warning calls. Drongos are also exceptional mimics of the other species’ alarm calls. Though the increased vigilance across these multi-species associations is a benefit to all, drongos sometimes use these calls as ploys to scare associated species away from food, which the drongos then steal. However, without some approach to maintain the effectiveness of this deception, the drongos’ ploy would soon be detected. Flower et al. now show that drongos are able to fool their target species longer by flexibly varying the type of call they give.”
More from National Geographic Daily News. Related: The bird that cries wolf changes its lies.
Top picks from Nature News, 02 May 2014
Source: Brainstorm Psychology
“In Lind’s experiment, participants took a Stroop test — in which a person is shown, for example, the word ‘red’ printed in blue and is asked to name the colour of the type (in this case, blue). During the test, participants heard their responses through headphones. The responses were recorded so that Lind could occasionally play back the wrong word, giving participants auditory feedback of their own voice saying something different from what they had just said.” … “After participants heard a manipulated word, a question popped up on the screen asking what they had just said, and they were also quizzed after the test to see whether they had detected the switch. When the voice-activated software got the timing just right — so that the wrong word began within 5–20 milliseconds of the participant starting to speak — the change went undetected more than two-thirds of the time.” The original article appeared in Psychological Science, April 28, 2014.