Logic and Grammar


Sandro Botticelli: A Young Man Being Introduced to the Seven Liberal Arts

Why do I make my semantics students learn logic? I ask them to work through both volumes of the Gamut textbook, even though Gamut doesn’t speak the language of linguistics. It is written in the language of logic. Why should semantics students have to learn how to talk and reason in this way? There is a simple answer: In an interdisciplinary field everyone from any participating field has to speak the language of the other fields. That’s your entrance ticket for success in an interdisciplinary enterprise. You have to understand where the practitioners of other fields are coming from. As a relatively new interdisciplinary field, formal semantics has been a success. It is the result of the marriage of two highly formalized and abstract theories: Logic, which provides theories of the human notion of what a valid piece of reasoning is, and Syntax, which contributes theories of how hierarchical syntactic structures are computed in natural languages. The marriage is solid and has been going strong for almost 50 years. Many young linguists, logicians, and philosophers are fluent in three disciplines, and collaborate in joint research institutions, journals, and conferences.

You may have heard people say that theories of logic can’t be cognitive theories because people make logical mistakes. Yes, we all do make logical mistakes. What is important, though, is that, when we do, we can be convinced that we were wrong. How come? There must be a notion of what a valid piece of reasoning is that is the same for all human beings. Imagine what the world would be like if people all had different notions of what follows from what and what is or isn’t consistent. Mathematics would be impossible, science would be impossible, laws and contracts would be impossible, social institutions would be impossible, … For more than 2000 years, logicians have been designing theories of universally shared patterns of valid human reasoning. The resulting theories are among the most sophisticated theories science has produced to date. And they are the most sophisticated formal theories in cognitive science. One of the key insights of the early logicians was the discovery that little words like not, and, or, some, all, must, may, and so on are the main players in patterns of valid reasoning. That is, those patterns are created by properties of the functional (that is, logical) vocabularies of human languages. It’s precisely those vocabularies that also provide the scaffolding for syntactic structures. Syntax is about the hierarchical structures projected from the functional vocabularies of natural languages; Logic provides the models of how to study the meanings of those vocabularies and how to explain their role in reasoning. In formal semantics, those two disciplines have come together.
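
A toy illustration (my own, not part of the original post) may help make the point concrete: whether a pattern of reasoning is valid depends only on the little logical words, and validity can be checked by running through all ways the non-logical parts could turn out. Here is a minimal Python sketch, with invented function names, that does this by brute force over truth-value assignments.

```python
from itertools import product

def entails(premises, conclusion, variables):
    """Brute-force semantic entailment: the premises entail the conclusion
    iff no assignment of truth values makes all premises true and the
    conclusion false."""
    for values in product([True, False], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False  # counterexample found
    return True

# Modus ponens: from "p" and "if p then q", infer "q" -- valid.
print(entails([lambda v: v["p"],
               lambda v: (not v["p"]) or v["q"]],
              lambda v: v["q"],
              ["p", "q"]))   # True

# Affirming the consequent: from "q" and "if p then q", infer "p" -- invalid.
print(entails([lambda v: v["q"],
               lambda v: (not v["p"]) or v["q"]],
              lambda v: v["p"],
              ["p", "q"]))   # False
```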

Contemporary formal semantics was born when the traditional perspectives of logic merged with the modern enterprise of generative syntax, as initiated by Noam Chomsky. The first worked-out formal semantic system in this tradition was David Lewis’ 1970 paper General Semantics, one of the most beautiful and enjoyable articles in semantics to the present day. Lewis made an explicit connection with Chomsky’s Aspects model, the generative syntax model of the time. In contrast to Lewis, Richard Montague was outspokenly hostile to Chomsky’s work. He was not interested in Chomsky’s call for an explanatory syntax. It was only after Montague’s death that linguists like David Dowty, Lauri Karttunen, Barbara Partee, Stanley Peters, and Robert Wall made Montague’s works accessible to linguistic audiences.

But things out there need to act on the brain, no?


Uta Frith: “What is the role of language? When we consider social interactions this almost always involves language. Is language actually the primary driver of our social interactions, or is it the other way round?”

Kristian Tylen: “… My preference is to think that language both evolves from and is shaped by our interactions with the surrounding physical and social environment. And so it is out there rather than inside us. This is demonstrated by the way that language structures are motivated. Take the way we talk about pitch in English and Danish: we talk about low and high pitch mapping onto low and high spatial notation. Other languages for instance use thick and thin or big and small. These relations all map onto universal experience. Low tones come from big creatures and high tones from small creatures. And it turns out that it is very difficult to learn the opposite relations.”

Uta Frith: “But things out there need to act on the brain, no? I don’t disagree with you that the world outside the mind is a starting point, but the experience of the outside shaped the inside, over millennia. As a consequence, I guess there are some pre-shaped circuits in the brain, which might become obsolete, if the environment changed radically. So this is why I would put the outside in second place, and the inside first.”

Source: Putting Language into the Social Brain. Social Minds: A Piece of the Frithmind.

How does virtual reality affect the brain?

Nature Neuroscience, advance online publication: 24 November 2014.
UCLA Newsroom, 24 November 2014.

“Put rats in an IMAX-like surround virtual world limited to vision only, and the neurons in their hippocampi seem to fire completely randomly — and more than half of those neurons shut down — as if the neurons had no idea where the rat was, UCLA neurophysicists found in a recent experiment. Put another group of rats in a real room (with sounds and odors) designed to look like the virtual room, and they were just fine.” Kurzweil Accelerating Intelligence, November 25, 2014.

This raises many interesting questions: What happens when humans hear or read spatial descriptions or look at maps? Are their hippocampi building maps? Partial maps? No maps at all? How does this relate to the results reported in Benjamin Bergen’s book? How does the brain distinguish reality and fiction?

The new science of prospective psychology

“What if the mind is not a storehouse of knowledge, but an engine of prediction? What if we are not Homo sapiens, but Homo Prospectus?” Martin E. P. Seligman

The University of Pennsylvania Positive Psychology Center has established a new branch of Cognitive Psychology: Prospective Psychology. Prospective Psychology investigates the mental representation and evaluation of possible futures. Through the Templeton Science of Prospection Awards, 22 two-year projects will explore the field of prospection.

To read: Seligman, M. E. P., Railton, P., Baumeister, R. F., & Sripada, C. (2013). Navigating into the future or driven by the past. Perspectives on Psychological Science, 8(2), 119-141.

The myth of mirror neurons

Gregory Hickok: The myth of mirror neurons. The real neuroscience of communication and cognition. W. W. Norton 2014.

http://books.wwnorton.com/books/The-Myth-of-Mirror-Neurons/


From publisher’s website: “In The Myth of Mirror Neurons, neuroscientist Gregory Hickok reexamines the mirror neuron story and finds that it is built on a tenuous foundation—a pair of codependent assumptions about mirror neuron activity and human understanding. Drawing on a broad range of observations from work on animal behavior, modern neuroimaging, neurological disorders, and more, Hickok argues that the foundational assumptions fall flat in light of the facts.”

Review by Patricia Smith Churchland in Nature 511, 532–533 (31 July 2014): “Hickok’s critique deserves to be widely discussed, especially because many scientists have bought into the mirror-neuron theory of action understanding, perhaps because they lack the time or inclination to peer into its workings themselves. Hickok performs a valuable service by laying out the pros and cons clearly and fairly. He ends by agreeing that although mirror neurons may well have a role in explaining communication and empathy, many other neural networks with complex responses are undoubtedly involved. Those networks and their roles are still to be clarified.”

Louder than words

Benjamin K. Bergen: Louder than words: the new science of how the mind makes meaning. Basic Books 2012. Reviewed by Raymond W. Gibbs in Language, Volume 90, Number 2, June 2014.

http://www.louderthanwordsbook.com


“Imagine that you are a participant in the following psycholinguistic experiment. You are seated in front of a computer terminal and shown the sentence The carpenter hammered the nail into the wall. After reading the sentence, you are shown a picture of an object, such as a nail or elephant, and asked to quickly judge whether that object was mentioned in the sentence. Of course, you would quickly say ‘yes’ to the picture of a nail and ‘no’ to the elephant. The primary interest, however, is in your speeded response to the nail picture, depending on whether it was shown in a horizontal or vertical orientation. Research indicates that people, on average, are faster to make their ‘yes’ decisions when the picture was in the same spatial orientation as implied by the sentence just read … Thus, people are faster to say ‘yes’ when the picture showed the nail in the horizontal orientation than when it was shown upright, or in the vertical position. However, when they first read the sentence The carpenter hammered the nail into the floor, people are faster, on average, to say ‘yes’ to the nail picture that presented it in a vertical position rather than horizontal. One interpretation of these findings is that people automatically construct a mental image of an object in its appropriate spatial orientation based on what the sentence implies. Even if the nail’s position is not explicitly noted in the sentence, our immediate understanding of the sentence’s meaning enables us to create an image of the situation in which the nail was hammered in a horizontal or vertical position. How people construe imaginative understandings of language is the subject of Ben Bergen’s book.” Source: Raymond Gibbs in Language, June 2014.

We seem to begin to understand how simple sentences might be associated with representations of possible situations. It’s not more than a beginning, though. It’s just simple sentences. How do we represent sentences like The carpenter didn’t hammer the nail into the wall or The carpenter should have hammered the nail into the floor? If we picture the nail in a particular orientation in those sentences, too, what does this tell us about how “the mind makes meaning”?
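
To make the design Gibbs describes concrete, here is a small, entirely hypothetical sketch: each trial pairs the orientation implied by the sentence with the orientation of the pictured object, and the match effect is just the difference in mean response times between matching and mismatching trials. The field names and numbers are invented for illustration.

```python
# Hypothetical trial records: the orientation implied by the sentence,
# the orientation of the pictured object, and a response time in ms.
trials = [
    {"implied": "horizontal", "pictured": "horizontal", "rt": 612},
    {"implied": "horizontal", "pictured": "vertical",   "rt": 655},
    {"implied": "vertical",   "pictured": "vertical",   "rt": 598},
    {"implied": "vertical",   "pictured": "horizontal", "rt": 643},
]

def mean_rt(match):
    """Mean response time for trials where the pictured orientation
    does (match=True) or does not (match=False) fit the sentence."""
    rts = [t["rt"] for t in trials
           if (t["implied"] == t["pictured"]) == match]
    return sum(rts) / len(rts)

print("match    :", mean_rt(True))    # faster, on the reported findings
print("mismatch :", mean_rt(False))
```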

Ben Bergen’s website, with podcasts.

Cracking the brain’s code

Christof Koch & Gary Marcus: Cracking the Brain’s Code. How does the brain speak to itself? MIT Technology Review, June 17, 2014.

“The brain as a whole, throughout our waking lives, is a veritable symphony of neural spikes—perhaps one trillion per second. To a large degree, to decipher the brain is to infer the meaning of its spikes. But the challenge is that spikes mean different things in different contexts. It is already clear that neuroscientists are unlikely to be as lucky as molecular biologists. Whereas the code converting nucleotides to amino acids is nearly universal, used in essentially the same way throughout the body and throughout the natural world, the spike-to-information code is likely to be a hodgepodge: not just one code but many, differing not only to some degree between different species but even between different parts of the brain. The brain has many functions, from controlling our muscles and voice to interpreting the sights, sounds, and smells that surround us, and each kind of problem necessitates its own kinds of codes.”

This is part of a group of articles on Hacking the Soul, which also includes an interview with Rebecca Saxe.

Learning everything about anything?

Source: Kurzweil Accelerating Intelligence.

Credit: University of Washington

“Computer scientists from the University of Washington and the Allen Institute for Artificial Intelligence in Seattle have created an automated computer program that they claim teaches everything there is to know about any visual concept. Called Learning Everything about Anything (LEVAN), the program searches millions of books and images on the Web to learn all possible variations of a concept, then displays the results to users as a comprehensive, browsable list of images, helping them explore and understand topics quickly in great detail. You can try it here.”

Intelligent as it may be, LEVAN doesn’t seem to know the difference between a horse eye and an eye horse, between a horse shoe and a shoe horse, or between a horse shed and a shed horse.  

Connections: the discussion of the headedness of noun-noun compounds in my Radcliffe video on Mapping Possibilities. Also: Teon Brooks on representing compounds in the brain. 
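
As a toy illustration of the headedness point (my own example, not from the video or the LEVAN project): English noun-noun compounds are right-headed, so swapping the two nouns changes what kind of thing is being named. A minimal sketch:

```python
def head(compound):
    """English noun-noun compounds are right-headed (the right-hand head rule):
    a 'horse shoe' is a kind of shoe, while a 'shoe horse' would be a kind of horse."""
    return compound.split()[-1]

for c in ["horse eye", "eye horse", "horse shoe", "shoe horse"]:
    print(f"a {c} is a kind of {head(c)}")
```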

Is mind reading like print reading?

Cecilia M. Heyes & Chris D. Frith. The cultural evolution of mind reading. Science, 20 June 2014. With podcast.

“It is not just a manner of speaking: “Mind reading,” or working out what others are thinking and feeling, is markedly similar to print reading. Both of these distinctly human skills recover meaning from signs, depend on dedicated cortical areas, are subject to genetically heritable disorders, show cultural variation around a universal core, and regulate how people behave. But when it comes to development, the evidence is conflicting. Some studies show that, like learning to read print, learning to read minds is a long, hard process that depends on tuition. Others indicate that even very young, nonliterate infants are already capable of mind reading. Here, we propose a resolution to this conflict. We suggest that infants are equipped with neurocognitive mechanisms that yield accurate expectations about behavior (“automatic” or “implicit” mind reading), whereas “explicit” mind reading, like literacy, is a culturally inherited skill; it is passed from one generation to the next by verbal instruction.”

Background: Theory of Mind. Internet Encyclopedia of Philosophy.

Connections (via the Open University): Paul Grice’s paper Meaning and his distinction between natural and non-natural meaning seem very relevant. When we read another person’s mind from their facial expressions, for example, we seem to retrieve natural, non-conventional meanings, which is a very different process from retrieving non-natural, conventional meanings from speech or texts. In the Science podcast, Sara Presby asks Cecilia Heyes whether the comparison between mind reading and print reading, which is the core of the article, isn’t made at a way too general level.

Connections: Rebecca Saxe on reading each other’s minds. MOOC on how to read … a mind.