Maryam Mirzakhani died today. She was 40 years old. From Stanford News: “A self-professed ‘slow’ mathematician, Mirzakhani was described by her colleagues as ambitious, resolute and fearless in the face of problems others would not, or could not, tackle. She denied herself the easy path, choosing instead to tackle thornier issues. Her preferred method of working on a problem was to doodle on large sheets of white paper, scribbling formulas on the periphery of her drawings. Her young daughter described her mother at work as ‘painting.’ ‘You have to spend some energy and effort to see the beauty of math,’ she told one reporter. In another interview, she said of her process: ‘I don’t have any particular recipe [for developing new proofs] … It is like being lost in a jungle and trying to use all the knowledge that you can gather to come up with some new tricks, and with some luck you might find a way out.’”
In her honor, I am reposting a 2014 post from this blog. Sources: Wikipedia; an article on Maryam Mirzakhani in the Guardian; an article and video in Quanta Magazine.
Jordan Ellenberg’s popular explanation of what earned Mirzakhani the Fields Medal in 2014: “… [Her] work expertly blends dynamics with geometry. Among other things, she studies billiards. But now, in a move very characteristic of modern mathematics, it gets kind of meta: She considers not just one billiard table, but the universe of all possible billiard tables. And the kind of dynamics she studies doesn’t directly concern the motion of the billiards on the table, but instead a transformation of the billiard table itself, which is changing its shape in a rule-governed way; if you like, the table itself moves like a strange planet around the universe of all possible tables … This isn’t the kind of thing you do to win at pool, but it’s the kind of thing you do to win a Fields Medal. And it’s what you need to do in order to expose the dynamics at the heart of geometry; for there’s no question that they’re there.”
“Brilliant Blunders, by Mario Livio, is a lively account of five wrong theories proposed by five great scientists during the last two centuries. These examples give nonexpert readers a good picture of the way science works. The inventor of a brilliant idea cannot tell whether it is right or wrong. Livio quotes the psychologist Daniel Kahneman describing how theories are born: “We can’t live in a state of perpetual doubt, so we make up the best story possible and we live as if the story were true.” A theory that began as a wild guess ends as a firm belief.”
“The essential point of Livio’s book is to show the passionate pursuit of wrong theories as a part of the normal development of science. Science is not concerned only with things that we understand. The most exciting and creative parts of science are concerned with things that we are still struggling to understand. Wrong theories are not an impediment to the progress of science. They are a central part of the struggle.”
“In order to make progress, one must leave the door to the unknown ajar – ajar only.” “Per fare progressi, si deve tenere socchiusa la porta verso l’ignoto – socchiusa solamente.” Richard Feynman
Co-directors Vittorio Bo & Jacopo Romoli: “… this tenth edition of the Rome Science Festival aims to be a celebration of doubt, uncertainty and the unknown and the particular way to penetrate it known as the scientific method. The Festival programme is centred around questions involving physics, biology, psychology and linguistics: What is the relationship between uncertainty and indetermination? Between uncertainty and chance? What is hidden in black holes or in what we call dark matter or in the concept of infinity? How do we relate cognitively to uncertainty and the unknown and what language do we use to speak about them? How can we calculate uncertainty precisely? How do we use secrecy in politics?” The full program of the festival is here (Italian & English).
Jeremy England: “You start with a random clump of atoms, and if you shine light on it for long enough, it should not be so surprising that you get a plant.”
Ard Louis: “If England’s approach stands up to more testing, it could further liberate biologists from seeking a Darwinian explanation for every adaptation and allow them to think more generally in terms of dissipation-driven organization. They might find, for example, that the reason that an organism shows characteristic X rather than Y may not be because X is more fit than Y, but because physical constraints make it easier for X to evolve than for Y to evolve.”
Uta Frith: “What is the role of language? When we consider social interactions this almost always involves language. Is language actually the primary driver of our social interactions, or is it the other way round?”
Kristian Tylen: “… My preference is to think that language both evolves from and is shaped by our interactions with the surrounding physical and social environment. And so it is out there rather than inside us. This is demonstrated by the way that language structures are motivated. Take the way we talk about pitch in English and Danish: We talk about low and high pitch mapping onto low and high spatial notation. Other languages for instance use thick and thin or big and small. These relations all map onto universal experience. Low tones come from big creatures and high tones from small creatures. And it turns out that it is very difficult to learn the opposite relations.”
Uta Frith: “But things out there need to act on the brain, no? I don’t disagree with you that the world outside the mind is a starting point, but the experience of the outside shaped the inside, over millennia. As a consequence, I guess there are some pre-shaped circuits in the brain, which might become obsolete, if the environment changed radically. So this is why I would put the outside in second place, and the inside first.”
“When you think about physics, you usually describe things in terms of initial conditions and laws of motion; so what you say is, for example, where a comet goes given that it started in a certain place and time. In constructor theory, what you say is what transformations are possible, what are impossible, and why. The idea is that you can formulate the whole of fundamental physics this way; so, not only do you say where the comet goes, you say where it can go. This incorporates a lot more than what it is possible to incorporate now in fundamental physics.”
Where can the comet go, given what? What is the range of possibilities that we consider live options? We are not considering all LOGICAL possibilities! This is where work on natural language semantics becomes important: we rack our brains about how humans project possibilities from the facts they encounter and how language helps us keep track, categorize, and compare those possibilities.
“Human biology, especially human neurobiology, is very complex, and our view of the human brain is fragmentary. However, the genomes of humans and worms share more genes than any of us expected, including most classes of genes that are important in the nervous system. (The complexity of the human nervous system comes from regulating the genes in different ways, and from deploying them in vastly larger numbers of neurons.) The basic functions of those genes are similar in all animals, so if we view one goal of biology as building a “dictionary” containing the meaning of each gene, we can assemble definitions in that dictionary from any animal, with a good chance that the definitions and grammar will apply across all animals and humans. Those of us who study worms hope to meet those who study human brains in the middle, using the universality of biology to translate understanding across organisms.”
I am intrigued by the notion of compositionality displayed by the ‘grammar of genes’. A particular gene invariably makes the same contribution in every animal that has it, but this invariable contribution is altered through predictable contextual interactions so that the same set of genes can lead to very different outcomes. The issue is relevant for the old debate about meaning composition for conditionals. In my paper for the Edgington volume, for example, I showed that embedded conditionals interact with surrounding quantifiers in not completely ‘algorithmic’ ways. Does this mean that we should just give up on the idea of a compositional semantics for conditionals? Or should we rethink our ideas about compositionality in natural language semantics? Non-compositionality is a fact of life for content words (cat, blue, sing …), which are part of the non-logical vocabulary of natural languages. Nouns, adjectives, and verbs can change their meanings in seemingly unpredictable ways, depending on the linguistic and non-linguistic environment they find themselves in. But the semantic contribution of function words (if, and, every, …), which are part of the logical vocabulary of natural languages, seems to be invariant and resistant to uncontrolled contextual interference. Context seems to be able to affect the interpretation of function words only through certain grammatically determined ‘gates’ or ‘channels’ like those responsible for domain restrictions.
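The contrast between freely context-sensitive content words and function words whose context-dependence runs only through grammatically determined ‘gates’ can be made concrete with a toy model. The sketch below is my own illustration, not from the post, and all names in it are hypothetical: ‘every’ gets a single fixed denotation, and the only way context reaches it is through an explicit domain-restriction argument.

```python
# A minimal sketch (illustrative, not from the post) of the idea that
# function words have invariant meanings, with context entering only
# through a designated "gate": here, a domain-restriction argument.

def every(domain, restrictor, scope):
    """Fixed meaning of 'every': every restrictor-individual in the
    contextually supplied domain also satisfies the scope."""
    return all(scope(x) for x in domain if restrictor(x))

# Hypothetical toy model: four individuals, one contextually salient subset.
individuals = {"a", "b", "c", "d"}
salient = {"a", "b"}                      # the contextual domain restriction

student = lambda x: x in {"a", "b", "c"}  # content-word denotations in this model
passed = lambda x: x in {"a", "b"}

# The meaning of 'every' never changes; only the domain argument does.
print(every(individuals, student, passed))  # False: c is a student who didn't pass
print(every(salient, student, passed))      # True: every salient student passed
```

The design choice mirrors the point in the text: contextual variability is real, but for the logical vocabulary it is channeled through one predictable parameter rather than being able to rewrite the word's meaning wholesale.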
From Kurzweil Accelerating Intelligence on Patrick Tucker’s The Naked Future: “Computer scientist Stephen Wolfram and futurist Ray Kurzweil have famously painstakingly recorded every minute detail of their lives, from their diets to their keystrokes, in order to quantify and better their lives. Now, technology has made self-quantification easier than ever, allowing the “everyman” to record and study their habits just as Wolfram and Kurzweil have done, but with less hassle… So what happens in a future that anticipates your every move? The machines may have a better handle on us than ever, but we’ll live better as a result. The naked future is upon us, and the implications for how we live and work are staggering.”
Source: The Modern Word. Borges.
“In all fictional works, each time a man is confronted with several alternatives, he chooses one and eliminates the others; in the fiction of Ts’ui Pên, he chooses—simultaneously—all of them. He creates, in this way, diverse futures, diverse times which themselves also proliferate and fork.” The Garden of Forking Paths by Jorge Luis Borges.
From the Stanford Encyclopedia: Branching time semantics: “As an explicit (or formalised) idea, branching time was first suggested to Prior in a letter from Saul Kripke in September 1958. This letter contains an initial version of the idea and a system of branching time, although it was not worked out in detail.”
More on branching time semantics: Around the Tree. Semantic and Metaphysical Issues Concerning Branching and the Open Future.
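The branching-time picture behind Borges’s forking paths can be sketched computationally. In the toy model below (my illustration, not Prior’s or Kripke’s own formalization; the moment names and the valuation are made up), moments form a tree, a history is a maximal branch through a moment, and a future-tense claim counts as ‘settled’ only if it holds on every history through the moment of evaluation.

```python
# A minimal sketch of a branching-time model: moments form a tree,
# histories are maximal branches, and 'F p' (it will be that p) is
# evaluated relative to a moment and a history. Illustrative only.

children = {            # the tree of moments
    "m0": ["m1", "m2"],
    "m1": ["m3"],
    "m2": [],
    "m3": [],
}
valuation = {"rain": {"m3"}}   # 'rain' is true only at moment m3

def histories(moment):
    """All maximal branches (histories) passing through `moment`."""
    if not children[moment]:
        return [[moment]]
    return [[moment] + h for c in children[moment] for h in histories(c)]

def will(p, moment, history):
    """F p on one history: p holds at some later moment of that history."""
    later = history[history.index(moment) + 1:]
    return any(m in valuation[p] for m in later)

hs = histories("m0")
# 'Possibly it will rain': true on some history through m0.
print(any(will("rain", "m0", h) for h in hs))   # True (via m1 -> m3)
# 'It is settled that it will rain': true on every history through m0.
print(all(will("rain", "m0", h) for h in hs))   # False (the m2 branch stays dry)
```

The gap between the two printed values is exactly the ‘open future’ at issue in the debates the book discusses: at m0, rain is a live possibility on one branch but not yet settled across all of them.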
Sarah-Jane Leslie is anchoring conversations with well-known philosophers in a series entitled “Philosophical Conversations.” The series has conversations with Joshua Knobe, Rae Langton, Kwame Anthony Appiah, Roger Scruton, and Elizabeth Harman. Upcoming interviews will include Sarah-Jane’s father, the cognitive psychologist and autism expert Alan Leslie. The series can also be accessed via the Sanders Foundation’s YouTube channel.
“The idea that episodic memory is required for temporal consciousness is common in science, philosophy, fiction, and everyday life. Related ideas follow naturally: that individuals with episodic amnesia are lost mariners, stuck in time, in a “permanent present tense” or “lost in a non-time, a sort of instantaneous present”. Yet recent evidence suggests that people with episodic amnesia are not stuck in time. Episodic memory and future thought are dissociable from semantic knowledge of time, attitudes about time, and consideration of future consequences in decision-making. These findings illustrate how little is known about the sense of time in episodic amnesia and suggest that the human sense of time is likely not one thing, but many things.”