Modeling Memories and Evolution with Computer Science

At the end of September, 200 young researchers and 34 recipients of the most prestigious prizes in mathematics and computer science will gather in southwest Germany for the Heidelberg Laureate Forum. The week-long meeting combines scientific, social, and outreach activities in a format that should sound very familiar to attendees of the Lindau Nobel Laureate Meetings. In fact, the organisers of the Heidelberg Laureate Forum wanted to provide mathematicians and computer scientists with an annual networking meeting modeled after the ones that have taken place for many decades in Lindau.

Heidelberg Lecture at the 68th Lindau Nobel Laureate Meeting with Leslie Valiant. Photo/Credit: Christian Flemming/Lindau Nobel Laureate Meetings

While no Nobel Prize in Mathematics or Computer Science exists, the Heidelberg Laureate Forum invites its own version of laureates: recipients of the Abel Prize (mathematics), ACM A.M. Turing Award (computer science), ACM Prize in Computing (computer science), Fields Medal (mathematics), and the Nevanlinna Prize (mathematics/computer science). And every year, one of the Heidelberg Laureates gives a crossover lecture at the Lindau Nobel Laureate Meeting to help foster the connection between the two sister forums.

At #LINO18, British computer scientist Leslie Valiant incorporated the meeting’s discipline of physiology and medicine into his Heidelberg Lecture, titled “Biology as Computation.” The 2010 recipient of the ACM A.M. Turing Award spoke about how our current understanding of computation can help advance our understanding of biology, and in particular, neuroscience and evolution. He began by stating what he would not talk about: the use of computers in biology research.

“The first usable computer was built in Cambridge around 1949 with a memory of about 500 words, and immediately, some biologists found a use for it in crystallography. So I take it that biologists don’t need any advice about technology or computing,” said Valiant. “What I’m really talking about is something which has equal potential but is much less developed: the use of computational models in biology.”

The idea of looking at biology as a kind of computation is far from new. In the 1950s, the mathematicians Alan Turing and John von Neumann began publishing academic papers on aspects of the life sciences. More than a decade earlier, Turing had laid the basis of computation by describing what is now called a Turing machine, a hypothetical computing machine that uses a predefined set of rules to determine a result from a set of input variables, and in doing so had defined what is computable.
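
The idea is small enough to make concrete in a few lines. The sketch below is purely illustrative, not any particular historical machine: a rule table maps (state, symbol) pairs to (symbol to write, head move, next state), and the machine's result is fully determined by those rules and the input on the tape. The `invert` machine here is a made-up example.

```python
# A minimal Turing machine simulator: a predefined set of rules
# (state, symbol) -> (write, move, next state) determines the
# result from the input written on the tape.

def run(rules, tape, state="start", blank="_", max_steps=1000):
    tape = list(tape)
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        if head >= len(tape):
            tape.append(blank)        # extend the tape on demand
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape).rstrip(blank)

# Example machine: invert every bit of a binary string, then halt.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run(invert, "10110"))  # -> 01001
```

Any computation a real computer can carry out can, in principle, be expressed as such a rule table, which is what makes the model a definition of computability.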

“The notion of computation is probably one of the best-established ideas in science, and it has to be recognised as a somewhat unusual notion,” Valiant said. “It’s not quite a physical law, it’s not an experimental finding, it’s not a theorem of mathematics. It was a new kind of idea.”

Valiant then introduced two examples where a computational modeling approach can be used to solve problems in biology. The first example involves how memories get stored in the brain. Even though the brain has a finite capacity and number of neurons, how does it manage to store not just one thing but hundreds of thousands of things over a lifetime?

For instance, if a friend tells you he went to the Green Fox Restaurant on the recommendation of Joe and hated it, all those previously unrelated words and phrases are somehow effortlessly taken in by your brain and stored away. What are the fundamental operations that lead to this result? Valiant divides this process into four “primitives,” ordered by increasing complexity: storage allocation for a new concept, association of two previously unrelated concepts, memorization of a single instance rather than a generalization, and supervised learning by some learning algorithm.
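
The first three primitives can be caricatured in ordinary code. The sketch below is a deliberately naive illustration under my own assumptions (a Python dictionary stands in for cortex; the `Memory` class and its method names are invented for this example): Valiant's actual model realises these operations in networks of neurons, not lookup tables.

```python
# A toy associative memory illustrating three of the four primitives.
# Illustration only: the data structures here are stand-ins, not a
# model of how the cortex implements these operations.

class Memory:
    def __init__(self):
        self.concepts = {}            # concept -> set of associated concepts

    def allocate(self, concept):
        # Primitive 1: storage allocation for a new concept.
        self.concepts.setdefault(concept, set())

    def associate(self, a, b):
        # Primitive 2: link two previously unrelated concepts.
        self.allocate(a)
        self.allocate(b)
        self.concepts[a].add(b)
        self.concepts[b].add(a)

    def memorize(self, instance_parts):
        # Primitive 3: store a single episode by associating all of its
        # parts pairwise -- no generalization is involved.
        parts = list(instance_parts)
        for i, a in enumerate(parts):
            for b in parts[i + 1:]:
                self.associate(a, b)

m = Memory()
m.memorize(["friend", "Green Fox Restaurant", "Joe", "hated it"])
print("Joe" in m.concepts["Green Fox Restaurant"])  # -> True
```

The fourth primitive, supervised learning, is the one where a learning algorithm rather than rote storage does the work; Valiant returns to it in his second example.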

“The question is how can the cortex achieve hundreds of thousands of such individual acts in succession without too much degrading of the previous ones?” he said. “So if you learned something 20 years ago, and you’ve learned lots of things in between, you amazingly may still remember what you learned 20 years ago. It’s quite a miracle to explain.”

Leslie Valiant at the 68th Lindau Nobel Laureate Meeting. Photo/Credit: Christian Flemming/Lindau Nobel Laureate Meetings


The second example tackles the theory of evolution. How did organisms as complicated as humans come into existence and maintain themselves over time? While the evidence for Darwinian evolution is overwhelming, says Valiant, that doesn’t mean we understand the phenomenon. No theory explains the rate of evolution, and no computer simulations have successfully produced life-like organisms.

Valiant wanted to dig deeper into the theory and started by asking about the mutation-generating process that creates genetic variation. Say each gene has an expression function that defines how it expresses a given protein, which is dependent on a number of factors. What would these expression functions look like? Valiant notes that if they are too simple, it would conflict with the complex nature of biology. On the other hand, if they are too complicated, evolution won’t work.

“We don’t understand the connections between genotype, phenotype, and the environment, so how on earth can we hope to have a theory of evolution?” said Valiant. “The answer is we need to appeal to some science, some method which works even when it doesn’t understand what it is doing.”

He believes that Darwinian evolution can be thought of as a kind of machine learning, a field of computer science which gives computers the ability to learn without being explicitly programmed. Both machine learning and evolution work despite having no understanding of what they are doing. For instance, imagine that you want a computer to recognise pictures of elephants. Initially, it gets some pictures wrong. But with supervised machine learning, you can simply tell the computer where its mistakes were, and it adjusts accordingly so that it improves next time.
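
That mistake-driven loop is easy to show in miniature. The sketch below uses a classic perceptron as one concrete instance of supervised learning (my choice of algorithm, not one Valiant names in the lecture), and the "elephant" features and examples are invented for illustration: the model only ever changes when the supervisor flags an error.

```python
# Supervised learning in miniature: a perceptron that is told where its
# mistakes were and adjusts its weights accordingly.
# The features and examples below are made up for illustration.

def train(examples, labels, epochs=20):
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):   # y is +1 (elephant) or -1
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:                    # the supervisor flags the mistake
                w = [wi + y * xi for wi, xi in zip(w, x)]
                b += y
    return w, b

# Toy features per picture: [has_trunk, is_grey, relative_size]
X = [[1, 1, 0.9], [1, 0, 0.8], [0, 1, 0.2], [0, 0, 0.1]]
y = [1, 1, -1, -1]                           # first two are "elephants"
w, b = train(X, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1 for x in X]
print(preds)  # -> [1, 1, -1, -1]
```

The algorithm never "understands" elephants; it only reacts to corrections, which is exactly the property Valiant says machine learning shares with evolution.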

“I’m saying that evolution is exactly the same thing — it’s supervised learning. Who is the supervisor? It’s death,” he said. “The feedback you get from your environment which tells you that the good behavior is survival.”

In the analogous case of evolution, the collection of pictures to pick out is akin to all the different possible conditions in an organism’s cells. Instead of picking out pictures of elephants, your genome determines whether a certain protein is expressed or not. Your current genome isn’t perfect, of course, and expresses some proteins which ideally should not be expressed. So who decides what is ideal, and what is not? Valiant’s theory states that Darwinian evolution is a kind of supervised machine learning, where the supervisor is survival.

On Tuesday, 25 September 2018, Nobel Laureate Bill Phillips will give the Lindau Lecture on “Time, Einstein and the Coolest Stuff in the Universe” at the 6th Heidelberg Laureate Forum.

Watch the Heidelberg Lecture by Leslie Valiant:


Meeri Kim

About Meeri Kim

Meeri N. Kim, PhD, is a science writer who contributes regularly to The Washington Post, Philly Voice, and Oncology Times. She writes for The Washington Post’s blog “To Your Health,” has a column for Philly Voice called “The Science of Everything,” and her work has also appeared in The Philadelphia Inquirer, Edible Philly, and LivableFuture. In 2013, Meeri received a PhD in physics from the University of Pennsylvania for her work in biomedical optics.
