Ben Feringa: Molecular Machines of the Future

Nobel Laureate Ben Feringa giving the first lecture at the 67th Lindau Nobel Laureate Meeting. Photo/Credit: Julia Nimke/Lindau Nobel Laureate Meetings

The Nobel Laureate gave the #LiNo17 opening lecture with the title ‘The Joy of Discovery’. Ben Feringa grew up on a farm near Groningen, the second of ten siblings. Today, he is a professor in Groningen, where he also received his MSc and PhD degrees. And just as much as he enjoyed nature as a child, he now enjoys the infinite possibilities of molecules. In his own words: “We enjoy the adventure into the unknown.” Before starting his lecture, he had some advice in store for the young scientists at the Lindau Nobel Laureate Meeting: always look for a challenge, and find teachers who challenge you; persevere; follow your intuition and your dreams – but ‘walk on two feet’, meaning remain realistic, and find a balance between life and research. Looking at his impressive career, and appreciating his obvious delight in his work, it seems that Feringa took his own advice to heart.

It’s truly mind-blowing to see what Ben Feringa and his research group are capable of: they synthesise molecules from inanimate matter that can move autonomously. One striking example is the small ‘spiders’ that you can see crawling around under a microscope. These ‘spiders’ can self-assemble, meaning that several molecules form clusters, and these clusters move completely autonomously as long as ‘fuel’ is provided – in their case, sugar. (You can watch the crawling ‘spiders’ in solution, also called nano-swimmers, on the website of Feringa’s research group, or at the end of his #LiNo17 lecture.) Other molecules at Feringa’s Molecular Nanoscience group at the University of Groningen have been fitted with light-sensitive switches, so light of a certain wavelength turns them on and off and also acts as their ‘fuel’.

As Feringa points out in his lecture: chemists are great at creating molecules, but it’s extremely difficult to control their dynamic functions – movement, rotation, switching, responses, etc. His most noted invention is his version of the ‘nanocar’ – it was also prominently featured in the 2016 Nobel Prize media coverage. For a nanocar’s engine, you need unidirectional rotation. Feringa and his research group discovered the first man-made molecular rotor capable of a full 360-degree rotation ‘a bit by accident’ in the 1990s. They had been working on an alkene molecule (alkenes are unsaturated hydrocarbons containing at least one carbon-carbon double bond). This specific alkene could perform a quarter turn in a process called isomerisation: a process in which one molecule is transformed into another with exactly the same atoms, only arranged differently. Suddenly the researchers realised that the molecule had in fact performed a 180-degree turn and hadn’t switched back. Then they wondered: “Maybe we can get it to perform a 360-degree turn.”


How the molecular rotor works: double-bond isomerisation and thermal helix inversion (heat) alternate. Image: Ben Feringa group. Source: The Royal Swedish Academy of Sciences


Finally, the researchers managed a full rotation with two double-bond isomerisations and two helix inversions induced by heat (see graph above). On the one hand, ‘unidirectional rotation marks the most fundamental breakthrough’ in the search for molecular motors; on the other hand, the molecule was still too slow – it needed about one hour for a 360-degree turn. So the researchers set out to build much faster motors. About sixty different motor designs later, they reached an astounding speed of 10 million rotations per second. But in reality there are some restrictions: for instance, you often cannot get enough energy into these nanosystems to perform at top speed, and the surfaces on which the motors are supposed to perform limit their speed. So realistically, these tiny motors now rotate at about 4,000 cycles per second. Next, the researchers fitted four of the enhanced molecules onto axles and added a stator: a molecular four-wheel drive was put on the ‘road’, usually a metal surface.
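The alternating four-step cycle described above can be sketched as a toy state machine. This is purely illustrative: the step names follow the figure caption, but the 90-degree increments are schematic bookkeeping, not the molecule’s real geometry.

```python
# Toy state machine for a light-driven rotary motor: each full 360° turn
# is two photochemical double-bond isomerisations alternating with two
# thermal helix inversions (the ratchet steps that forbid back-rotation).
def run_motor(cycles):
    angle = 0
    steps = ["photochemical isomerisation",
             "thermal helix inversion"] * 2   # four steps per full turn
    log = []
    for _ in range(cycles):
        for step in steps:
            angle += 90                       # schematic quarter turn
            log.append((step, angle % 360))
    return angle, log

total, log = run_motor(cycles=1)
print(total)   # 360: one full, strictly unidirectional rotation
```

The point of the sketch is the ratchet logic: because a thermal inversion always follows each light-driven step, the rotor can only ever advance, never reverse.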

Today, several research groups around the world build nanocars. And although Feringa’s team received much recognition for their own nanocar, they’re exploring many other possible applications of molecular machines, for instance in medicine: imagine smart drugs that can be ‘switched on’ only at their target area, such as a tumour. These would be high-precision drugs with very few or even no side-effects, because other body cells would not be affected. In his Lindau lecture, as well as in his Nobel lecture in Stockholm in December 2016, Feringa gave two prominent examples: photo-controlled antibiotics and photo-controlled chemotherapeutics. Into one drug from each category, the Feringa group inserted a light-switch, meaning that the drugs only start working once they’re activated by a certain wavelength of light. The researchers are now working with near-infrared light, which penetrates deeply enough to reach even remote places inside the human body.




With photo-controlled antibiotics, the goal is to ‘train’ the molecules to find their target structures autonomously. Next, their activity would be switched on with infrared light. Now the drugs would work against a bacterial infection at the target point – no other body cells or bacteria would be affected, making antibiotic resistance less likely. And even if the drug leaves the body after treatment, contamination of ground or drinking water would be prevented by precisely engineered half-lives of the molecules: they would simply stop being active after a certain amount of time, rendering the build-up of antibiotic resistance outside the human body unlikely as well.

The same holds true for chemotherapeutics: only after a photo-controlled chemotherapeutic has reached a tumour would its activity be switched on, meaning all other body cells would be spared the often severe side-effects. In his Nobel lecture, Feringa described his dream for future cancer treatments: new imaging technologies like MRI would be linked to a specific laser. First, the patient receives an injection of a photo-controlled chemotherapeutic. Next, the MRI technology would detect a tiny tumour. Now the MRI feeds this information automatically to a laser that is calibrated to a specific wavelength that activates the drug. The result is “high temporal and local precision”.

Those are only two examples of the ‘endless opportunities’ of molecular machines, in Feringa’s words – and applications are not limited to pharmaceuticals. Feringa himself talks about self-healing car coatings or wall paint, also called ‘smart coatings’. With a growing world population and a scarcity of materials, smart coatings could make coatings last far longer, help to spare natural resources, or integrate information technology like sensors. Other experts envision self-healing infrastructure, for instance plastic water pipes that are able to repair their own leaks. Fraser Stoddart, Feringa’s Scottish-American co-recipient of the 2016 Nobel Prize, went in yet another research direction and now builds highly efficient data storage devices based on molecular machines.


Ben Feringa during his lecture at the 67th Lindau Nobel Laureate Meeting. Picture/Credit: Julia Nimke/Lindau Nobel Laureate Meetings


In October 2016, the Royal Swedish Academy of Sciences announced “the dawn of a new industrial revolution of the twenty-first century” based on molecular machines. Feringa himself often emphasises that he is conducting basic research, and he likes to point out that inventions like electric machines, airplanes or smartphones were all the results of basic research – and that they often needed years or decades to find widespread application. He estimates that in maybe fifty years, doctors will be able to use photo-controlled drugs as described in his Nobel lecture.


Imagine a World Without Electrical Sockets

Photo: Courtesy of Il Jeon

My research involves the development of new materials and their application in energy devices. There are three types of materials that I mainly focus on: carbon allotropes, transition metal dichalcogenides and organic surface modifiers. I produce and modify these materials to use them in photovoltaics, such as specific types of solar cells. As the paradigm of electronics shifts towards flexibility, low cost and environmental friendliness, I believe that replacing conventional materials with new materials that are flexible, cheaper and more ecological can keep energy devices abreast of this shift. Ultimately, we can expect to see devices that are fully composed of carbon allotropes, transition metal dichalcogenides and organic compounds. This will lead to a future of wearable energy technology in which people treat solar energy the way we treat Wi-Fi and Bluetooth today. Imagine charging your mobile phone from indoor lights – a world without electrical sockets. It goes without saying that this could help solve present-day energy and environmental issues.


Photo: Courtesy of Il Jeon

There is a good reason why I am working on this research topic. I have had various research experiences both in academia and industry. As Steve Jobs said in his Stanford commencement address, connecting the dots is the root of innovation that can lead to breakthroughs in science. Therefore, I wanted to connect the dots from my past career. I hold degrees in chemistry at undergraduate and graduate levels, which laid the foundation for the material studies that I am doing now. Then, work experience at a South Korean conglomerate, LG Display Co., Ltd., where I developed organic light-emitting diode (OLED) and quantum dot displays, sparked my interest in energy devices. This is because energy devices have a working mechanism similar to that of display devices, and I wanted to work on something that addresses societal issues directly.


Photo: Courtesy of Il Jeon

There are three key components to my research: growth, synthesis and fabrication. I grow carbon allotropes and transition metal dichalcogenides using chemical vapour deposition, a chemical process that forms high-quality materials at high temperatures in a vacuum, so that they do not catch fire. Once they are produced, I modify those materials using organic synthesis to add new functionalities; various fullerene derivatives and modified graphene are good examples of this. These materials are characterised and utilised in solar cell fabrication. The end goal is to improve the performance of energy devices using newly developed materials. I generally spend half of my week on material development and the other half on device fabrication. Some people say that I cannot catch two hares at the same time and that I should focus on either materials science or device engineering. However, I am a competitive person, so I believe you can catch more than two hares if you work just that much harder.

Big Data Analytics Deliver Materials Science Insights

Finding patterns and structure in big data of materials science remains challenging, so researchers are working on new ways to mine the data to uncover hidden relationships. Credit: Hamster3d/


Developing new materials can be a lengthy, difficult process and innovations in the field come through a combination of serendipity and methodical hard work. Researchers perform many rounds of synthesising new materials and testing their properties, using their chemical knowledge and intuition to relate a material’s structure to its function. The result is materials for tough body armour; thin, powerful batteries; or lightweight aircraft components, among many other applications.

To speed materials discovery, researchers are now asking computers to help. Algorithms similar to those that organise our email, photos and online banking can also be used to find patterns in chemical data that relate to a material’s structure and composition.

Walter Kohn, Nobel Laureate in Chemistry 1998. Photo: R. Schultes/Lindau Nobel Laureate Meetings

Traditional computer modelling of materials uses methods recognised with the Nobel Prize in Chemistry 1998. Walter Kohn and John Pople shared the prize that year for developing algorithms that modelled molecules using quantum mechanics, improving the accuracy of molecular structure and chemical reactivity calculations. The techniques that Kohn and Pople each developed revolutionised computational chemistry and have continued to be improved to give highly accurate results.

These methods typically work well to predict structural and electronic properties of crystalline metals and metal oxides. But these predictions do not always match measured properties of complex bulk materials and their surfaces under experimental conditions. Predicting properties of bulk materials and their surfaces using current quantum mechanical methods requires lengthy calculations using supercomputers.

To speed up these calculations, chemists are analysing public databases of atomic, chemical and physical properties to find combinations that predict materials properties. They use big-data analytics tools to search for meaningful patterns in the large amounts of data. Algorithms like this already influence our daily lives by filtering spam email, suggesting other items for online shoppers, detecting faces in digital photos, and identifying fraudulent credit card transactions. Although materials scientists have much less data than email providers or online stores, there is still enough publicly available data – about atomic properties such as electronegativity, atomic radius and bonding geometry, as well as the geometric and electronic structures of various materials – for the same analysis tools to be useful. Materials databases include Materials Project in the United States and the Novel Materials Discovery Laboratory in Europe, among others.

Computational materials discovery often involves making predictions for an entire class of materials, such as metals, metal oxides or semiconductors. However, a global prediction may not apply to certain subgroups of materials within that class.

Bryan Goldsmith, a Humboldt postdoctoral fellow at the Fritz Haber Institute of the Max Planck Society in Berlin and a young scientist attending the 67th Lindau Nobel Laureate Meeting, and his colleagues recently applied a data analytics tool called subgroup discovery to see how physical and chemical properties relate to the structure of gold nanoclusters containing varying numbers of atoms. Gold clusters are a model example of how materials properties change from the bulk to the nanoscale. Bulk gold is shiny, inert and yellow in colour. Gold nanoparticles, however, are red, catalytic and have dynamic structures.


The Novel Materials Discovery Laboratory, a European Center of Excellence established in the fall of 2015, has the world’s largest collection of computational materials science data.


Using molecular dynamics simulations, the researchers calculated 24,400 independent configurations of neutral, gas-phase gold clusters containing 5 to 14 atoms at temperatures from -173 to 541 °C (100 to 814 K). Next, they predicted the ionisation potential, electron affinity and van der Waals forces between atoms in a cluster, among other properties.

Then the researchers generated various mathematical combinations of the predicted chemical data to produce a large number of possible relationships between different subgroups of gold clusters. Finally, they used subgroup discovery to find the relationships that best predicted cluster structure and their electronic properties.

The algorithm rediscovered the known property that gold nanoclusters with an even number of atoms are semiconducting, whereas those with an odd number of atoms are metallic. It also revealed something new about the forces that stabilise nonplanar gold clusters: van der Waals forces, typically thought to stabilise interactions between molecules, contributed more to the stability of nonplanar clusters than planar clusters.
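The idea behind subgroup discovery can be illustrated with a toy example. Everything below is synthetic and hypothetical – the data merely echo the even/odd pattern reported above, and the quality measure is one simple choice among many:

```python
# Subgroup discovery, in miniature: search simple predicates on the
# descriptors for one under which the target property deviates most
# from its global behaviour. Synthetic data mimic the reported trend:
# even atom counts -> semiconducting (finite gap), odd -> metallic.
clusters = [{"n_atoms": n, "gap_eV": 0.0 if n % 2 else 1.5 + 0.1 * n}
            for n in range(5, 15)]

def mean(xs):
    return sum(xs) / len(xs)

global_mean = mean([c["gap_eV"] for c in clusters])

# Candidate subgroups defined by simple, human-readable predicates.
candidates = {
    "even n_atoms": lambda c: c["n_atoms"] % 2 == 0,
    "odd n_atoms":  lambda c: c["n_atoms"] % 2 == 1,
    "n_atoms > 9":  lambda c: c["n_atoms"] > 9,
}

def quality(pred):
    """Favour large subgroups whose mean gap sits far above average."""
    sel = [c["gap_eV"] for c in clusters if pred(c)]
    return (mean(sel) - global_mean) * len(sel)

best = max(candidates, key=lambda name: quality(candidates[name]))
print(best)  # -> even n_atoms: the search recovers the known rule
```

The appeal, as the article notes, is interpretability: the output is a plain rule on physical descriptors, not an opaque model.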


A computational prediction for a group of gold nanoclusters (global model) could miss patterns unique to nonplanar clusters (subgroup 1) or planar clusters (subgroup 2). Credit: Goldsmith et al. Uncovering structure-property relationships of materials by subgroup discovery. New J. Phys. 19 (2017) 013031 (CC BY 3.0)

By starting their data analytics with known properties, the researchers hope to develop predictive models that retain physical and chemical information that is easy for other scientists to interpret, Goldsmith says. “We believe that if you can find these simple equations, they can help guide you to deeper understanding, and hopefully lead to new chemistry and materials insights.”

With more powerful computers, larger databases and novel ways to use the data being developed, data analytics could become increasingly important to researchers synthesising new materials. A database of failed reactions could guide the direction of future experiments, and data analytics tools could speed the interpretation of spectra used to characterise molecules and materials. And in time, researchers hope to predict the outcome of a catalytic reaction or materials synthesis. “Data analytics should be an indispensable part of every chemist’s and materials scientist’s toolkit,” Goldsmith says.

Tomas Lindahl and the Surprising Instability of DNA

Today we know that each and every day, our DNA is damaged by UV light, free radicals or carcinogenic substances. And even without such external attacks, DNA can undergo many changes, for instance during replication. But in the 1960s, with the discovery of DNA’s double helix structure only one decade earlier, DNA was viewed as something inherently stable.


Tomas Lindahl during the Nobel Prize press conference in Stockholm in December 2015. Lindahl has worked in the UK for several decades and is emeritus director of Cancer Research UK at Clare Hall Laboratory in Hertfordshire. Photo: Holger Motzkau, CC BY-SA 3.0

In 1969, Tomas Lindahl set out to tackle a question that seemed so far-fetched at the time that he didn’t even apply for a grant. Instead, to study the stability or instability of DNA experimentally, he used money he had been awarded beforehand.

Already as a postdoctoral researcher in Princeton, Tomas Lindahl had found that transfer RNA, or tRNA, could be quite unstable under certain conditions. This finding ran against the prevailing belief that genetic molecules should be extremely stable. Admittedly, since RNA is usually single-stranded, some reduced stability would be expected. But still, Tomas Lindahl couldn’t get the question of the inherent stability or instability of DNA out of his mind.

In the US, he had been the first to describe the previously unknown enzymes DNA ligase and DNA exonuclease, both important for repairing DNA breaks. But at the time, “we did not have the techniques available to attempt to prove their roles in intracellular recombination events,” Tomas Lindahl writes in his autobiography.

Back in Stockholm and with his own small lab, he now began to look for signs of DNA decomposition in a neutral aqueous solution. He decided to start out with some pilot experiments, “and if the results did not seem promising – quietly bury the project,” Lindahl describes these early steps in his Nobel Lecture. But the results were indeed promising, and so he carried on with “a series of time-consuming experiments to attempt to quantify and characterise the very slow degradation of DNA solutions under physiological conditions”.

With the help of chromatography, he found that some base residues were lost from DNA. The remaining DNA bases had also changed: “the most important of these is the deamination of cytosine residues to uracil”. This change is described in the graph below, deamination meaning ‘loss of an amino group’.

When Lindahl started to quantify these observed changes, he found the startling number of thousands of DNA changes per day in any mammalian cell: a number that should have made the development of life on earth as we know it impossible. The compelling conclusion was that there were powerful DNA repair mechanisms at work round the clock.


Base excision repair


Step by step, Lindahl was able to describe the pathway of a repair mechanism that became known as ‘base excision repair’. Several enzymes need to work together to find, to excise and finally to replace a damaged nucleotide. Cytosine, one of the four building blocks of DNA, easily loses its amino group, as mentioned above; the result is a base called uracil. But uracil cannot bind with guanine, the other half of the GC base pair. Now an enzyme called a glycosylase detects this problem and excises the uracil. Next, the enzyme DNA polymerase fills the gap with cytosine, and finally the strand is sealed by DNA ligase. With this pathway, Lindahl’s research came full circle: he could now prove the role of the enzyme he had first described as a postdoc years earlier.
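As a toy illustration (a string model, nothing more), the pathway just described can be mimicked in a few lines of code: deamination turns a C into a U, and the repair steps restore the original strand.

```python
# Toy model of base excision repair on a DNA strand stored as a string.
# Deamination turns C into U; the repair pathway then runs:
#   1. a glycosylase finds and excises the uracil (it doesn't belong in DNA),
#   2. DNA polymerase fills the gap with the correct base (C, which pairs
#      with the G on the opposite strand),
#   3. DNA ligase seals the strand back into one continuous piece.
def deaminate(strand, position):
    assert strand[position] == "C", "deamination acts on cytosine"
    return strand[:position] + "U" + strand[position + 1:]

def base_excision_repair(strand):
    repaired = []
    for base in strand:
        if base == "U":           # glycosylase: flag the illegitimate base
            repaired.append("C")  # polymerase: restore cytosine
        else:
            repaired.append(base)
    return "".join(repaired)      # ligase: one continuous strand again

damaged = deaminate("ATGCGC", 3)
print(damaged)                        # ATGUGC
print(base_excision_repair(damaged))  # ATGCGC
```

The real pathway is of course enzymatic chemistry, not string substitution, but the division of labour between the three enzymes maps directly onto the three commented steps.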

In 2015, the Nobel Prize in Chemistry was awarded to Tomas Lindahl, Paul Modrich and Aziz Sancar ‘for mechanistic studies of DNA repair’. Aziz Sancar has described ‘nucleotide excision repair’, the mechanism that cells use to repair UV damage to DNA, and Paul Modrich has demonstrated how cells correct errors that occur when DNA is replicated during cell division. This repair mechanism is called ‘mismatch repair’.

This means that base excision repair is only one repair pathway among many, albeit an important one. And not all pathways have been discovered yet. Correspondingly, there are many different enzymes involved in the various repair pathways. And each and every enzyme is an interesting starting point for cancer drug research, because inhibiting one of these enzymes also means suppressing DNA repair. As Lindahl himself likes to point out: these repair pathways can be seen as a ‘double-edged sword’, because normal cells use them all the time to remain healthy, but cancer cells use them as well to stay alive and cancerous.


Angelina Jolie at the launch of the UK initiative on preventing sexual violence in conflict, in May 2012. One year later, she made public that she is the carrier of a BRCA mutation: BRCA genes are responsible for DNA repair, and their mutations can lead to a sharply increased cancer risk. Photo: Foreign and Commonwealth Office, Open Government Licence v1.0 (OGL)

As a result of this research, novel ‘targeted therapies’ now aim to affect the repair pathways that some cancer cells rely on, hopefully leaving healthy cells unaffected. One drug that is mentioned in the scientific background material of the Royal Swedish Academy of Sciences is the cancer drug Olaparib: it’s a PARP inhibitor, blocking the polymerase PARP (poly(ADP-ribose) polymerase), one of the many enzymes of DNA repair. It is approved for use against cancers in patients with BRCA1 or BRCA2 mutations.

Female carriers of these mutations are five times more likely to develop breast cancer and up to thirty times more likely to develop ovarian cancer. These mutations, which are more frequent in certain population groups such as Ashkenazi Jews, became widely known when Hollywood actress Angelina Jolie explained publicly that she was a BRCA1 carrier and that she had had a double mastectomy and ovariectomy to reduce her cancer risk. After she made this step public, there was a marked increase in women seeking tests for their BRCA status – an important step towards making an informed decision about medical procedures.

In his 2015 Nobel Lecture, Tomas Lindahl concluded that many more small molecules than are currently known can probably damage DNA, meaning “that there are more DNA repair enzymes waiting to be discovered”. And each and every one can be viewed as a new hope for cancer patients. Lindahl’s vision for the future is that cancer will become a disease of old age, like type 2 diabetes: you need to take some medication against it, but you can live with it and enjoy a good quality of life.

This summer, Tomas Lindahl will visit the Lindau Nobel Laureate Meetings for the first time. We’re looking forward to welcoming him in Lindau and to hearing his lecture on DNA repair.


The Helix Bridge is a pedestrian bridge in Singapore linking Marina Centre with Marina South. Its design is based on the double helix model of DNA. This can be best seen at night, when pairs of the coloured letters G and C, as well as A and T, are lit up in red and green. They represent cytosine, guanine, adenine and thymine, the four bases of DNA. Photo: joyt/


Boosting Photosynthesis to Meet Rising Food Demand

The issue of food is perhaps somewhat overlooked among the many challenges faced by mankind, but the truth is that world demand is steadily increasing and, by 2050, could be almost twice that of 2005. More alarmingly, based on the calculations of the Food and Agriculture Organization of the United Nations, if things continue as they are, we are not going to meet this increased demand. Will upgrading the basic chemical reaction underlying the growth of all plants allow us to meet the challenge?

Several causes underlie the expected need for more food. First and foremost, it is simply a consequence of there being more of us: by 2050 there may be over two billion more humans living on planet earth. A further reason is increasing living standards across the globe; as these rise, so too does the demand for better-quality food (and more of it). Further, the last decade has witnessed an explosion in the use of grain to make biofuel in developed countries, diverting it from its potential use as a foodstuff. So, how are we to ramp up food production and meet this shortfall? And further, how can we do it in a sustainable way that does not exacerbate already existing problems such as water shortage and climate change?


Although agricultural efficiency has increased significantly over the course of the last decades, evidence suggests that crop yields are now plateauing. Novel solutions are urgently needed to meet the world’s increasing demand for food. Photo: valio84sl/


The central strategies that have underpinned decades of increasing crop yield involve either increasing the amount of land used for agriculture, or maximizing the amount of food that can be grown on the land already in use. However, many experts in the field believe that conventional strategies based on efficient use of water and fertilizers, as well as other approaches, have now been exhausted. Indeed, the yields of some of the main staples of the human diet worldwide, such as rice and wheat, have plateaued in recent years. What are the remaining options? This question, among others, was debated by participants in a discussion at the 63rd Lindau Nobel Laureate Meeting in 2013.

Harnessing the power of transgenic technology, which means taking a gene responsible for a given trait from one organism and transferring it to a second species, could be a potential solution to the problem. However, this approach remains highly controversial, not least because common strategies include the engineering of genetically modified (GM) variants that harbour genes providing resistance to harsh environmental conditions, disease and, crucially, to antibiotics. A major concern related to this approach is that these genes might be easily transferred to other pest species, resulting in “super weeds” that cannot be eradicated. However, another distinct strategy has also taken shape in the last 20 years: one which seeks not to make plants hardier, but rather to boost the basic process underlying all plant growth.

The evolution of photosynthesis, the process by which plants use sunlight to convert carbon dioxide and water into sugar, was of crucial importance to the development of life on earth. Not surprisingly, several Nobel Prizes have been awarded to scientists whose work has helped to characterise the factors involved and to unravel the basic mechanism of the reaction. Most directly, the Nobel Prize in Chemistry 1988 was jointly awarded to Hartmut Michel, Robert Huber and Johann Deisenhofer “for the determination of the three-dimensional structure of a photosynthetic reaction centre”. Now, this chemical reaction, which has done so much to promote life on this planet and the evolution of ever more complex lifeforms, may hold the key to ensuring that our increasing need for food is met. It has been proposed that if photosynthesis, and thus the production of sugar, can be fine-tuned and upgraded, this will lead to bigger plants, bigger yields and, ultimately, more food.


Johann Deisenhofer during his 2016 Lindau Lecture. We are looking forward to his lecture at #LiNo17! Photo: Christian Flemming/LNLM


It may come as a surprise, but the chemical reaction that has been evolving over the last 3.5 billion years and which underlies all life on earth is actually relatively inefficient in several aspects. When we talk about photosynthetic efficiency, what we are actually talking about is the percentage of light energy that is finally converted into chemical energy in the form of glucose. The inefficiency is already pre-set by the fact that only a fraction of the spectrum of sunlight is actually used for photosynthesis. Chlorophyll, the green pigment that gives leaves their colour, is most efficient at capturing the light that we can see – the red and blue light, which makes up less than half of the total light that reaches our planet from the sun. The inefficiency is compounded by the fact that plants cannot use all of the energy contained in the sunlight that they absorb. To avoid damage to the components of the photosynthetic reaction and to the organism as a whole, the excess energy is converted to heat and dissipated from the plant. This important protective mechanism is termed non-photochemical quenching (more of which later).
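A back-of-envelope sketch can make the compounding losses concrete. The factors below are rough, illustrative values only (the text supplies just the ‘less than half of sunlight is usable’ figure; the rest are ballpark assumptions):

```python
# Why photosynthetic efficiency is low: the overall efficiency is a
# product of successive loss factors. All numbers here are rough
# illustrations, not precise measurements.
factors = {
    "light within the usable (red/blue) part of the spectrum": 0.45,
    "fraction of that light actually absorbed by the leaf":    0.85,
    "photon energy retained by the photochemistry":            0.45,
    "carbon fixation and respiration losses":                  0.30,
}

efficiency = 1.0
for step, fraction in factors.items():
    efficiency *= fraction
    print(f"after {step}: {efficiency:.1%}")
# the product lands in the low single digits, consistent with the
# few-percent efficiencies quoted for real plants
```

The exact numbers matter less than the structure: because the factors multiply, even four modest losses leave only a few percent of the incoming solar energy stored as sugar.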

The strategies aimed at increasing photosynthetic efficiency take many forms. In fact, one of the critical enzymes required for photosynthesis, RuBisCo (short for ribulose-1,5-bisphosphate carboxylase/oxygenase) is itself rather slow and inefficient. Thus, several laboratories worldwide are attempting to engineer versions of the protein that harbour more efficient enzymatic activity. The concentration of carbon dioxide is another important factor in photosynthesis. Thus, further strategies to optimise photosynthesis include simply increasing the local concentration of carbon dioxide in the immediate vicinity of plants, or improving the ability of plants to take it up and use it.

Until recently, however, the feasibility of all of these strategies was based mainly on preliminary evidence and in some cases the ideas remained just that, ideas. The overall concept of enhancing photosynthesis, the process ultimately underlying all plant growth, is undeniably attractive – but will it really work?     


Decades of research have gone into understanding the mechanism of photosynthesis. These insights are now being utilised by researchers who, by boosting and upgrading the efficiency of the reaction, hope to increase crop yields and ensure the future of our food supply. Credit: alvarez/



Then, in the autumn of last year, a study arrived that goes some distance towards demonstrating that the promise of boosting photosynthesis can be translated into tangible gains in crop yield, in this case in tobacco. The authors, a team led by Krishna K. Niyogi and Stephen P. Long and comprising researchers based in the US, UK and Poland, took a rigorously systematic approach and started by simulating the chemical process of photosynthesis in its entirety in order to identify steps where intervention might optimise the reaction. The scientists soon realised that the fact that plants absorb much more energy from the sun than they can use may hold the answer. In response to harsh direct sunlight, plants activate non-photochemical quenching immediately to protect themselves and their photosynthetic enzymes. However, when it comes to switching the mechanism off again, they are not as efficient: it can take as long as 30 minutes, time during which photosynthetic activity is dampened. The authors calculated that under fluctuating light, overall yield could be decreased by as much as 30 percent. To tackle this problem, and to promote faster resumption of photosynthesis upon exposure to sunlight, the team genetically engineered tobacco to express higher amounts of three key proteins that speed up the resumption of photosynthesis under fluctuating light. In unprecedented results, the genetically modified plants gave yields that were on average 15 percent higher than unmodified tobacco. The researchers are now applying their approach to rice and other important food crops.
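The intuition behind the yield loss can be sketched in a few lines of code. The model below is a deliberately crude toy, not the authors' simulation: light alternates between sun and shade, non-photochemical quenching (NPQ) engages instantly in the sun but relaxes only slowly in the shade, and photosynthesis runs on whatever absorbed light is not quenched. The parameter values are illustrative assumptions.

```python
import math

def total_assimilation(npq_relax_minutes, minutes=240, period=30):
    """Toy canopy model: sun and shade alternate every `period` minutes.
    NPQ engages at once in full sun and decays exponentially in shade with
    the given relaxation time; while engaged, it sheds absorbed light as heat."""
    npq = 0.0
    total = 0.0
    for t in range(minutes):
        sunny = (t // period) % 2 == 0          # alternating sun and shade
        light = 1.0 if sunny else 0.3           # relative light level
        if sunny:
            npq = 0.6                           # protective quenching switches on
        else:
            npq *= math.exp(-1.0 / npq_relax_minutes)  # slow relaxation in shade
        total += light * (1.0 - npq)            # photosynthesis on the remainder
    return total

slow = total_assimilation(30.0)   # wild type: ~30-minute relaxation
fast = total_assimilation(5.0)    # engineered: faster relaxation
print(f"gain from faster NPQ relaxation: {(fast - slow) / slow:.1%}")
```

Even this caricature reproduces the qualitative point: the faster NPQ relaxes after a shade transition, the less photosynthesis is needlessly suppressed, and total assimilation rises.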

Scientists working at Rothamsted Research in the UK, the world’s oldest agricultural research institute, also recently claimed to have made significant breakthroughs in boosting photosynthesis to improve crop yield. They have taken a different approach to improving the efficiency of photosynthesis: they increased the expression of the enzyme sedoheptulose-1,7-bisphosphatase (SBPase). Their initial trials under greenhouse conditions showed significant increases in wheat yield and researchers are now testing whether these will also be observed in wheat grown in the field.

Decades of research, much of it crowned with Nobel Prizes, have gone into understanding the players and mechanisms involved in photosynthesis. With good reason: understanding how photosynthesis works may secure the future of our food supply. The recent successes with translating basic insights into tangible increases in crop yield suggest that we may soon begin to harvest the fruits of this approach. Watch this space.

Kenneth Arrow and the Golden Age of Economic Theory

The recent death of Kenneth Arrow (who was born on 23 August 1921) represents both the loss of one of the transcendent minds in the history of economics and the closing of a golden age of economic theory. That age – which includes such historically important figures as Arrow’s fellow Nobel Laureates Paul Samuelson and Gary Becker – represented a development and expansion of formal economic theory that brought unprecedented precision to the logical foundations of social science.

The economic approach to individual decision-making is derived from the interplay of preferences, constraints, and beliefs. This approach, when combined with the conceptualisation of observed outcomes for an economic environment as equilibria, allows for clear understanding of how markets create and adjudicate interdependences in these decisions. Arrow and the larger body of scholars in this golden age both developed the logical foundations of ideas whose origins go back to Adam Smith and the beginnings of economics, and extended the domain of economics to contexts far beyond markets of conventional supply and demand.

Arrow’s contributions span virtually all of economic theory, but they can be approximated as falling into five distinct areas.



Nobel Laureate Kenneth Arrow visited the Lindau Nobel Laureate Meetings in 2004. Photo: Peter Badge/Lindau Nobel Laureate Meetings

The impossibility theorem

Arrow’s most famous scholarly achievement is his celebrated ‘impossibility theorem’, which lies at the heart of understanding how a government, or other collective decision-making process, can employ individual preferences as inputs from which decisions are determined. Dictatorships can do this with ease: a leader’s preferences determine action. Other cases are far more complex.

Intuitively, democracies aspire to assign equal weight to voters and produce policies that are the ‘will of the people’. But how should differences in preferred outcomes be evaluated? Should everyone have one vote per election, regardless of the intensity of their feelings about the issues at stake? Should voting be first past the post, or should proportional representation be followed? There are many ways to produce collective choices.

We have powerful intuitions as to how collective choice procedures should function. One is that if everyone prefers one policy to another, the latter should never be preferred. Another desideratum is that voting procedures should always produce coherent decisions. But as the Marquis de Condorcet showed in 1785, incoherence can occur in a majority voting system where choices are sequentially considered pairwise: specifically, that it is possible that candidate A defeats B, B defeats C, but C defeats A, thereby failing to produce a coherent notion of a winner.
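Condorcet's paradox is concrete enough to demonstrate in a few lines. The sketch below tallies pairwise majority votes over three voters' rankings (the classic textbook profile, not any specific historical election) and exhibits the cycle:

```python
from itertools import combinations

# Three voters whose ranked preferences produce Condorcet's paradox.
ballots = [
    ["A", "B", "C"],   # voter 1: A > B > C
    ["B", "C", "A"],   # voter 2: B > C > A
    ["C", "A", "B"],   # voter 3: C > A > B
]

def pairwise_winner(x, y):
    """Majority vote between two candidates: whoever more voters rank higher."""
    x_votes = sum(1 for b in ballots if b.index(x) < b.index(y))
    return x if x_votes > len(ballots) / 2 else y

results = {(x, y): pairwise_winner(x, y) for x, y in combinations("ABC", 2)}
print(results)  # A beats B, B beats C, yet C beats A: a cycle, no coherent winner
```

Each pairwise contest is decided 2 to 1, yet the contests together name no overall winner, which is exactly the incoherence Condorcet identified.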

Condorcet’s remarkable insight leads to the question of whether alternative voting schemes can avoid such outcomes. Arrow’s even more remarkable (1951a) analysis, which was his doctoral dissertation, asked whether there can be any procedure that respects the preferences of all and at the same time always produces coherent decisions. The impossibility theorem proved that the answer is no. Any procedure, no matter how clever, runs the risk of producing cycles in voting or other bizarre outcomes. In other words, it may not be the case that there is always a coherent voice of the people.

Why is this so important? Arrow’s results in no way call into question the intrinsic value of democratic processes. Rather, they demonstrate that no procedure can aggregate individual preferences in a way that meets all objectives in all cases. A perfect voting system or collective action scheme, in this sense, does not exist and any institutional design must recognise this.


General equilibrium theory

Arrow’s second great set of contributions revolves around general equilibrium theory, the high altar of mathematical economics. The economic theory of market economies is based on the idea that individual buyers and sellers face incentives to consume more or less, based on the prices of goods and services. Prices, in turn, adjust in order to balance supply and demand in individual markets.

But what happens in the economy as a whole? Does there exist a set of prices that equates supply and demand across markets? This is the question of whether there is an equilibrium for the whole economy, and if an equilibrium exists, is it socially desirable?
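One intuition for how such an equilibrium might be reached is Walrasian tâtonnement: raise a price when demand exceeds supply and lower it otherwise. The sketch below applies this to a standard two-good, two-consumer Cobb-Douglas exchange economy (a textbook example, not Arrow and Debreu's fixed-point proof, which establishes existence under far more general conditions); the endowments and preferences are illustrative assumptions.

```python
# Two consumers with Cobb-Douglas utility u = sqrt(x * y); consumer A owns one
# unit of good 1, consumer B one unit of good 2. With good 2 as numeraire
# (p2 = 1), the excess demand for good 1 works out to z1 = 0.5 * (p2/p1 - 1),
# which is zero exactly when p1 = p2.
def excess_demand_good1(p1, p2=1.0):
    return 0.5 * (p2 / p1) + 0.5 - 1.0   # demand minus the one unit supplied

p1 = 2.0                                  # start away from equilibrium
for _ in range(200):
    p1 += 0.5 * excess_demand_good1(p1)   # raise price if demand exceeds supply

print(f"equilibrium relative price: {p1:.4f}")
```

The price gropes its way to p1 = p2, where both markets clear simultaneously. In general economies tâtonnement need not converge at all, which is part of why the existence question required the heavy mathematical machinery Arrow and Debreu brought to it.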

Adam Smith’s ‘invisible hand’ argues that the decentralised decisions of buyers and sellers create collective economic outcomes that are desirable. Is this logically possible? In the 1870s, the great economist Léon Walras recognised that the mathematical representation of individual choices and market interactions would allow such questions to be answered. Walras set an agenda for mathematical economics whose resolution waited until Arrow and the 1950s.

Together with his sometime co-author and fellow Nobel Laureate Gerard Debreu, as well as Lionel McKenzie, Arrow identified conditions for the nature of individual consumers and producers such that an economy can be in general equilibrium; in other words, it is logically possible for supply to equal demand in every market. Arrow and Debreu (1954) gave an affirmative answer to the existence question by describing when equilibrium is possible.

Further, Arrow (1951b) provided extensions and proofs of the celebrated welfare theorems of economics, which show that the equilibrium levels of supply and demand are optimal in the sense that it is not possible for everyone to be made better off by different levels of production and consumption, and that certain types of transfers can move an economy from one efficient outcome to another. These theorems make the invisible hand idea precise and again show when Smith’s profound insights can hold.

But to show that something is logically possible does not mean that it holds in reality. To prove that an equilibrium exists and that it is efficient is to say that under a set of assumptions about how individuals make choices and how markets work, these things are true. Arrow’s monumental achievement was to reveal the logic underlying ideas about market equilibrium and the invisible hand.

Understanding the conditions under which a market economy is efficient also reveals when it is not. By implication, this understanding also reveals how deviations from certain idealisations of how markets function determine the extent to which equilibria are not socially optimal.

As such, Arrow’s work in economic theory moves beyond tired dichotomies of whether markets are good or bad to understanding what they collectively can do. It is really no surprise that Arrow had a longstanding sympathy with socialism in his early years and remained an ardent liberal all his life. His 1978 essay on socialism is fascinating in illustrating how profoundly humane ideals interacted with economic science to produce Arrow’s political commitments.


Decision-making under uncertainty

Arrow’s third great achievement was systematic exploration of the effects of uncertainty on economic decision-making and aggregate outcomes. This work took two stages. First, Arrow developed much of the theory of decision-making under uncertainty and the implications of uncertainty for understanding market equilibria. Part of this involved determining how a rational agent would employ information in an uncertain world to make choices.

Other work characterised the nature of risk at both the individual level and the market level. At the individual level, where his thinking is well surveyed in Arrow (1971), he is most famous for developing (1965), simultaneously with John W. Pratt, the Arrow-Pratt measures of risk aversion, which provide precise characterisation of how uncertainty in consumption affects an individual’s utility. These measures are conventionally used in empirical research.
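The Arrow-Pratt measures have simple closed forms for common utility functions. As a sketch (with an illustrative risk-aversion parameter, and finite differences standing in for calculus): for CRRA utility u(c) = c^(1-γ)/(1-γ), absolute risk aversion is A(c) = -u''(c)/u'(c) = γ/c and relative risk aversion R(c) = c·A(c) = γ, a constant.

```python
# Arrow-Pratt risk aversion for CRRA utility u(c) = c**(1-g) / (1-g).
# Analytically: absolute A(c) = -u''(c)/u'(c) = g/c, relative R(c) = g.
# Here we verify that with central finite differences.

def u(c, g=2.0):
    return c ** (1 - g) / (1 - g)

def absolute_risk_aversion(c, g=2.0, h=1e-4):
    u1 = (u(c + h, g) - u(c - h, g)) / (2 * h)               # u'(c)
    u2 = (u(c + h, g) - 2 * u(c, g) + u(c - h, g)) / h ** 2  # u''(c)
    return -u2 / u1

c = 3.0
print(absolute_risk_aversion(c))       # close to g / c = 2/3
print(c * absolute_risk_aversion(c))   # relative risk aversion, close to g = 2
```

The constancy of R(c) for CRRA utility is exactly the kind of precise, testable characterisation of attitudes towards risk that makes these measures so convenient in empirical work.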

In terms of market equilibria, Arrow (1964) developed the idea that financial assets can be thought of as ‘contingent commodities’ – that is, objects whose values are contingent on the resolution of uncertainty. From this perspective, one can understand the diversification of risk as the purchase of a set of contingent commodities that together yield the same return regardless of how uncertainty unfolds.

Commodities that only pay off for one of the possible ways that uncertainty unfolds, but not otherwise, are now called Arrow securities. Arrow securities are essential to the modern theory of finance because actual financial assets can be reinterpreted as representing combinations of Arrow securities. Hence the prices of Arrow securities may be used to determine how all assets are priced. This formalisation allowed the Arrow-Debreu model of general equilibrium to generalise naturally to account for uncertainty. These ideas underpin the modern theory of finance.
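The pricing logic is mechanical once the state prices are known: any asset's price is the state-price-weighted sum of its payoffs. A minimal sketch with two states and made-up state prices:

```python
import numpy as np

# Two states of the world. An Arrow security pays 1 in one state, 0 in the
# other. The state prices below are illustrative assumptions.
state_prices = np.array([0.6, 0.35])     # prices of the two Arrow securities

# Any asset is a bundle of Arrow securities, so its price is the
# state-price-weighted sum of its state-contingent payoffs.
riskless_bond = np.array([1.0, 1.0])     # pays 1 whichever state occurs
risky_asset   = np.array([2.0, 0.5])

print(state_prices @ riskless_bond)      # ~0.95, implying a riskless rate of ~5.3%
print(state_prices @ risky_asset)        # ~1.375
```

Note what falls out for free: the bond price is just the sum of the state prices, so state prices jointly encode both the riskless interest rate and the market's valuation of each state.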

The extension of general equilibrium theory to uncertainty is a perfect example of Arrow as scientist: rigorous identification of limitations to economic theory combined with constructive solutions via mathematical formalism.


Imperfect information

Arrow’s later work focused on the implications of uncertainty and imperfect information. The ideas here are so rich as to define a fourth area of his contributions.

One of his most celebrated papers is “Uncertainty and the Welfare Economics of Medical Care” (1963). This study identified why markets for medical care are virtually certain to fail to fulfil the conditions under which market equilibrium is socially desirable. Relative to current debates, Arrow recognised the problem of moral hazard in the behaviour of doctors and patients. All contemporary scholarly analyses of health insurance have been influenced by Arrow’s arguments, which in this case were made entirely without mathematics.

Arrow’s focus on the implications of imperfect information led him to help pioneer (along with Edmund Phelps, another Nobel Laureate) the theory of statistical discrimination (1973). Statistical discrimination asks how the socioeconomic outcomes of a group are affected by stereotypes. In a pool of job applicants, suppose an employer cannot observe individual ability but can observe ethnicity. An employer attempting to forecast performance will then rationally ascribe the average performance of an ethnic group to each of its members.

If one group starts with higher average educational quality, this will mean its members will receive job offers more often. In turn, this distorts the incentives of individuals to pursue higher quality education and can create a vicious cycle as disadvantaged groups rationally do not make educational investments.
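The mechanism is stark enough to state in code. In this deliberately minimal sketch (group names, means and the hiring threshold are all illustrative assumptions), the employer sees only group membership and ascribes each applicant the group mean, so individual ability plays no role in the outcome:

```python
# Toy statistical discrimination: the employer observes group membership but
# not individual ability, and rationally ascribes the group's average quality
# to every member. All numbers are illustrative assumptions.
group_mean_quality = {"group_x": 0.60, "group_y": 0.50}
hiring_threshold = 0.55

offers = {g: mean >= hiring_threshold for g, mean in group_mean_quality.items()}
print(offers)
# Every member of the lower-mean group is screened out, however able they are
# individually, which in turn weakens that group's incentive to invest in
# education and can entrench the initial gap.
```

The point of the sketch is that no animus appears anywhere in the code: the disparate outcome follows purely from rational forecasting under imperfect information, which is precisely Arrow's and Phelps's insight.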

Statistical discrimination is the leading competitor to models of taste-based discrimination, in which African Americans, for example, might be disadvantaged by animus. The model represented an enormous intellectual leap as it identified how discrimination can occur even when bigotry is absent.


Economics of knowledge

Finally, Arrow made two fundamental contributions to the economics of knowledge. First, he provided a clear economic justification for government sponsorship of science. Arrow’s (1962a) argument derives from the public good nature of advances in knowledge. The benefits of such advances cannot be fully captured by the developer of an idea, and so science is a clear example of a case where the conditions under which efficiency of market equilibrium do not hold.

Second, Arrow (1962b) developed a formal theory of how knowledge grows because of economic activity. In his famous model of the economics of learning-by-doing, he argued that the growth of technical knowledge should not be understood as the product of random insights and inspirations by scientists and others, but as a consequence of the environment produced by economic activity. This means that economic growth can beget growth. Arrow’s vision preceded by two decades the emergence of modern endogenous growth theory, pioneered by Paul Romer and Nobel Laureate Robert Lucas.
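The core feedback loop of learning-by-doing can be caricatured in a few lines: productivity depends on accumulated experience, experience accumulates through production, so output today raises productivity tomorrow. This is a sketch in the spirit of Arrow's model, not his formal specification, and the parameter values are illustrative.

```python
# Learning-by-doing sketch: productivity = experience ** theta, and each
# period's output adds to the stock of experience, so growth begets growth.
theta = 0.3           # assumed strength of the learning spillover
experience = 1.0      # initial stock of accumulated experience
output_path = []
for year in range(10):
    productivity = experience ** theta
    output = productivity * 1.0    # fixed labour input of one unit
    experience += output           # today's production becomes tomorrow's knowledge
    output_path.append(output)

print(output_path[0], output_path[-1])   # output rises period after period
```

Even with constant labour input, output grows steadily because the knowledge externality keeps compounding, which is the seed of the endogenous growth literature that followed two decades later.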

The prescience of Arrow’s work on knowledge is but one example of his living legacy. Alfred North Whitehead once said that European philosophy was a series of footnotes to Plato. Saying the same of Arrow and economics would be an injustice to the achievements of modern researchers (and one that would have mortified Arrow). What is true is that his body of writings has proven to be visionary in many, many directions – so that the most profound research of today has antecedents in Arrovian thought.


Challenging and broadening economic theory

In the latter part of his career, Arrow was deeply involved in the Santa Fe Institute, the leading centre for complexity research. He also (with me) directed the John D. MacArthur Research Network on Social Interactions and Economic Outcomes. He was a regular participant in debates about the environment and climate change and a long-time fellow of the Beijer Institute for Ecological Economics.

What links these activities? In each case, the research endeavour involved challenges to the assumptions and methods of the same neoclassical economic theory that Arrow had constructed.

This is perhaps the ultimate testament to Arrow’s genius. Having created so much of what constitutes modern quantitative social science, he was always profoundly aware of the limitations of the edifice. With no sacrifice of the logical rigour that places his contributions in the realm of permanent changes in knowledge, Arrow never ceased critically evaluating and challenging economic theory. His remarkable 1974 book, The Limits of Organization, exemplifies the Arrow vision of an eventual social science in which disciplinary boundaries between economics, sociology, and psychology dissolve in the quest to achieve a better match with complex reality.

Arrow’s astonishing capacity for critical engagement with economics lives on in the four living Nobel Laureates whom he advised. Eric Maskin, Roger Myerson, Alvin Roth, and Michael Spence have each changed economic theory because of their challenges to standard assumptions and their profound insights follow the Arrow example of using impeccable formal logic as the mainspring for progress in social science.

The fifth Nobel Laureate he supervised, John Harsanyi, was one of the fathers of game theory, which represents an edifice of microeconomics distinct from market analysis and general equilibrium theory, although there are many deep connections. Arrow’s legacy as teacher and adviser, of course, goes far beyond his most celebrated students to a virtual legion of economists.

Limitless curiosity and passion for knowledge meant that, like Faust, Arrow strove without relenting; but unlike Faust, Arrow needed no redemption. His intellectual integrity was pristine and unparalleled at every stage of his life. His character was as admirable and admired as his intellect. Arrow’s personal and scholarly example continues to inspire, nurture, and challenge.


This piece first appeared at Vox.



Arrow, K J (1951a), Social Choice and Individual Values, New York: Wiley.
Arrow, K J (1951b), “An Extension of the Basic Theorems of Classical Welfare Economics”, in J Neyman (ed.) Proceedings of the Second Berkeley Symposium on Mathematical Statistics and Probability, Berkeley and Los Angeles: University of California Press.
Arrow, K J (1962a), “Economic Welfare and the Allocation of Resources for Inventions”, in R R Nelson (ed.), The Rate and Direction of Inventive Activity: Economic and Social Factors, Princeton: Princeton University Press.
Arrow, K J (1962b), “The Economic Implications of Learning by Doing”, Review of Economic Studies 29: 155-73.
Arrow, K J (1963), “Uncertainty and the Welfare Economics of Medical Care”, American Economic Review 53, 941-73.
Arrow, K J (1964), “The Role of Securities in the Optimal Allocation of Risk-Bearing”, Review of Economic Studies 31: 91-96.
Arrow, K J (1965), “Aspects of the Theory of Risk-Bearing”, Helsinki: Yrjö Jahnsson Lectures.
Arrow, K J (1971), Essays in the Theory of Risk-Bearing, Chicago: Markham; Amsterdam and London: North-Holland.
Arrow, K J (1973), “The Theory of Discrimination”, in O Ashenfelter and A Rees (eds), Discrimination in Labor Markets, Princeton University Press.
Arrow, K J (1974), The Limits of Organization, New York: WW Norton & Company.
Arrow, K J (1978), “A Cautious Case for Socialism”, Dissent, Fall.
Arrow, K J and G Debreu (1954), “Existence of Equilibrium for a Competitive Economy”, Econometrica 22: 265-90.

Michael Levitt: a Pioneer of Computational Biology

Chemical reactions happen unbelievably fast. In fractions of a millisecond, electrons jump from one atom to another. Classical chemistry cannot keep up with this: neither can every step of a reaction be unveiled experimentally, nor can calculations based on classical physics simulate these fast and complex processes. Due to the pioneering research of Michael Levitt, together with Arieh Warshel and their Harvard colleague Martin Karplus, it is now possible to have classical Newtonian physics work side by side with simulations based on quantum physics.

Previously chemists had to choose. If they used classical physics, their calculations were simple and they could model large molecules. The downside was: chemical reactions couldn’t be simulated, especially not in the 1970s when this ground-breaking research was conducted. To simulate these reactions, chemists had to use models from quantum physics, meaning that every electron, every atomic nucleus etc. was accounted for. But these calculations require enormous computing power, meaning that their application was limited to really small molecules.

Nowadays, chemists apply the demanding quantum physics calculations only where they are needed: at the core of the reaction, also called the reaction centre. The rest of the molecule in question is modelled using classical physics. Given that computers in the early 1970s were vastly slower than today’s, Levitt and Warshel reduced the calculations even further: they merged several atoms in their model. In his autobiography, Levitt comments: “How we dared to leave out 90 percent of the atoms is an interesting story.” Obviously, they reached “the right level of simplicity” with this approach: neither too complicated, nor so simple that the model would be meaningless.
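The division of labour just described can be sketched schematically: a quantum term for the small reaction centre, a cheap classical term for everything else, plus a coupling term between the two regions. The energy functions below are placeholders with made-up costs, purely to show the structure of the partition, not real chemistry or any actual QM/MM package.

```python
# Schematic of the multiscale (QM/MM) energy partition pioneered by Levitt,
# Warshel and Karplus: quantum treatment for the reaction centre, a classical
# force field for the rest, and a coupling term between the regions.
# The energy functions are illustrative stand-ins, not real chemistry.

def e_quantum(core_atoms):
    return 10.0 * len(core_atoms)        # stand-in for an expensive QM calculation

def e_classical(atoms):
    return 1.0 * len(atoms)              # stand-in for a cheap force-field sum

def e_coupling(core_atoms, rest_atoms):
    return 0.1 * len(core_atoms) * len(rest_atoms)  # interaction between regions

molecule = list(range(1000))                        # 1000 atoms in total
core, rest = molecule[:10], molecule[10:]           # only 10 get quantum treatment

total_energy = e_quantum(core) + e_classical(rest) + e_coupling(core, rest)
print(total_energy)
```

The payoff is in the sizes: the expensive quantum term touches 10 atoms instead of 1000, which is exactly why the hybrid scheme made reaction simulations feasible on 1970s hardware.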

But how did a shy young boy from South Africa become a world-class biophysicist? Of course, most of his success is due to his great intellectual talent. But his fierce perseverance also helped, as did many lucky coincidences. Let me give you three examples of the latter.



Michael Levitt during his talk at the Inscription Ceremony in the park surrounding the Museum of Natural History in New York City, when his name was added to the stone pillar where all the names of American Nobel Laureates are inscribed. Photo: Consulate General of Sweden in New York City, 2014, CC BY-SA 2.0


First of all, he had an aunt and uncle in London who were both established scientists. His aunt Tikvah Alper found that the infectious agent in scrapie, a degenerative sheep and goat disease, did not contain nucleic acid. This finding was important in understanding the mechanisms of all forms of transmissible spongiform encephalopathy, like Creutzfeldt-Jakob disease in humans. Max Sterne, her husband and Levitt’s uncle, developed an effective and safe vaccine against anthrax in South Africa that is still used today. When young Michael Levitt visited them in London in late 1963, he couldn’t help but become interested in life science. He later became one of the pioneers of computational biology; he first learned to program in Fortran during an internship in Berkeley that his aunt organised for him a few years later.

In 1963, Michael Levitt was only sixteen and had already studied several months at Pretoria University. He spent the first few months in London “glued to [his] uncle and aunt’s TV set watching the Winter Olympics,” because he had never seen snow, and there had been no television in South Africa. But the BBC series “The Thread of Life” by Nobel Laureate John Kendrew, who had been awarded the Nobel Prize in Chemistry only one year earlier, made an even larger impression. For young Levitt, this was “a remarkable introduction to molecular biology”, because it had become clear just a few years earlier “that life was highly structured in space and time, just like a clock, but a billion times smaller and infinitely more complicated.” Already at this young age, he became fascinated by the role modern physics could play in elucidating biological processes.

In order to be admitted to a good British university, he had to pass A-level exams; his South African matriculation exam wasn’t sufficient. He chose to study at King’s College London, where the tradition of biophysics was strong. After his first degree, he wanted to pursue a PhD at the Laboratory of Molecular Biology (LMB) in Cambridge. So he wrote to John Kendrew – only to get rejected. Michael Levitt persisted, asking to join LMB a year later, again without a clear result. But Levitt didn’t give up: he put on his Bar Mitzvah suit and waited outside the offices of John Kendrew and Max Perutz, also a Nobel Laureate of 1962. The latter came out first, talked to Levitt and promised to consider his request. Finally, he was accepted as a doctoral student, to start a year later. But instead of the world trip that Levitt had imagined, he was sent to the Weizmann Institute in Israel to learn from Shneior Lifson the force field method, an important theory behind the computer modelling of large molecules (not to be confused with the force field of classical physics).



The transition from “being a mere mortal to a Nobel Prize winner” has many aspects, according to Levitt, including the following: “It is not easy when people start listening to all the nonsense you talk.” Photo: Peter Badge/Lindau Nobel Laureate Meetings


Being sent to Israel, after persevering in Cambridge, was the second lucky coincidence. As Levitt himself writes, his first year in Israel was “the real watershed” of his life: in only ten months, he laid the foundations both for a successful career in science and for a happy family life. Together with Arieh Warshel, another student and later co-recipient of the 2013 Nobel Prize, he wrote the computer program that used the force field method to calculate properties of molecules. And Michael Levitt met his wife Rina only weeks after arriving in Israel, and they got married before he left again for Cambridge in August 1968 to finally pursue his PhD. A biologist by training, Rina Levitt later became an accomplished multimedia artist.

The third lucky coincidence happened in the mid-1980s, many years, numerous ground-breaking findings and publications later (and after moving back to Israel, then back to Cambridge, etc.): Rina and Michael Levitt were attending a private cocktail party in Cambridge, Massachusetts, when Nobel Laureate Roger Kornberg called and heard that they were considering leaving Israel. Kornberg immediately suggested that Michael should come to Stanford – where he has been ever since 1987. “My dominant memory of coming to Stanford was how easy everything was. It seemed as if we had grown up on Jupiter and then moved to the Earth’s gravity.” Levitt founded his first research group and his first company, and his eldest son attended Berkeley. A few years later, his wife and sons moved back to Israel so that all three boys could complete their military service. Levitt himself was “mainly in the air between Israel and Stanford” during this time. His wife moved back to Stanford, but a few years later left for Israel again to spend more time with their first grandchild.

“Just when I thought things could not get any better, I was woken at 2:16 in the morning of 9 October 2013,” he remembers. Levitt had said to himself many times that “No one should expect the Nobel Prize,” so the call from Stockholm came as a great surprise.

We sincerely hope that Michael Levitt will be able to attend a Lindau Nobel Laureate Meeting in the coming years.


New Super Tool for Cell Biology

Researchers from Stefan Hell’s department at the Max Planck Institute for Biophysical Chemistry in Göttingen have achieved yet another breakthrough in light microscopy: With their new MINFLUX microscope they can separate molecules that are only a few nanometres apart, meaning its resolution is 100 times higher than conventional light microscopy, and about 20 times higher than super-resolution light microscopy.

The 2014 Nobel Prize in Chemistry was dedicated to the breaking of the optical diffraction limit. For more than a century, half the wavelength of light – about 200 nanometres – was considered the absolute limit for light microscopes. When Stefan Hell was studying physics in Heidelberg, this limit was still taught – but Hell couldn’t accept it. By developing stimulated emission depletion (STED) microscopy over the following years, he became the first researcher to successfully venture beyond this limit in the 1990s, both theoretically and experimentally. Based on these breakthroughs he received the 2014 Nobel Prize in Chemistry, together with the American physicists Eric Betzig and William E. Moerner, “for the development of super-resolved fluorescence microscopy”.

But how does STED work, and how does it lay the foundations for MINFLUX? Hell explains STED in his own words: “If I cannot resolve two points because they are too close together and they are both emitting fluorescence, I need to darken one of them – and suddenly, you’re able to see the other point. If I make sure that all molecules are dark, except the one or the few that I’m interested in, then I can finally resolve this one, or these few.” The darkening is achieved by optical interference: Since a laser beam is used to excite fluorescence in molecules, it’s also possible to darken some molecules in the sample with a second laser beam whose wave properties cancel out the first beam. This second laser beam is doughnut-shaped, leaving only a small centre spot still emitting fluorescence. Thus STED functions by depleting a circular region of the sample, while leaving a small focal point (see graph below).
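The narrowing effect can be illustrated numerically. In the common simplified picture, the effective emission profile is the excitation focus multiplied by an exponential suppression that grows with the local depletion intensity; the doughnut is dark only at its centre, so fluorescence survives only there. The sketch below uses a one-dimensional toy model with illustrative parameters, not the optics of a real STED instrument:

```python
import math

# Toy 1D STED model: effective emission = Gaussian excitation focus times an
# exponential suppression proportional to the doughnut (depletion) intensity,
# which is dark at the centre and grows quadratically away from it.
# All parameters (sigma, depletion strength) are illustrative.

def effective_psf(x, sigma=100.0, depletion=50.0):
    excitation = math.exp(-x ** 2 / (2 * sigma ** 2))   # diffraction-limited focus
    doughnut = depletion * (x / sigma) ** 2             # zero at centre, bright around
    return excitation * math.exp(-doughnut)             # fluorescence survives the dark spot

def fwhm(psf, xs):
    """Full width at half maximum on a sampled grid."""
    peak = psf(0.0)
    above = [x for x in xs if psf(x) >= peak / 2]
    return max(above) - min(above)

xs = [x * 0.5 for x in range(-600, 601)]                # -300 nm to +300 nm grid
confocal = fwhm(lambda x: math.exp(-x ** 2 / (2 * 100.0 ** 2)), xs)
sted = fwhm(effective_psf, xs)
print(confocal, sted)   # the effective STED spot is many times narrower
```

The stronger the depletion beam, the narrower the surviving central spot, which is why STED resolution improves with depletion laser power rather than being fixed by the wavelength.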


How a STED microscope works: with a laser excitation focus (left), a doughnut-shaped de-excitation focus (centre) and remaining fluorescence (right). Credit: Marcel Lauterbach, CC BY-SA 3.0


With the MINFLUX microscope, Hell’s research group combined the advantages of STED microscopy with the principles of PALM (photo-activated localization microscopy) developed by Eric Betzig. In PALM, also called PALM/STORM, single molecules are also excited by switching them on and off, but these molecules light up randomly; they are not targeted. Betzig and his colleagues would use a very short laser pulse, thus exciting only a few molecules which could then be localised. As these molecules bleach out, the researchers turn the laser on again to see the next batch and so forth. Finally, the entire sample has been activated, seen and plotted. The result is a graphic resolution far beyond the diffraction limit.

With this approach, PALM already operates at the single-molecule level, but the exact location of each molecule isn’t easily determined. With STED, the location of an excited molecule is well known, but the laser beam isn’t able to confine its emission to a single molecule. MINFLUX combines both: individual molecules are randomly switched on and off, and at the same time their exact positions are determined with a doughnut-shaped laser beam, familiar from STED. But in contrast to STED, this doughnut beam excites the fluorescence instead of darkening it. So if a molecule lies on the ring, it will glow; if it sits exactly at the dark centre, it will not glow – but its exact position will be known.

Dr. Francisco Balzarotti, a researcher in Hell’s department, developed an algorithm to locate this centre position fast and with high precision. “With this algorithm, it was possible to exploit the potential of the doughnut excitation beam,” the first lead author of the Science paper explains. Klaus Gwosch, a PhD student in Hell’s group and third lead author, obtained the molecular-resolution images: “It was an incredible feeling as we, for the first time, were able to distinguish details with MINFLUX on the scale of a few nanometres”, the young physicist describes the team’s reaction to the potential of studying life at the molecular level.
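The published algorithm is considerably more sophisticated, but the geometric idea behind it can be illustrated with a toy one-dimensional sketch (hypothetical, noiseless, and purely for illustration): near its zero, the doughnut intensity grows roughly quadratically with distance, so photon counts collected with the zero placed at two known positions pin down a molecule lying between them.

```python
import math

def locate_1d(x_probe1, x_probe2, n1, n2):
    """Toy 1-D MINFLUX-style estimator (illustrative only, not the
    published algorithm).  Near the doughnut's zero the intensity grows
    quadratically, so the expected photon count with the zero at probe
    position x_p is proportional to (x0 - x_p)**2.  For a molecule
    between the two probe positions, the square roots of the counts
    divide the interval at the molecule's position."""
    r1, r2 = math.sqrt(n1), math.sqrt(n2)
    return (x_probe1 * r2 + x_probe2 * r1) / (r1 + r2)

# Simulate: molecule at x0 = 3 nm, doughnut zeros placed at 0 nm and 10 nm.
x0 = 3.0
n1 = (x0 - 0.0) ** 2    # counts scale with squared distance from the zero
n2 = (x0 - 10.0) ** 2
print(locate_1d(0.0, 10.0, n1, n2))  # → 3.0
```

The real method works in two dimensions, places the zero at several positions, and uses a maximum-likelihood estimate over noisy photon counts; the payoff is the same in both cases: the closer the zero sits to the molecule, the fewer photons are needed for a given precision.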



Stefan Hell and the three lead authors of the MINFLUX Science publication, Dr. Francisco Balzarotti, Yvan Eilers and Klaus Gwosch (from left) with their ground-breaking microscope. Photo: Irene Böttcher-Gajewski, Max Planck Institute for Biophysical Chemistry

In addition to the high optical resolution, this new microscope has another advantage over both STED and PALM: high temporal resolution. Stefan Hell: “MINFLUX is much faster in comparison. Since it works with a doughnut laser beam, it requires a much lower light signal, i.e. fewer fluorescence photons, per molecule than PALM/STORM to attain the ultimate resolution.” MINFLUX stands for “MINimal emission FLUXes”, alluding to this reduced light requirement. Already with STED, researchers had been able to record real-time videos from the inside of living cells. But now it is possible to trace the movement of molecules in a cell with a temporal resolution that is 100 times better.

Yvan Eilers, another PhD student involved and the second lead author of the Science paper, was responsible for ‘filming’ protein activity within a living cell. He filmed the movements of ribosome subunits inside a living E. coli bacterium. “The past has shown that major resolution enhancements have led to new insights into the biology of cells, as STED and PALM have demonstrated,” Eilers elaborates. “Now everybody here is optimistic that this will hold true for MINFLUX as well.” The researchers in Hell’s group are convinced that in the future, even extremely fast changes in living cells, for instance the folding of proteins, will be investigated with the help of their new microscope.

I asked Klaus Gwosch whether other research groups had already been in touch to acquire a MINFLUX microscope. “The Science publication has of course reached a large international audience,” he replied. “Currently, the department of NanoBiophotonics in Göttingen has the only MINFLUX microscope, but we expect other research groups to adopt and implement our approach.” His boss Stefan Hell agrees: “I am convinced that MINFLUX microscopes have the potential to become one of the most fundamental tools of cell biology. This could revolutionise our knowledge of the molecular processes occurring in living cells.”

Both Yvan Eilers and Klaus Gwosch will participate in the 67th Lindau Nobel Laureate Meeting this summer as young scientists, together with their doctoral adviser Stefan Hell. William Moerner, the third recipient of the 2014 Nobel Prize in Chemistry, will also attend and talk about super-resolution microscopy. We are looking forward to an interesting and inspiring week in Lindau!




MINFLUX microscopy optically separates molecules that are only a few nanometers apart. A schematic graph of the target molecules is shown on the left. PALM microscopy (right) only delivers a diffuse image of these molecules, whereas the position of each molecule can easily be discerned with MINFLUX (centre). Image: Klaus Gwosch, Max Planck Institute for Biophysical Chemistry

Reaching for the Stars with Solar Sails and Lasers

The discovery of seven terrestrial planets orbiting the same star was announced in February 2017: astronomers and the public were very excited because at only forty light years away this system is practically on our cosmic doorstep. Some planets of the system called TRAPPIST-1, after the Belgian telescope in Chile that originally found it, could even harbour liquid water – and thus might be inhabited by life as we know it. One of the authors of this study is Didier Queloz, who was also part of the very first team to discover an extrasolar planet, or exoplanet, in 1995.

Back in the mid-1990s, many astronomers expected to find numerous copies of our solar system. Instead, they found strange new worlds, for instance large terrestrial planets called super-Earths that often orbit their stars closely. Even Proxima Centauri, at 4.24 light years our closest neighbour, probably has at least one terrestrial planet orbiting within its so-called habitable zone, as researchers announced in August 2016. And Proxima Centauri is a red dwarf star, just like TRAPPIST-1.

“But how can we ever reach these distant worlds?” scientists and science fiction authors alike have asked for more than a century. One remarkable thing about this topic is that the line between science and fiction constantly seems to blur. The main problem with extreme long-distance space travel: today’s spacecraft are too slow to reach other stars – missions would span several generations. Currently, that is not feasible with astronauts. But even with robotic space probes, it is hard to imagine a mission that lasts more than a hundred years: Which instruments could function this long in space? Where would they get their energy, especially between stars where light is scarce? And who is supposed to pay for such an overlong mission?



This is what the future of space travel could look like: A 20-meter solar sail system, developed by ATK Space Systems, during testing at NASA Glenn Research Center’s Plum Brook facility in Sandusky, Ohio. The sails are made of an aluminized plastic membrane. Their storage container has the size of a suitcase. Lower right-hand corner: a technician in dark clothing. Photo: NASA/Marshall Space Flight Center, 2005, public domain


This is why space travel needs novel propulsion systems that can reach a fraction of the speed of light, preferably without requiring fuel. Solar sails are one technology that promises all this: ultra-thin reflective foils pushed towards distant worlds by the radiation pressure of sunlight or of other light sources. The fact that electromagnetic waves exert a faint pressure on reflective surfaces has been known since the 19th century. But beware of a common misunderstanding: radiation pressure is not the same as the ‘solar wind’, the constant stream of charged particles emanating from the sun. Solar sails are not propelled by solar wind.
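How faint is this pressure? A perfectly reflecting sail of area A facing light of intensity I feels a force F = 2·I·A/c. A quick back-of-the-envelope sketch, borrowing the 20-metre sail size from the NASA test article pictured above (values illustrative):

```python
C = 299_792_458      # speed of light in m/s
intensity = 1361.0   # solar constant at Earth's distance, W/m^2
area = 20.0 * 20.0   # m^2, a 20-metre square sail

# Perfect reflection doubles the momentum transfer: F = 2 * I * A / c
force_newton = 2 * intensity * area / C
print(f"{force_newton * 1e3:.2f} mN")  # → 3.63 mN
```

A few millinewtons on a 400 m² sail: tiny, but applied continuously and without any onboard fuel, which is exactly why sails become interesting for very long missions.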

JAXA, the Japanese space agency, has the most practical experience with solar sails in space. In 2010, its solar sail probe IKAROS flew into space, piggy-back with a Japanese Venus probe. IKAROS was able to perform all its tasks: unfold its solar sails, produce energy with its solar cells and even measure the acceleration due to radiation pressure. The six-month mission was a huge success. NASA has been experimenting with the unfolding of solar sails on the ground (see photo above).



Nobel Laureate Saul Perlmutter is part of the advisory team of Breakthrough Starshot. He received the 2011 Nobel Prize in Physics for the discovery that the expansion of the universe is accelerating, together with Brian P. Schmidt and Adam Riess. This picture shows him during his lecture at the 65th Lindau Nobel Laureate Meeting in 2015. Photo: Christian Flemming/LNLM

In the spring of 2016, an initiative of scientists and entrepreneurs announced a private research programme to boost the development of solar sail technology. Initiated and financed by Russian-American investor Yuri Milner, two of the main proponents of ‘Breakthrough Starshot’ are the British physicist Stephen Hawking and Facebook founder Mark Zuckerberg; Nobel Laureate Saul Perlmutter is among the advisory team. In the previous year, 2015, Milner had founded the Breakthrough Initiatives. This larger framework has two main foci: communicating with potential aliens, and space travel to distant stars, i.e. Starshot.

Currently, Breakthrough Starshot has a starting capital of 100 million US dollars. The researchers are investigating the possibility of constructing a fleet of ultra-lightweight space probes that would be launched conventionally with rockets, but then accelerated in space towards the Alpha Centauri system by focused lasers. The experts hope that the sailing space probes can travel at 20 percent of the speed of light, meaning they would reach their destination in about 20 years. Any signal back to Earth would then travel for more than another four years.
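Those mission numbers follow from simple kinematics; a rough sketch, neglecting the acceleration phase and using the commonly quoted distance to the Alpha Centauri system:

```python
distance_ly = 4.37    # distance to the Alpha Centauri system in light years
cruise_speed = 0.20   # cruise speed as a fraction of the speed of light

travel_years = distance_ly / cruise_speed  # time in transit (about 22 years)
signal_years = distance_ly                 # return signal moves at light speed

print(travel_years)                 # cruise time in years
print(travel_years + signal_years)  # years until first data reaches Earth
```

So even under these optimistic assumptions, roughly a quarter of a century passes between launch and the first data arriving back on Earth.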

“Since we won’t be working with one gigantic laser, we’ll have to combine millions of kilowatt lasers,” explains Philip Lubin, professor at the University of California, Santa Barbara, and technical pioneer of the Starshot project. “This is a huge technical challenge, because all these lasers need to be meticulously synchronised.” The researchers are planning to build a laser array on the order of 100 gigawatts. Just for comparison: a standard nuclear power plant produces about one gigawatt. The first Starshot laser arrays will be built on Earth, but it would make sense to position them in space: on a satellite, on an orbital space station, or even on the moon.
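The scale of that array follows directly from the quoted figures; the per-module power here is only an assumption for illustration, since the actual module design is an open question:

```python
array_power_watt = 100e9  # target array on the order of 100 gigawatts
module_power_watt = 1e3   # one kilowatt-class laser module (assumed)

modules_needed = int(array_power_watt / module_power_watt)
print(f"{modules_needed:,}")  # → 100,000,000
```

On the order of a hundred million kilowatt-class modules, all phase-synchronised: this is why Lubin calls the synchronisation, rather than the raw power, the huge technical challenge.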

Even if all of this sounds a bit futuristic and far-fetched: just imagine if an array like that were in place – various space missions could be accomplished, not only a one-time trip to Proxima Centauri. For instance, spaceships carrying humans and cargo could reach Mars within hours. Philip Lubin originally started his research on large focused lasers with the goal of redirecting asteroids coming too close to Earth. So this future laser array could also be used to catch asteroids, study them and even exploit their natural resources.

How to put the brakes on space probes flying at 20 percent of the speed of light is currently being discussed by experts from the Max Planck Institute for Solar System Research in Göttingen, Germany. This poses a real problem because any probe at this speed could sail by the entire Alpha Centauri System in less than a second. The result: hardly any data would be collected. The experts from Göttingen propose to use the gravitational fields of all three stars, Alpha Centauri A and B plus Proxima Centauri, to slow down the probes. They also suggest that the probes could first use a fly-by manoeuvre near our sun to gain momentum for their long haul, before setting out to distant systems, thus requiring smaller laser arrays. The researchers are already discussing their ideas with the Starshot team.



Artist’s impression of the Japanese space probe IKAROS near Venus. In 2010, it completed all its tasks successfully: automatic unfolding of large solar sail, power generation by thin-film solar cells on sail, verification of acceleration by solar light pressure and navigation by solar sail. Image: Andrzej Mirecki CC BY-SA 3.0


Philip Lubin doesn’t expect to witness the results of Breakthrough Starshot, as he said in a German interview: “I’m now 63 years old. I might witness the launch of an interstellar mission in about thirty years’ time. But it will take at least 24 years for any data to be transmitted back from a distant star system.” One reason why Lubin is optimistic about his mission is that for the last 25 years, the power of lasers has doubled about every 20 months while costs have fallen simultaneously. Lubin calls this development the ‘photonics revolution’.

And the enormous progress in material science, especially nanotechnology, needs to be taken into account, as well as the constant miniaturisation of technical devices, especially computer chips. Incidentally, the Starshot team calls its space probes ‘starchips’ because they’re effectively flying computer chips.

This mission faces many more challenges than conventional missions, and it will probably see delays, skyrocketing costs and unprecedented problems. Still, this science-fiction-like scenario might, in one way or another, come true someday.


Revealing the Secrets of Membrane Proteins

2.3 billion years ago, “probably the most significant extinction event in history” took place. This is how Hartmut Michel starts his 2015 lecture in Lindau, describing the Great Oxygenation Event, or GOE. What happened so early in the history of life? Ancestors of today’s cyanobacteria developed photosynthesis, a process that uses energy from sunlight to turn water and carbon dioxide into carbohydrates. Today, photosynthesis is considered “the most important chemical reaction on earth”: it provides food for humans and animals and releases the oxygen they breathe – and, millions of years later, it provides fossil fuel in the form of oil, coal and natural gas, as Michel likes to point out.

But for the earliest single-cell organisms billions of years ago, free oxygen was a toxin. If they couldn’t somehow deal with large amounts of it in the atmosphere, as well as with the resulting ‘reactive oxygen species’ (ROS), they died. One very effective way to ‘deal’ with free oxygen is the production of ‘oxygen reductases’: proteins that reduce oxygen to water and simultaneously conserve the energy released by this chemical reaction. For more than ten years, Hartmut Michel has studied different oxygen reductases at the Max Planck Institute of Biophysics in Frankfurt, where he became director in 1987. One year later, he was awarded the 1988 Nobel Prize in Chemistry “for the determination of the three-dimensional structure of a photosynthetic reaction centre”, together with Johann Deisenhofer and Robert Huber. More about photosynthesis in a minute.


The electron transport chain in the mitochondrial intermembrane space. Cytochrome c oxidase is part of Complex IV. Graph: T-Fork, based on graph by LadyofHats, both public domain


In recent years, Michel and his research group have mainly studied two types of oxygen reductases: the so-called superfamily of ‘heme-copper oxidases’, and the ‘cytochrome bd oxidase’. All of these oxidases are located in membranes and are thus called ‘membrane integrated terminal oxidases’. A famous example from the superfamily is cytochrome c oxidase, the last enzyme in the respiratory electron transport chain, located in the mitochondrial membrane (see graph). It receives one electron from each of four cytochrome c molecules and transfers them to molecular oxygen, converting it into two molecules of water. It also helps to pump protons across the membrane, which the ATP synthase needs to make ATP: “the general energy currency of life”, as Michel explained in his 2015 Lindau lecture. Did you know that your body produces an astounding 70 kg of ATP every day to fuel its many processes, including breathing, digestion and maintaining body heat?
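The scale of this turnover is easiest to appreciate with a rough calculation. The ~50 g standing pool used below is a commonly cited estimate, not a figure from the lecture:

```python
atp_pool_gram = 50.0          # assumed ATP present in the body at any moment
daily_turnover_gram = 70_000  # 70 kg of ATP produced per day, as quoted above

# Each ATP molecule must therefore be recycled (ADP -> ATP) over and over:
cycles_per_day = daily_turnover_gram / atp_pool_gram
print(cycles_per_day)  # → 1400.0
```

In other words, the body doesn’t stockpile ATP; under these assumptions every molecule in the pool is regenerated on the order of a thousand times a day, which is why enzymes like ATP synthase and cytochrome c oxidase have to run continuously.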


Hartmut Michel during his 2014 lecture at the 64th Lindau Nobel Laureate Meeting. Photo: Christian Flemming/LNLM


Interestingly, many oxygen reductases seem to have developed before the GOE. If this holds true – what were their functions? This is a ‘paradox’ that researchers haven’t solved yet. Another astounding result of Michel’s research is the fact that the two forms of oxygen reductases that he studies have many similarities, despite their structural differences: for instance, they both transport four electrons simultaneously, thus preventing the formation of ROS. “So obviously, the same mechanism was invented twice by Mother Nature,” Michel concludes in his lecture.

The photosynthetic reaction centre is a membrane protein as well – the very first membrane protein whose structure could be elucidated. When Michel studied biochemistry in Tübingen and Würzburg, textbooks stated as an irrevocable fact that membrane proteins could not be crystallized. Since x-ray crystallography was, and still is, the best way to reveal the molecular structure of proteins, neither their structure nor their function could be determined without crystallization. Incidentally, many Nobel Prizes have been awarded over the last 100 years for developments in x-ray crystallography.

But Hartmut Michel didn’t accept this scientific consensus. One major obstacle in crystallizing membrane proteins is that they come embedded in lipid membranes: their membrane-facing surfaces are hydrophobic, so they cannot simply be dissolved in aqueous solution. To solve this problem, detergents were needed, but they tend to form large micelles that can obscure the protein within. Finally, Michel found a fitting detergent, heptane-1,2,3-triol, which forms smaller molecule clusters. Now he had to decide on a membrane protein: he eventually chose to work with the purple bacterium Rhodopseudomonas viridis, whose name means “a red pseudo cell that is green”. These bacteria are capable of photosynthesis, like plants, and their reaction centre could be isolated.


Determining protein structures with x-ray crystallography is a very elaborate process: first, the protein needs to be crystallized, which is very difficult with membrane proteins. Next, x-rays produce a diffraction pattern that is transformed into an electron density map with the help of advanced mathematics. Finally, the protein structure is derived from this map, again mathematically. Graph: Thomas Splettstoesser, CC BY-SA 3.0

Johann Deisenhofer and Robert Huber provided the mathematics required to elucidate the reaction centre’s atomic structure. The researchers first published their results in 1985 and received the Nobel Prize in Chemistry for this finding only three years later. In the early 1980s, it took Michel about four months to collect an entire data set (see graph above). Nowadays, one set can be collected within seconds. Since their first publication, the atomic structures of more than 600 membrane proteins have been determined. Only about 50 of these are human membrane proteins – but there are several thousand in total! So there’s still a lot to be done.

Why is it so important to understand more about human membrane proteins? About 80 percent of all current drugs affect membrane proteins, and more than 50 percent of all drugs target them directly. These proteins play a crucial role in infections, both viral and bacterial, as well as in many forms of cancer. That’s why Hartmut Michel concluded his 2016 Lindau lecture: “Most diseases are caused by a malfunction, understimulation or overstimulation of a certain membrane protein.” Consequently, understanding human membrane proteins could dramatically help cure disease.

Hartmut Michel is a committed supporter of the Lindau Nobel Laureate Meetings: he visited them twenty times, seven videos of his lectures are available here, and he’s also a member of the meetings’ Council. We’re looking forward to welcoming him in June 2017 at the 67th Meeting dedicated to chemistry.