Resistant Bacteria vs. Antibiotics: A Fiercely Fought Battle

Antibiotics are an integral part of today’s medicine, and not only to treat strep throat or an ear infection – they also play a huge role in routine operations like appendectomies or caesarean sections, and they are indispensable as co-treatment for many chemotherapies.

If you take an antibiotic today, it has most probably been developed and approved in the last century. And since “bacteria want to live, and they are cleverer than us,” as Nobel laureate Ada Yonath succinctly puts it, many pathogens have become resistant to these common drugs. In September 2017, the World Health Organization (WHO) published an urgent appeal to increase funding for research into new antibiotics: not enough new drugs are in the ‘pipeline’ to combat the growing problem of multi-resistant strains. Currently, an estimated 700,000 patients die from infections with these strains every year – and this death toll might rise.

The WHO and other experts are especially concerned about multi-resistant tuberculosis, which causes about 250,000 deaths per year; less than half of all patients receive the necessary treatment, which can take up to 20 months. The problem is that disrupted treatment inevitably leads to more resistance. Another very worrisome development is the emergence of multi-resistant Neisseria strains that cause the STD gonorrhoea. Neisseria gonorrhoeae is a gram-negative bacterium, meaning that its surface is not coloured by Gram staining. This resilient surface is also the reason why gonorrhoea infections are hard to treat in the first place, even without resistances. This year alone, there have been several outbreaks of this multi-resistant variant around the world.


Antibiotic resistance tests: the bacteria in the culture on the left are sensitive to all seven antibiotics contained in the small white paper discs. The bacteria on the right are resistant to four of these seven antibiotics. Photo: Dr Graham Beards, 2011, CC BY-SA 4.0



This brings us to another problem: resistant bugs travel fast. No matter where they develop, with modern travel they can spread around the world within days. The WHO has also published a list of 12 pathogens that pose the greatest risk. This list includes Neisseria as well as the well-known and much-feared ‘hospital bug’ methicillin-resistant Staphylococcus aureus, or MRSA.


Imaging technologies help to develop new drugs

Relief from this dire situation might come from unexpected sources, like the technology honoured by the Nobel Prize in Chemistry 2017: cryo-electron microscopy, or cryo-EM. With the help of this new method, researchers can ‘see’ “proteins that confer resistance to chemotherapy and antibiotics”. The method was difficult to develop and leaned heavily on experience from X-ray crystallography and classic electron microscopy.

Often in research, being able to ‘see’ something is the first step of understanding its function, hence the strong interest in imaging technology in the life sciences: if a researcher can ‘see’ the workings of a resistance-inducing protein, he or she can start working on strategies to inhibit this process. Cryo-EM is especially good at depicting surface proteins, i.e., the location where infections or gene transfers usually start.

At the same time, optical microscopy is moving ahead as well, now able to ‘watch’ proteins being coded in living cells. The Nobel Prize in Chemistry 2014 was dedicated to breaking the optical diffraction limit: Stefan Hell developed STED microscopy, American physicist Eric Betzig invented PALM microscopy, and both were awarded the Nobel Prize, together with William E. Moerner, “for the development of super-resolved fluorescence microscopy”. Shortly after receiving the most prestigious science award, Stefan Hell combined STED and PALM microscopy to develop the MINFLUX microscope: the very technology that can show proteins being coded. Together, all these methods amount to a ‘resolution revolution’ that may contribute to the development of new classes of antibiotics.



Nobel laureate Ada Yonath during a discussion with young scientists at the 2016 Lindau Nobel Laureate Meeting. Yonath has been studying bacterial ribosomes for many years. Photo: LNLM/Christian Flemming

Nobel Laureate Ada Yonath, who was awarded the 2009 Nobel Prize in Chemistry “for studies of the structure and function of the ribosome“ using X-ray crystallography, is currently researching species-specific antibiotics. Her starting point is that many antibiotics target bacterial ribosomes, “the universal cellular machines that translate the genetic code into proteins.” First, her team studied the inhibition of ribosome activity in eubacteria. Next, she extended her studies to ribosomes from multi-resistant pathogens like MRSA. Her goal is to design species-specific drugs, meaning drugs specific to a certain pathogen. These would minimise the harm done to the human microbiome by today’s antibiotics, resulting in a more efficient cure and a lower risk of antibiotic resistance, because fewer bacteria are affected.


Finding new drugs in unexpected places

Another attack strategy is to look for new antibiotic agents in places that never seemed very promising. For example, in 2010 the Leibniz Institute for Natural Product Research and Infection Biology in Jena (Germany) published a new antibiotic agent found in the soil bacterium Clostridium cellulolyticum. It belongs to the group of anaerobic bacteria, a group that has long been neglected in the search for antibiotics. “Our research shows how the potential of a huge group of organisms has simply been overlooked in the past,” says Christian Hertweck, head of Biomolecular Chemistry. Just recently, scientists at Imperial College London and the London School of Hygiene and Tropical Medicine treated resistant gonorrhoea bacteria with Closthioamide, the agent from Jena. They found that even small quantities were highly effective in the Petri dish; clinical trials will follow.

Yet another research strategy is to make antibiotics more ‘resistant’ to resistance formation. For instance, it took 60 years for bacteria to become resistant to vancomycin. Now, researchers at The Scripps Research Institute (TSRI) have successfully tested an improved version of vancomycin on vancomycin-resistant Enterococci, which are on the WHO list of the most dangerous pathogens. This improved drug attacks bacteria from three different sides. The study was led by Dale Boger, co-chair of TSRI’s Department of Chemistry, who said the discovery made the new version of vancomycin the first antibiotic with three independent ‘mechanisms of action’ to kill bacteria. “This increases the durability of this antibiotic,” he said. “Organisms just can’t simultaneously work to find a way around three independent mechanisms of action. Even if they found a solution to one of those, the organisms would still be killed by the other two.”


Drug resistance can ‘jump’ between pathogens

Unfortunately, researchers and bacteria are not the only combatants, and this fiercely fought battle is not confined to clearly marked battlegrounds. Increasingly, multi-resistant bacteria can be found in our food, mostly due to the use of antibiotics in animal farming, and even in our natural environment. One troubling example is Colistin, an antibiotic from the 1950s that had never been widely used in humans due to toxic side-effects; in recent years, however, it has been rediscovered as a last-resort antibiotic against multi-resistant bugs. Since it is an old drug, it is also inexpensive and widely used – on pig farms in China.

As expected, Colistin-resistant bacteria developed in pigs; this was first discovered and published in 2015. But what makes this resistance perilous is the fact that the relevant gene is plasmid-mediated, meaning it can spread easily from one bacterium to another, possibly even from one species to another. In 2015, this resistance gene, called mcr-1, was also found in pork in Chinese supermarkets and in a few samples from hospital patients. Only 18 months later, 25 percent of hospital patients in certain areas of China tested positive for bacteria with this gene: resistance is spreading at unprecedented speed.

Another highly disturbing example is the large quantities of modern antibiotics and antimycotics found in the sewage from pharmaceutical production in India. In warm water, many bacteria find ideal conditions not only to live, but also to adapt to these novel antibiotics by quickly becoming resistant. Travellers returning from some developing countries are already considered a potential health threat, because many of them are unwitting carriers of multi-resistant pathogens.

Since the discovery of Penicillin by Nobel Laureate Alexander Fleming in 1928, the battle between bacteria and antibiotics has been fierce and ongoing. This battle is fought in laboratories, hospitals and doctors’ offices all over the world, with researchers proving about as determined and creative as their opponents.

But resistance-breeding grounds like Chinese pig farms or sewage pipes from pharmaceutical companies present yet another battleground and call for a strategy that needs to be innovative as well as multifaceted. Only last week, a United Nations ad-hoc group met in Berlin to discuss these challenges. To sum it up: most of us do not live next to Indian sewer pipes, but the resistant bacteria bred there may reach us all.


Sign by the US Centers for Disease Control and Prevention (CDC) showing how antibiotic resistance occurs: you use antibiotics and you lose them. Large-scale pollution with resistant bacteria is not included in this graph. Image: Centers for Disease Control and Prevention, 2013, Public Domain


Cool Microscope Technology – Nobel Prize in Chemistry 2017

Being able to see something often precedes understanding its function. In the case of molecules and atoms, this requires advanced methods. Visualising biomolecules is crucial both for the basic understanding of the chemistry of life and for the design of pharmaceuticals. Thanks to the ground-breaking work of Jacques Dubochet, Joachim Frank and Richard Henderson on the development of cryo-electron microscopy (cryo-EM), researchers can now freeze biomolecules mid-movement and image cellular processes they have never previously seen.

Two powerful imaging methods existed before cryo-EM: X-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy. These methods have enabled the structural analysis of thousands of biomolecules, but both suffer from fundamental limitations. NMR only works for relatively small proteins in solution, and X-ray crystallography requires that the molecules form well-organised crystals. The resulting images are like black-and-white portraits from early cameras – their rigid pose reveals very little about the protein’s dynamics.


The 2017 Nobel Laureates in Chemistry: Jacques Dubochet, Joachim Frank, and Richard Henderson (from left). Illustrations: Niklas Elmehed. Copyright: Nobel Media AB 2017



Richard Henderson succeeded in using an electron microscope to generate a three-dimensional image of a protein at atomic resolution. This breakthrough proved the technology’s potential. Initially, Henderson used the older method, X-ray crystallography, to image proteins, but setbacks arose when he attempted to crystallise a protein that was naturally embedded in the membrane surrounding the cell. Membrane proteins are difficult to handle because they tend to clump into a useless mass once they are removed from their natural environment – the membrane. The first membrane protein Henderson worked with was difficult to produce in adequate amounts; the second failed to crystallise. After years of disappointment, he turned to the only available alternative: the electron microscope.

When the electron microscope was invented in the 1930s, scientists thought that it was only suitable for studying dead matter. The intense electron beam necessary for obtaining high resolution images incinerates biological material and, if the beam is weakened, the images lose contrast. In addition, electron microscopy requires a vacuum, a condition in which biomolecules deteriorate because the surrounding water evaporates. When biomolecules dry out, they collapse and lose their natural structure, rendering the images useless.


Bacteriorhodopsin is a purple protein that is embedded in the membrane of a photosynthesising organism, where it captures the energy from the sun’s rays. Instead of removing the sensitive protein from the membrane, as Richard Henderson had previously tried to do, he and his colleagues took the complete purple membrane and put it under the electron microscope. In this way, the protein retained its structure because it remained membrane-bound. To prevent the sample’s surface from drying out in the vacuum, they covered it with a glucose solution.

The harsh electron beam was a major problem, but the researchers made use of the way in which bacteriorhodopsin molecules are packed in the organism’s membrane. Instead of blasting it with a full dose of electrons, they used a weaker beam. The image’s contrast was poor, and they could not see the individual molecules, but they were able to make use of the fact that the proteins were regularly packed and oriented in the same direction. When all the proteins diffracted the electron beams in an almost identical manner, they could calculate a more detailed image based on the diffraction pattern – they used a similar mathematical approach to that used in X-ray crystallography.

To get the sharpest images, Henderson travelled to the best electron microscopes in the world. All of them had their weaknesses, but they complemented each other. Finally, in 1990, 15 years after he had published the first model, Henderson achieved his goal and was able to present a structure of bacteriorhodopsin at atomic resolution. He thereby proved that cryo-EM could provide images as detailed as those generated using X-ray crystallography, which was a crucial milestone. However, this progress was built upon an exception: the way that the protein naturally packed itself regularly in the membrane. Few other proteins spontaneously order themselves like this. The question was whether the method could be generalised: would it be able to produce high-resolution three-dimensional images of proteins that were randomly scattered in the sample and oriented in different directions?

On the other side of the Atlantic, at the New York State Department of Health, Joachim Frank had long worked to find a solution to just that problem. Joachim Frank made the technology generally applicable. Between 1975 and 1986, he developed an image processing method in which the electron microscope’s fuzzy two-dimensional images are analysed and merged to reveal a sharp three-dimensional structure.

Already in 1975, Frank presented a theoretical strategy where the apparently minimal information found in the electron microscope’s two-dimensional images could be merged to generate a three-dimensional whole. His strategy built upon having a computer discriminate between the traces of randomly positioned proteins and their background in a fuzzy electron microscope image. He developed a mathematical method that allowed the computer to identify different recurring patterns in the image. The computer then sorted similar patterns into the same group and merged the information in these images to generate a sharper image. In this way he obtained a number of high-resolution, two-dimensional images that showed the same protein but from different angles. The algorithms for the software were complete in 1981.
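Frank’s grouping-and-averaging idea can be illustrated with a deliberately simplified sketch. All names, ‘images’ and numbers below are invented for illustration – real cryo-EM software is vastly more sophisticated – but the principle is the same: noisy images are sorted into groups of similar views, and averaging within each group cancels the random noise while the common signal remains.

```python
import random

def class_average(images, n_classes=2):
    """Toy version of Frank's idea: group similar noisy images,
    then average within each group to suppress random noise."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    # Start from the first image and the image most unlike it.
    centers = [images[0], max(images, key=lambda im: dist(im, images[0]))]
    groups = [[] for _ in range(n_classes)]
    for im in images:
        groups[min(range(n_classes), key=lambda k: dist(im, centers[k]))].append(im)
    # Average each group pixel by pixel: noise shrinks as 1/sqrt(group size).
    return [[sum(px) / len(g) for px in zip(*g)] for g in groups]

# Two hidden 'true' projections of a particle, seen from different angles.
random.seed(1)
view_a = [0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
view_b = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0]
noisy = [[x + random.gauss(0, 0.3) for x in view]
         for view in [view_a, view_b] for _ in range(20)]
random.shuffle(noisy)
averages = class_average(noisy)
```

After the grouping step, each class average is far closer to its underlying ‘true’ view than any individual noisy image – which is exactly why a fuzzy micrograph of many identical particles can yield sharp two-dimensional views.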

The next step was to mathematically determine how the different two-dimensional images were related to each other and, based on this, to create a three-dimensional image. Frank published this part of the image analysis method in the mid-1980s and used it to generate a model of the surface of a ribosome, the gigantic molecular machinery that builds proteins inside the cell. Joachim Frank’s image processing method was fundamental to the development of cryo-EM.


Back in 1978, at the same time as Frank was perfecting his computer programmes, Jacques Dubochet was recruited to the European Molecular Biology Laboratory in Heidelberg to solve another of the electron microscope’s basic problems: how biological samples dry out and are damaged when exposed to a vacuum. Henderson had used a glucose solution to protect his membrane from dehydrating in 1975, but this method did not work for water-soluble biomolecules. Other researchers had tried freezing the samples because ice evaporates more slowly than water, but the ice crystals disrupted the electron beams so much that the images were useless.

Jacques Dubochet saw a potential solution: cooling the water so rapidly that it solidified in its liquid state to form a glass instead of crystals. A glass appears to be a solid material, but its molecules remain disordered, like those of a liquid. Dubochet realised that if he could get water to form glass – also known as vitrified water – the electron beam would diffract evenly and provide a uniform background.

Initially, the research group attempted to vitrify tiny drops of water in liquid nitrogen at –196°C, but were successful only when they replaced the nitrogen with ethane that had, in turn, been cooled by liquid nitrogen. Under the microscope they saw a drop that was like nothing they had seen before. They first assumed it was ethane, but when the drop warmed slightly the molecules suddenly rearranged themselves and formed the familiar structure of an ice crystal. This was a great success, particularly as some researchers had claimed it was impossible to vitrify water drops.

After the breakthrough in 1982, Dubochet’s research group rapidly developed the basis of the technique that is still used in cryo-EM. They dissolved their biological samples – initially different forms of viruses – in water. The solution was then spread across a fine metal mesh as a thin film. Using a bow-like construction, they shot the mesh into the liquid ethane so that the thin film of water vitrified. In 1984, Jacques Dubochet published the first images of a number of different viruses, round and hexagonal, shown in sharp contrast against the background of vitrified water. Biological material could now be prepared relatively easily for electron microscopy, and researchers were soon knocking on Dubochet’s door to learn the new technique.

In 1991, when Joachim Frank prepared ribosomes using Dubochet’s vitrification method and analysed the images with his own software, he obtained a three-dimensional structure that had a resolution of 40 Å. This was an amazing step forward for electron microscopy, but the image only showed the ribosome’s contours. In fact, it looked like a blob and the image did not even come close to the atomic resolution of X-ray crystallography.


The electron microscope has gradually been optimised, largely thanks to Richard Henderson stubbornly maintaining his vision that electron microscopy would one day routinely provide images showing individual atoms. Indeed, recent years have witnessed a ‘resolution revolution’. Resolution has improved, Ångström by Ångström, and the final technical hurdle was overcome in 2013, when a new type of electron detector came into use. Researchers can now routinely produce three-dimensional structures of biomolecules.

There are a number of benefits that make cryo-EM so revolutionary: Dubochet’s vitrification method is relatively easy to use and requires a minimal sample size. Due to the rapid cooling process, biomolecules can be frozen mid-action, and researchers can take image series that capture different stages of a process. This way, they produce ‘films’ that reveal how proteins move and interact with other molecules. Using cryo-EM, it is also easier than ever before to depict membrane proteins, which often function as targets for pharmaceuticals. For instance, during the Zika virus outbreak in 2015–16, cryo-EM was used to visualise the virus’ membrane within months. As the Nobel Committee’s press release puts it: “this method has moved biochemistry into a new era.”

Nobel Prize in Physics 2017 – the Discovery of Gravitational Waves

On 14 September 2015, the LIGO detectors in the USA saw space vibrate with gravitational waves for the very first time. Even though the signal was tiny – it arrived at the two LIGO detectors only 0.0069 seconds apart, as Olga Botner from the Nobel Committee for Physics points out – it marked the beginning of a new era in astronomy: with gravitational wave astronomy, researchers will be able to study the most violent events in the universe, like the merging of black holes. Such a merger was detected in September 2015, and it happened an incredible 1.3 billion light years away from Earth.




The fourth observation of a gravitational wave was announced as recently as 27 September 2017, at the meeting of G7 science ministers in Turin, Italy. It was also the first to have been picked up by the Virgo detector, located near Pisa. Detection at a third site, besides the two LIGO detectors in the US states of Washington and Louisiana, provides a much better understanding of the wave’s three-dimensional pattern. This wave, too, came from two merging black holes; it was detected on 14 August 2017.

Gravitational waves had been predicted in 1916 by Nobel Laureate Albert Einstein on the basis of his General Theory of Relativity. In his mathematical model, Einstein combined space and time in a continuum he called ‘spacetime’. This is where the expression ‘ripples in spacetime’ for gravitational waves comes from.

LIGO, the Laser Interferometer Gravitational Wave Observatory, is a collaborative project with over one thousand researchers from more than twenty countries. Together, they have realised a vision that is almost fifty years old. The 2017 Nobel Laureates have all been invaluable to the success of LIGO. Pioneers Rainer Weiss and Kip S. Thorne, together with Barry C. Barish, the scientist and leader who brought the project to completion, have ensured that more than four decades of effort led to gravitational waves finally being observed.


The three new Nobel Laureates: Rainer Weiss, Barry C. Barish, and Kip S. Thorne (from left). Copyright: Nobel Media, Illustration by N. Elmehed



Already in the mid-1970s, both Kip Thorne and Rainer Weiss were firmly convinced that gravitational waves could be detected. Weiss had already analysed possible sources of background noise that would disturb measurements, and he had designed a detector, a laser-based interferometer, which would overcome this noise. While Rainer Weiss was developing his detectors at MIT in Cambridge, outside Boston, Kip Thorne started working with Ronald Drever, who built his first prototypes in Glasgow, Scotland. Drever eventually moved to join Thorne at Caltech in Pasadena, near Los Angeles. Together, Weiss, Thorne and Drever formed a trio that pioneered the development for many years. Drever lived to learn about the first discovery before he passed away in March 2017.

Together, Weiss, Thorne and Drever developed a laser-based interferometer. The principle has long been known: an interferometer consists of two arms that form an L. At the corner and the ends of the L, massive mirrors are installed. A passing gravitational wave affects each interferometer’s arm differently – when one arm is compressed, the other is stretched. The laser beam that bounces between the mirrors can measure the change in the lengths of the arms. If nothing happens, the light beams cancel each other out when they meet at the corner of the L. However, if either of the interferometer’s arms changes length, the light travels different distances, so the light waves lose synchronisation and the resulting light’s intensity changes where the beams meet; the minimal time difference of the two beams can also be detected.

The idea was fairly simple, but the devil was in the details, so it took over forty years to realise. Large-scale instruments are required to measure microscopic changes of lengths less than an atom’s nucleus. The plan was to build two interferometers, each with four-kilometre-long arms along which the laser beam bounces many times, thus extending the path of the light and increasing the chance of detecting any tiny stretches in spacetime. It took years of developing the most sensitive instrument ever to be able to distinguish gravitational waves from all the background noise. This required sophisticated analysis and advanced theory, for which Kip Thorne was the expert.
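A back-of-the-envelope calculation gives a sense of just how microscopic these changes are. The strain value used below is the commonly quoted order of magnitude for LIGO’s first detections, not a figure from this article:

```latex
% A gravitational wave with strain $h$ changes an interferometer arm
% of length $L$ by
\Delta L = h \cdot L
% With a typical detectable strain $h \approx 10^{-21}$ and $L = 4\,\mathrm{km}$:
\Delta L \approx 10^{-21} \cdot 4 \times 10^{3}\,\mathrm{m}
         = 4 \times 10^{-18}\,\mathrm{m}
% That is only a few thousandths of the diameter of a proton
% ($\approx 1.7 \times 10^{-15}\,\mathrm{m}$).
```

Bouncing the laser back and forth many times multiplies the effective arm length, which is one of the tricks that makes such a measurement possible at all.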




Running such a project on a small scale was no longer possible and a new approach was needed. In 1994, when Barry Barish took over as leader for LIGO, he transformed the small research group of about forty people into a large-scale international collaboration with more than a thousand participants. He searched for the necessary expertise and brought in numerous research groups from many countries.

In September 2015, LIGO was about to start up again after an upgrade that had lasted several years. Now equipped with tenfold more powerful lasers, mirrors weighing 40 kilos, highly advanced noise filtering, and one of the world’s largest vacuum systems, it captured a wave signal a few days before the experiment was set to officially start. The wave first passed the Livingston, Louisiana, facility and then, seven milliseconds later – moving at the speed of light – it appeared at Hanford, Washington, three thousand kilometres away.


Young researcher was the first person to ‘see’ a gravitational wave

A message from the computerised system was sent early in the morning on 14 September 2015. Everyone in the US was sleeping, but in Hannover, Germany, it was 11:51 hours, and Marco Drago, a young Italian physicist at the Max Planck Institute for Gravitational Physics – also known as the Albert Einstein Institute and part of the LIGO Collaboration – was getting ready for lunch. The curves he glimpsed looked exactly like those he had practised recognising so many times. Could he really be the first person in the world to see gravitational waves? Or was it just a false alarm, one of the occasional blind tests about which only a few people knew?

The wave’s form was exactly as predicted, and it was not a test. Everything fit perfectly. The pioneers, now in their 80s, and their LIGO colleagues were finally able to hear the music of their dreams, like a bird chirping. The discovery was almost too good to be true, but it was not until February the following year that they were allowed to reveal the news to anyone, even their families.

What will we learn from the observation of gravitational waves? As Karsten Danzmann, Director of the Albert Einstein Institute and Drago’s boss, explained: “More than 99 percent of the universe are dark to direct observation.” And Rainer Weiss elaborated during a telephone conversation with Thors Hans Hansson of the Nobel Committee: merging black holes probably send the strongest signal, but there are many other possible sources, like neutron stars orbiting each other and supernova explosions. Thus, gravitational wave astronomy opens a new and surprising window on the Universe.

The Workings of Our Inner Clock – Nobel Prize in Physiology or Medicine 2017

2017 Nobel Laureates in Physiology or Medicine: Jeffrey C. Hall, Michael Rosbash and Michael W. Young. Illustration: Niklas Elmehed. Copyright: Nobel Media AB 2017



Our body functions differently during the day than it does during the night – as do those of many organisms. This phenomenon, referred to as the circadian rhythm, is an adaptation to the drastic changes in the environment over the course of the 24-hour cycle in which the Earth rotates about its own axis. How does the biological clock work? A complex network of molecular reactions within our cells ensures that certain proteins accumulate at high levels at night and are degraded during the daytime. For elucidating these fundamental molecular mechanisms, Jeffrey C. Hall, Michael Rosbash and Michael W. Young were awarded the Nobel Prize in Physiology or Medicine 2017.

Already in the 18th century, the astronomer Jean-Jacques d’Ortous de Mairan observed that plants moved their leaves and flowers according to the time of day no matter whether they were placed in the light or in the dark, suggesting the existence of an inner clock that worked independently of external stimuli. However, the idea remained controversial for centuries until additional physiological processes were shown to be regulated by a biological clock, and the concept of endogenous circadian rhythms was finally established.


Simplified illustration of the feedback regulation of the period gene.  Illustration: © The Nobel Committee for Physiology or Medicine. Illustrator: Mattias Karlén

Simplified illustration of the feedback regulation of the period gene. Illustration: © The Nobel Committee for Physiology or Medicine. Illustrator: Mattias Karlén

The first evidence of an underlying genetic programme was found by Seymour Benzer and Ronald Konopka in 1971 when they discovered that mutations in a particular gene, later named period, disturbed the circadian rhythm in fruit flies. In the 1980s, the collaborating teams of the American geneticists Jeffrey C. Hall and Michael Rosbash at Brandeis University as well as the laboratory of Michael W. Young at Rockefeller University succeeded in deciphering the molecular structure of period. Hall and Rosbash subsequently discovered how it was involved in the circadian cycle: they found that the levels of the gene’s product, the protein PER, oscillated in a 24-hour cycle, and suggested that high levels of PER may in fact block further production of the protein in a negative self-regulatory feedback loop. However, how exactly this feedback mechanism might work remained elusive.

Years later, the team of Michael W. Young contributed the next piece to the circadian puzzle with the discovery of another clock gene, named timeless. The protein products of period and timeless bind each other and are then able to enter the cell’s nucleus to block the activity of the period gene. The cycle was closed when, in 1998, the teams of Hall and Rosbash found two further genes, clock and cycle, that regulate the activity of both period and timeless, and another group showed that vice versa the gene products of timeless and period control the activity of clock. Later studies by the laureates and others found additional components of this highly complex self-regulating network and discovered how it can be affected by light.

The ability of this molecular network to regulate itself explains how it can oscillate. However, it does not explain why this oscillation occurs every 24 hours. After all, both gene expression and protein degradation are relatively fast processes. It was thus clear that a delay mechanism must be in place. An important insight came from Young’s team: the researchers discovered a gene, which they named doubletime, whose protein product delays the accumulation of PER, helping to explain how the oscillation is tuned to an approximately 24-hour cycle.
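
The logic of such a delayed negative feedback loop can be illustrated with a toy simulation. The sketch below (with purely illustrative parameter values, not measured biology) represses the production of a protein according to its own level several hours earlier; the delay alone is enough to turn the feedback loop into a sustained oscillator.

```python
# Toy model of a delayed negative feedback loop, in the spirit of the PER
# cycle described above. All parameter values are illustrative assumptions.
dt = 0.01        # time step in hours
tau = 8.0        # delay before the protein represses its own synthesis (hours)
degr = 0.15      # degradation rate per hour
hours = 480.0    # total simulated time

delay_steps = int(tau / dt)
history = [0.1] * delay_steps   # ring buffer: levels over the last tau hours
levels = []
p = 0.1
for step in range(int(hours / dt)):
    p_delayed = history[step % delay_steps]      # level tau hours ago
    production = 1.0 / (1.0 + p_delayed ** 4)    # repression by the past level
    p += dt * (production - degr * p)
    history[step % delay_steps] = p
    levels.append(p)

# After the initial transient, the level keeps swinging between clearly
# separated highs and lows instead of settling at a steady state.
tail = levels[len(levels) // 2:]
amplitude = max(tail) - min(tail)
print(f"oscillation amplitude: {amplitude:.2f}")
```

In this toy model, shortening the delay below a critical value makes the oscillation die out and the level settle at its fixed point, which is why a dedicated delay step matters for sustaining the rhythm.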

It has since been discovered that the physiological clock of humans works according to the same principles as that of fruit flies. To ensure that our whole body is in sync, our circadian rhythm is regulated by a central pacemaker in the hypothalamus. The circadian clock is affected by external cues such as food intake, physical activity or temperature. But how does the circadian clock affect us? Our biological rhythm influences our sleep patterns, how much we eat, our hormone levels, our blood pressure and our body temperature. Dysfunction of the circadian clock is associated with a range of diseases including sleep disorders, depression, bipolar disorder and neurological diseases. There is also some evidence suggesting that a misalignment between lifestyle and the inner biological clock can have negative consequences for our health. An aim of ongoing research in the field of chronobiology is thus to regulate circadian rhythms to improve health.


Tackling the Intractable

Depression is one of the most common and debilitating illnesses worldwide, especially because many sufferers do not respond adequately to any of the currently available treatment options. Picture/Credit: SanderStock/


The scourge of depression affects more than 300 million people worldwide, and is the leading global cause of disability. The Nobel Prize-winning research of Arvid Carlsson, Paul Greengard and Eric Kandel, among others, paved the way for effective drugs to treat the condition.

How do nerve cells communicate with each other? This was the question that fascinated Paul Greengard and which led him to unravel the biochemical basis for how dopamine acts as a neurotransmitter between nerve cells. His scientific discoveries provided part of the underlying scientific rationale for drugs such as Prozac that act to increase the levels of serotonin, another neurotransmitter whose levels are implicated in depression. Indeed, several so-called selective serotonin reuptake inhibitors (SSRIs) have been developed for the treatment of depression and other disorders, and they are the most commonly prescribed anti-depressants in many countries.

However, even though these compounds provide relief to many, a substantial proportion of individuals with depression do not respond adequately either to these drugs or to cognitive behavioural therapy, the other common first-line treatment for depression. In fact, about one third of people with severe depression do not initially respond adequately to any currently available therapy. Recently revived research into the medicinal potential of psychedelic drugs, which include LSD and psilocybin from mushrooms, indicates that such substances, when combined with appropriate psychiatric care, may be an effective tool in combating depressive disorders. The stage is now set for the largest ever clinical trial examining the effectiveness of a psychedelic substance to treat depression.

Although psychedelic drugs may revolutionise the treatment of depression, at a molecular level they act on the same signalling system as traditional SSRIs. SSRIs decrease the amount of serotonin that is “reabsorbed” by the signalling neuron and thus increase the amount of the neurotransmitter that can be taken up by the neuron which is receiving the signal; psychedelics instead stimulate serotonin receptors directly. The key difference is that psychedelics primarily engage different serotonin receptors, which means that different regions of the brain are affected, leading to very different physiological effects. Thus, while traditional SSRIs act to reduce stress, anxiety and aggression and to promote increased resilience and emotional blunting, the goal of treatment with psychedelics is rather to dissolve rigid thinking and provide environmental sensitivity and emotional release. The proponents of psychedelics thus claim that the cumulative effect is to increase well-being, while more traditional medications rather seek to simply decrease the symptoms of depression.

The potential of psychedelics to tackle depression head-on and “wipe the slate clean” instead of simply addressing the symptoms almost sounds too good to be true. Indeed, psychedelic drugs are strictly prohibited in most countries around the world. In the UK, for example, both LSD and psilocybin are classified as Class A drugs (those whose consumption is deemed most dangerous), with good reason: LSD abuse in particular is linked with a range of adverse consequences, including panic attacks, psychosis and perceptual disorders. Many users nonetheless invoke Paracelsus’ maxim that “the dose makes the poison”. The regular ingestion of LSD at amounts that are not sufficient to elicit full-blown hallucinations, but which users claim improve focus and creativity, a practice referred to as micro-dosing, has attracted a huge amount of attention in recent times, in large part due to anecdotal evidence that it is rife in Silicon Valley. Micro-dosing with psilocybin is also increasing in popularity. The use of psilocybin, found in “magic mushrooms”, was an element of some prehistoric cultures, and, as with other psychedelics, its recreational and medicinal use was popular in the 1960s. Prohibitive anti-drug legislation across the globe meant that research into the drug was severely curtailed in subsequent decades. However, the last 20 years have witnessed a gradual renaissance of psilocybin research.


Psilocybin, a psychedelic substance found in “magic mushrooms”, has shown promise in tackling treatment-resistant depression and in alleviating the anxiety and depressive symptoms of cancer patients. Picture/Credit: Misha Kaminsky/


While regular small doses appear to be one potential approach, most recent clinical studies that have tested the effects of psilocybin for depression in a controlled set-up have adopted a strategy in which a single higher dose of the substance or several such doses are administered over a short period of time. This approach is in sharp contrast to the one taken for classical anti-depressants, which are consumed daily. The single high dose strategy has yielded promising results for patients with treatment-resistant depression and also for those suffering from the anxiety and depression often experienced by individuals with cancer. The majority of patients treated with psilocybin in this way exhibited an improvement in the symptoms of depression for up to six months. However, even though these recent studies have shown positive results, there remain a number of significant caveats: firstly, one of the most recent trials was open-label, meaning that the participants knew in advance that they would be receiving a psychedelic drug; secondly, most of the studies to date have been small, with only 50 subjects or fewer; finally, as in most other trials of this kind, the reporting measures are very subjective in nature and rely upon observation by health care professionals or friends, or on self-reporting by the patients themselves.

It is thus still too early to draw any definitive conclusions regarding the efficacy of psilocybin in alleviating the symptoms of depression. This might be about to change, however: the British start-up company Compass Pathways is close to securing final approval to carry out what would be the largest clinical trial to date looking at the efficacy of psilocybin in treating treatment-resistant depression. The two-part trial will incorporate a much larger number of patients than in previous trials (approximately 400), and will be performed with leading clinical research institutions across Europe. The first part will be focused on determining the most effective dosage of psilocybin; in the second part, patients will receive the psilocybin therapy as a single treatment. An important feature of the trial will be the use of more objective digital tracking methods to monitor the effects of psilocybin. In common with previous smaller-scale studies, careful psychological support and monitoring will be crucial. Research has shown that simply administering psychedelic drugs without providing a proper supportive environment, including counselling, greatly reduces the efficacy of psychedelics against depression and may even be counter-productive.

Even though the first clinical data suggest promising effects of psychedelic drugs in the treatment of depression, several questions remain open: it is unclear how representative the study populations have been, as there may have been a bias toward recruiting those who are more favourably disposed to using psychedelics, and positive prior experiences with such substances may affect treatment outcome. Furthermore, it has yet to be determined at which point substances should be introduced as therapy – as a front-line therapy before depressive symptoms become too ingrained and before long-term therapy with classical anti-depressants, or rather as a treatment of last resort when all else fails.

Winners and Losers From a ‘Commodities-For-Manufactures’ Trade Boom


Soy planting in Parana, Brazil. Photo/Credit: alffoto/


The rise of China has been one of the most important events to hit the world economy in recent decades. Rapid economic growth has had enormous implications within China, lifting millions of Chinese citizens out of poverty. But China’s rise has also deeply affected the economies of other countries in ways that we are only beginning to understand.

One fact that economists have learned from studying China’s impact on other countries is that competition from the booming Chinese manufacturing sector has had a big effect on manufacturing workers elsewhere. According to research by David Autor, David Dorn and Gordon Hanson, manufacturing employment has declined much more quickly in parts of the United States that produce goods imported from China.

These findings of negative impacts of Chinese competition for manufacturing workers have been corroborated by studies of European countries. For example, research by João Paulo Pessoa finds that UK workers initially employed in industries competing with Chinese products earned less and spent more time out of employment in the early 2000s.

But China is not only a competitor for other countries’ industries; it has also become an increasingly important consumer of goods produced elsewhere. In particular, China’s rapidly growing economy fuelled a worldwide commodity boom in the early 2000s.

This had an especially big impact on developing countries, whose swiftly rising exports to China became dominated by raw materials such as crops, ores and oil. Exports from low- and middle-income countries to China grew twelvefold from 1995 to 2010, compared with a twofold rise in their exports to everywhere else, so that China became an increasingly important trade partner for the developing world.

In 1995, commodities made up only 20% of these countries’ rather limited exports to China. But by 2010, nearly 70% of exports to China from developing countries were commodities (Figure 1A). Meanwhile, these countries’ rapidly growing imports from China consisted almost entirely of manufactured goods (Figure 1B).


Figure 1: Share of commodities in trade of developing countries Notes: ‘Commodities’ include products of the agricultural, forestry, fisheries/aquaculture and mining sectors. ‘Developing countries’ include non-high-income countries as defined by the World Bank, excluding countries in East and Southeast Asia, which tend to participate in regional manufacturing supply chains. Trade data is from CEPII BACI. Credit: Francisco Costa


This swift transition to a new kind of trade relationship has sometimes been unpopular with China’s trade partners. For example, before a visit to China in 2011, Brazil’s former president Dilma Rousseff promised that she would be “working to promote Brazilian products other than basic commodities,” amid worries about “overreliance on exports of basic items such as iron ore and soy” (Los Angeles Times).

So for countries like Brazil, how did the benefits from the China-driven commodity boom compare to the costs of rising competition from Chinese manufactures?

In my research with Jason Garred and João Paulo Pessoa, published recently in the Journal of International Economics, we look at how the steep rise in ‘commodities-for-manufactures’ trade with China affected workers in Brazil. It turns out that Brazil’s evolving trade relationship with China in the early 2000s echoed that of the rest of the developing world:

  • First, trade with China exploded: just 2% of Brazil’s exports went to China in 1995, but this had risen to 15% by 2010.
  • Second, exports to China became increasingly concentrated in a few commodities (Figure 2A). In 2010, more than 80% of Brazilian exports to China were commodities, mostly soybeans and iron ore. In the first decade of the 2000s, almost all of the growth in export demand for these two Brazilian products came from China.
  • Finally, like the rest of the developing world, Brazil’s imports from China rose quickly but included almost exclusively manufactured goods (Figure 2B).

Our study analyses the 2000 and 2010 Brazilian censuses to check how the fortunes of workers across different regions and industries evolved during the boom in trade with China.


Figure 2: Share of commodities in trade of Brazil. ‘Commodities’ include products of the agricultural, forestry, fisheries/aquaculture and mining sectors. Trade data is from CEPII BACI. Credit: Francisco Costa


We first confirm that during this time, there was a negative effect of Chinese import competition on employees of manufacturing firms. Specifically, in parts of Brazil producing manufactured goods imported from China (such as electronics), growth in manufacturing workers’ wages between 2000 and 2010 was systematically slower.

But our findings also suggest that growth in trade with China created winners as well as losers within Brazil. Wages rose more quickly in parts of the country benefiting more from increasing Chinese demand, which were mainly regions producing soy or iron ore.
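
Regional comparisons of this kind are often built on a stylized ‘shift-share’ exposure measure: each region’s exposure to the China shock is the employment-weighted average of national industry-level shocks. The sketch below is a generic illustration with made-up numbers, not the study’s actual data or specification.

```python
# Stylized shift-share exposure to trade shocks.
# All numbers are hypothetical illustrations, not the study's data.

# National shock per industry: negative values stand for Chinese import
# competition, positive values for Chinese export demand growth.
industry_shock = {"electronics": -0.20, "soy": 0.40, "iron_ore": 0.35}

# Initial employment shares of each industry, by region.
employment_share = {
    "industrial_region": {"electronics": 0.8, "soy": 0.1, "iron_ore": 0.1},
    "mining_region":     {"electronics": 0.1, "soy": 0.2, "iron_ore": 0.7},
}

def regional_exposure(shares, shocks):
    """Employment-weighted average of national industry-level shocks."""
    return sum(share * shocks[ind] for ind, share in shares.items())

exposure = {region: regional_exposure(shares, industry_shock)
            for region, shares in employment_share.items()}
print(exposure)
```

A region specialised in import-competing manufacturing ends up with negative exposure, while a soy- or iron-ore-producing region ends up with positive exposure, which is the contrast the wage comparisons above exploit.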

We also find that these regions saw a rise in the share of employed workers in formal jobs. Unlike jobs in the informal economy, jobs in the formal sector come with unemployment insurance, paid medical leave and other benefits, and so this increase in formality can be seen as a rise in non-wage compensation.

So while Brazil’s manufacturing workers seem to have lost out from Chinese import competition, rising exports to China appear to have benefited a different subset of Brazilian workers.

Our study concentrates on the short-run effects of trade with China on Brazilian workers. This means that our results don’t provide a full account of the trade-offs between the twin booms in commodity exports and manufacturing imports. For example, we don’t know what happened to the winners from the commodity boom once Chinese demand slowed in the mid-2010s.

We also do not consider the benefits to Brazilian consumers from access to cheaper imported goods from China. But what we do find suggests that trading raw materials for manufactures with China may not have been a raw deal for developing countries like Brazil after all.

‘Homo Economicus’ Reconsidered

Economists live in an ideological fantasyland. They see people as a collection of reliably rational, utility-maximising, calculating machines.

These ‘econs’ – whom the economists study – never make mistakes, which means their behaviour, when they interact in free markets, can be reliably modelled using a handful of equations that essentially apply the 250-year-old insights of Adam Smith and other classical economists.

That at least is one of the popular conceptions of what economists do.

But a few days at the 6th Lindau Meeting on Economic Sciences show that this is a gross caricature of the profession.

Daniel McFadden, of the University of California, Berkeley, who won the Nobel Prize in 2000, used his presentation to demonstrate problems with applying the simple models of the likes of Adam Smith and David Ricardo to every issue.

‘We respect what they’ve done but we should always question whether it applies,’ he warned.


Daniel McFadden during his lecture at the 6th Lindau Meeting on Economic Sciences. Picture/Credit: Christian Flemming/Lindau Nobel Laureate Meetings


Peter Diamond of MIT, one of the 2010 Nobel Laureates, showed himself to be under no illusion that people always make decisions that are in their long-term self-interest, citing the example of failures in the private pensions market.

‘Left to their own devices people don’t save enough,’ he told the audience of young economists, pointing to a striking survey of a sample of US baby boomers, showing that almost 80% gave an incorrect answer to a simple question relating to compound interest.

No calculating machines there.

Diamond’s theme was what we can learn from international experience about designing better public and private pension systems – with examples ranging from Chile’s sudden denationalisation of its public pensions scheme to the low-cost and efficient funds available to some three million US civil servants.

Simple economic models, Diamond argued, were a poor basis for setting public policy. ‘Models are by definition incomplete’, he said, ‘so applying them literally would be a serious mistake’.

Sir James Mirrlees, of the Chinese University of Hong Kong and co-recipient of the 1996 Nobel Prize, used his talk to discuss our ‘bounded rationality’ as humans. He pointed out that the choices we make are influenced not simply by a cold calculation of self-interest but also by external factors such as education, advertising and experience.

He explained that his own modelling exercise showed that there were some circumstances where it delivered better outcomes if people were offered no choice, but were simply told what to do. ‘It’s unusual in economics to have a theory that says minimise freedom,’ he noted.


Sir James Mirrlees talking to young economists after his lecture at the 6th Lindau Meeting on Economic Sciences. Picture/Credit: Julia Nimke/Lindau Nobel Laureate Meetings


Robert Aumann, of the Hebrew University of Jerusalem and one of the 2005 Nobel laureates, kicked against the simplistic conception of human beings as utility-maximising machines from another angle.

The central argument of the game theorist’s talk on ‘mechanism design design’ was the imperative of thinking clearly about incentives and motivations.

We don’t, as Aumann stressed, eat because we want to digest food to give us energy to live (the kind of mistake that an economist who believes in calculating ‘econs’ might make about human incentives). We eat because we’re hungry.

Likewise, we don’t generally have sex because we want to propagate the human race. We have sex because it feels good.

If we miss these proximate motivations, we risk misunderstanding what drives human behaviour – and thus getting economics itself wrong.

Housing Talk: Why You Should Never Trust a House Price Index (Only)

An ever growing housing market? Picture/Credit: G0d4ather/


Whenever I attend a dinner party or wedding or just meet old friends for coffee, at some point the topic turns to house prices, real estate investment and what seems to be generally perceived as a boom in the housing market. It seems that almost everyone is interested in buying a house or apartment. Often the aim is not just to cover the basic human need for shelter, but also to participate in the assumed never-ending boom and grab a small piece of the rapidly growing housing cake. House prices never fall, right?

People enthusiastically tell me stories of friends (or friends of friends) who finance multiple properties entirely via loans with zero down-payment. After all, real estate investment is a safe haven, isn’t it? And the expected rent will more than cover the monthly loan instalment, won’t it?

The figure shows a rental price index (top) for Sydney together with its quality-adjusted rental price distribution (bottom). Credit: Sofie R. Waltl

But wait, my experience of obsessive ‘housing talk’ may be biased in at least three ways. First, I’m in my late twenties and so are my friends. Thinking about buying property is common, even necessary, for my age (and socioeconomic) group. Key decisions for the rest of our lives need to be made: should I stay in my first job or get (another) postgraduate degree? Should I go abroad and try out the expat life? Should I marry or break up with my long-term boy/girlfriend? What about kids? And hey, where and how will I live? Which leads to the obvious next question: to rent or to buy.

Second, I wrote a PhD thesis about housing markets. Although I focused on better ways of measuring dynamics in housing and rental markets and mainly dealt with technical problems in statistical modelling of such markets, most of my friends tend (wrongly) to conclude that I am an expert in real estate investment. Naturally, I end up being asked for advice about housing markets more often than most.

Third, as a native Austrian currently living in Germany, I mainly come across people for whom dramatic changes in house prices are a new phenomenon. After years of generally flat house prices, both countries have only recently seen bigger shifts.

Still, housing markets seem to be a hot topic and I rarely meet someone who’s not at all interested. I believe that widely reported changes in house price indices are the main reason for that.

In his famous book Irrational Exuberance, Nobel laureate Robert Shiller describes how first the US stock market (near its peak in 1999) and then the US housing market (around 2004) became socially accepted topics of conversation with broad media coverage. In fact, he writes, whenever he went out for dinner with his wife, he successfully predicted that someone at an adjacent table would speak about the respective market.

Shiller is well-known for his analysis of markets driven by psychological effects, which help to explain observed developments that a theory based on full rationality would rule out. Although most people are aware of housing bubbles of the recent past – for example, in Ireland, Japan, Spain and the United States – the belief in real estate as a quasi risk-free investment seems to remain unquestioned. The fact that sharp and sudden drops in prices are possible and happen regularly is widely ignored. 

The figure shows a sales price index (top) for Sydney together with its quality-adjusted sales price distribution (bottom). Credit: Sofie R. Waltl

Everyone is affected by movements in housing and rental markets. If someone owns a property, it is usually her single largest asset; if someone rents, the cost often takes up a large fraction of her monthly income. This is why turbulence in these markets has larger effects on households than, for example, swings in the stock market (Case et al, 2005). The social implications of skyrocketing house prices and exploding rents but also of crashing markets are huge – which means that these markets need to be closely watched by policy-makers.

A house price index measures average movements of average houses in average locations belonging to an average price segment – a lot of averages! It is usually heavily aggregated, which implies that just because a national house price index reports rising prices, not every house will benefit equally from these increases. In fact, there is large variation in the distributions of prices and rents, and these distributions also change significantly over time.

Houses are highly heterogeneous goods (particularly compared with shares or bonds): no two houses are the same. Therefore, house price indices should be quality-adjusted, with differences in house characteristics taken into account. Still, changes in house price indices are often driven by developments in certain sub-markets, which are mainly determined by the three most important house characteristics: location, location, location. 
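
One standard way to quality-adjust is a hedonic ‘time dummy’ regression: regress log prices on house characteristics plus period dummies, and read the index off the dummy coefficients. The sketch below runs this on synthetic data with a single characteristic; real indices control for many characteristics, location above all.

```python
# Minimal hedonic (time dummy) price index on synthetic data.
# Model: log price = constant + size effect + period effect + noise.
import numpy as np

rng = np.random.default_rng(0)
n_per_period, periods = 200, 3
true_index = np.array([0.0, 0.05, 0.12])   # true log appreciation per period

size = rng.uniform(50, 200, n_per_period * periods)      # floor area in m^2
period = np.repeat(np.arange(periods), n_per_period)
log_price = (11.0 + 0.004 * size + true_index[period]
             + rng.normal(0, 0.05, size.shape))

# Design matrix: constant, size, and dummies for periods 1 and 2.
X = np.column_stack([np.ones_like(size), size,
                     (period == 1).astype(float),
                     (period == 2).astype(float)])
coef, *_ = np.linalg.lstsq(X, log_price, rcond=None)

# The dummy coefficients recover the quality-adjusted log index.
estimated_index = np.array([0.0, coef[2], coef[3]])
print(np.round(estimated_index, 3))
```

Because size differences are controlled for, the recovered index reflects pure price change rather than a shift in the mix of houses sold, which is exactly what a raw average of transaction prices cannot guarantee.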

Hence, house price developments are extremely heterogeneous even within urban areas (see McMillen, 2014, Guerrieri et al, 2013, and Waltl, 2016b), and thus the interpretation of aggregated national or even supra-national indices is questionable. For example, the S&P/Case-Shiller US National Home Price Index reports changes for the entire United States, the ECB and EUROSTAT publish indices for the European Union and the euro area, and the IMF even produces a global house price index.

Missing bubbly episodes in sub-markets when looking at such heavily aggregated figures seems unavoidable; and basing an individual investment decision on them is dubious. Similarly problematic is the assessment of a housing market using such aggregated measures for financial stability purposes.

Price map showing the average price (in thousand AUD) for an average house for different locations over time in Sydney. Credit: Sofie R. Waltl

A typical pattern is that markets for low-quality properties in bad locations experience the sharpest rises shortly before the end of a housing boom. Look at the lowest price segment in Sydney’s suburbs (black, dashed line) compared with the highest price segment in the inner city (orange, dotted line) around the peak in 2004. It is also this segment that experiences the heaviest falls afterwards.

A possible behavioural explanation is as follows: the longer a housing boom lasts, the more people want to participate in this apparently prosperous and safe market, including more and more people who are financially less well-off. Steady increases reported by house price indices give the impression that the entire market is booming with no end in sight. Whoever is able to participate becomes active in the housing market, and investment booms in ever more affordable properties: the lowest segment in bad locations.

A common misconception is the assumption that rising house prices necessarily translate into higher rents almost immediately. But when the price contains a ‘bubble or speculative component’, this is not always the case.

In general, economists speak of a bubble whenever the price of an asset is high just because of the hope of future price increases without any justification from ‘fundamentals’ such as construction costs (Stiglitz, 1990). Investing in over-valued property and hoping for the rent to cover the mortgage is thus more dangerous than it might appear (see Himmelberg et al, 2005, for the components of the price-to-rent ratio measuring the relationship between prices and rents; and Martin and Ventura, 2012, for asset bubbles in general).
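
The price-to-rent logic can be made concrete with a back-of-the-envelope user-cost calculation. All numbers below are hypothetical, and only a subset of the components discussed by Himmelberg et al, 2005, is included.

```python
# Back-of-the-envelope fundamental value from rents (hypothetical numbers).
annual_rent = 20_000.0

# Annual user cost of owning, as a fraction of the house price:
interest = 0.030                 # mortgage / opportunity interest rate
taxes_and_upkeep = 0.025         # property tax, maintenance, depreciation
expected_appreciation = 0.005    # expected long-run price growth
user_cost = interest + taxes_and_upkeep - expected_appreciation

# In equilibrium, annual rent ~ user_cost * price, so:
fundamental_price = annual_rent / user_cost

market_price = 500_000.0
premium = market_price / fundamental_price - 1   # possible bubble component

print(f"fundamental: {fundamental_price:.0f}, premium: {premium:.0%}")
```

A market price well above the rent-implied fundamental value does not prove a bubble, but it does mean the buyer is paying for expected future price increases rather than for the housing services the property provides.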


The figure shows location- and segment-specific indices for Sydney. CBD refers to the Central Business District. Credit: Sofie R. Waltl


While we’ve already seen that price developments are very diverse, the same is also true for the relationship between prices and rents. Thus, simply looking at average price-to-rent ratios may miss the over-heating of a sub-market and its associated risks.

Credit: Sofie R. Waltl

Buying property is thus more delicate than urban legends about the safety of real estate investment suggest. Above all, developments in housing markets are diverse even within small geographical areas and one number alone can never appropriately reflect what is going on. A complete picture of the dynamics in housing markets is essential from the perspective of an investor as well as a policy-maker.

And in case you’re hoping for investment advice, here’s the only piece I can offer: just because everyone buys does not mean that YOU should go out and buy whatever you can afford. In fact, when everyone (including your friend with questionable financial literacy) decides to invest in real estate, it might be exactly the wrong moment. Never rely on house price indices only, but go out and collect as much information as possible. And don’t forget: location, location, location… 



The figures show quality-adjusted developments in the Sydney housing market, and are part of the results of my doctoral thesis Modelling housing markets: Issues in economic measurement at the University of Graz under the supervision of Robert J. Hill. I am very grateful for his valuable support and advice. Calculations are based on data provided by Australian Property Monitors. Results, which this article is based on, are published as Waltl (2016a) and Waltl (2016b). The part about price-to-rent ratios is currently under review at a major urban economic journal (here is a working paper version). My work has benefitted from funding from the Austrian National Bank Jubiläumsfondsprojekt 14947, the 2014 Council of the University of Graz JungforscherInnenfonds, and the Austrian Marshallplan Foundation Fellowship (UC Berkeley Program 2016/2017). The views presented here are solely my own and do not necessarily reflect those of any past, present or future employer or sponsor.

Money Illusion and Economic Literacy: An Experimental Approach

The growing use of experimental methods in economics provides an opportunity to explore aspects of human behaviour that have long been discussed, but for which there has been little observable data. One such phenomenon is ‘money illusion’ – a term coined by Irving Fisher in the late 1920s to describe people’s failure to perceive that money can expand or shrink in value.

At a time when the German mark had fallen to a fiftieth of its original value, Fisher recounted his conversation with an intelligent shopkeeper in Berlin who had sold him a shirt. Claiming that she had made a profit since she had bought the shirt for less than he paid for it, she failed to understand the impact of inflation. Since her accounts were in a fluctuating currency, what looked like a profit was only so in nominal terms: in real terms, she had suffered a loss.
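
Fisher’s shopkeeper example reduces to a two-line calculation; the numbers below are invented for illustration, since Fisher reports the episode rather than the exact prices. Deflating by the change in the price level turns an apparent nominal profit into a real loss.

```python
# Nominal vs. real profit under high inflation (illustrative numbers).
buy_price = 100.0        # marks paid for the shirt
sell_price = 1_500.0     # marks received later
inflation_factor = 50.0  # the price level rose 50-fold in between

nominal_profit = sell_price - buy_price                  # looks like a gain
real_profit = sell_price / inflation_factor - buy_price  # in original marks

print(f"nominal: {nominal_profit:+.0f} marks, real: {real_profit:+.0f} marks")
```

Accounting in a fluctuating currency reports the first number; only deflated accounts reveal the second.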


Photo/Credit: Nastco/



Since the ‘rational expectations’ revolution in economics in the 1970s, money illusion has typically been regarded as a contradiction of the idea that people are rational, profit-maximising agents. Nobel laureate James Tobin, for example, said that ‘an economic theorist can, of course, commit no greater crime than to assume money illusion.’

But the small deviations from rationality revealed by recent research in behavioural economics suggest that it is no longer necessary to deny the existence of money illusion. People might suffer from it depending on whether they are looking at their economic environment in nominal or real terms. Evidence for the ‘framing effect’ shows that alternative representations of the same situation may lead to systematically different responses, since some options loom larger in one situation than in another.

Experimental research indicates that money illusion can have substantial effects at the level of the aggregate economy. For example, it might be profitable for rational firms to imitate the behaviour of naive firms suffering from money illusion. In that case, even a negligible amount of individual money illusion can be of great significance, since it is multiplied across the economy by the behaviour of otherwise rational agents.

Economists tend to argue that the threat of money illusion can easily be avoided by educating the public. But is there a certain level of economic literacy that will remove any aggregate effects of money illusion?

Certainly, central banks have been keen in recent years to promote economic literacy via various educational programmes. When Ben Bernanke was chairman of the US Federal Reserve, he made a strong case for economic literacy, claiming that it could deliver enhanced effectiveness of monetary policy, higher probability of achieving optimum outcomes, improved anchoring of inflation expectations and smoother functioning of financial markets.

The experimental method is a valuable tool for investigating whether a particular level of economic literacy acquired through economic education can lead to improved decision-making and alleviation of the aggregate effects of money illusion. That is what I have done in a laboratory experiment with nominal and real framings, and two different groups of participants – a well-educated group and a less well-educated group.

The experimental subjects were asked to take the role of firms and to select the correct profit-maximising price in an environment of nominal framings. In the middle of the experiment, the central bank announced a reduction in the money supply. In these circumstances, the logical step for the rational firm is to adjust prices downwards as long as others do the same.
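The rational benchmark in such a setting can be sketched in a few lines. This is not the actual experimental software, only a minimal illustration of the logic with assumed numbers: in equilibrium the profit-maximising nominal price scales one-for-one with the money supply, so halving the money supply should lead firms to halve their prices, leaving real quantities unchanged.

```python
# Minimal sketch (assumed numbers, not the experiment's software) of the
# rational pricing benchmark: the equilibrium nominal price moves
# proportionally with the money supply.

def equilibrium_price(base_price, money_supply, base_money_supply):
    """Nominal equilibrium price scales one-for-one with the money supply."""
    return base_price * (money_supply / base_money_supply)

p_before = equilibrium_price(20.0, 100.0, 100.0)  # before the announcement
p_after = equilibrium_price(20.0, 50.0, 100.0)    # money supply cut in half

print(p_before)  # 20.0
print(p_after)   # 10.0
```

A firm under money illusion resists this downward adjustment because the higher nominal price still looks profitable, even though its real payoff has fallen.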

I expected the well-educated group of participants to adjust their prices downwards faster than the less well-educated group. Given their ability to avoid the misperceptions of money illusion, they should not be attracted by misleading profits at high nominal values but instead base their decision-making on real values.

What’s more, the well-educated participants should have no reason to doubt that others will adjust their prices downwards as well. Not only should these participants not expect others to suffer from money illusion, but they should also have no reason to assume that other participants will expect others to suffer from money illusion.

In the context of the experiment, wider dissemination of knowledge about the economy might ensure that participants can develop a better understanding of signals from the central bank. At the aggregate level, this should contribute to a faster process of convergence to the economy’s equilibrium, in which firms immediately adjust their prices in response to changing economic circumstances.

But my results indicate that price convergence even on the part of well-educated participants is still slow. So while economic literacy has the potential to enhance the effectiveness of monetary policy, money illusion remains a pervasive phenomenon. Further investigation of the effects of education with respect to money illusion is highly desirable.

Faster Progress for Everyone

Martin Chalfie is promoting preprint archives for biological research papers, which make new results and findings accessible to a significantly larger audience much faster.


Credit: exdez/



Important questions that kept cropping up during the 67th Lindau Nobel Laureate Meeting include what the future of research can and will look like and how the status quo can be improved. Besides the oft-mentioned political events and their influence on science, another major issue concerns an intrinsic problem: the publication machinery and the importance of the impact factor. Shortly before the meeting, a number of Nobel Laureates publicly criticised the current journal-ranking method. During the meeting, Martin Chalfie also expressed his view that publications should be assessed more on the basis of their factual quality and less on the journal in which they appear. I asked him what he had in mind as an alternative and what steps, if any, he has taken. His solution is ASAPbio – Accelerating Science and Publication in Biology.

ASAPbio is an advocacy group founded by Ron Vale. An initiative instigated by scientists for scientists, it aims to make new discoveries within the life sciences available to a broad audience much faster than previously possible. Chalfie helped launch the initiative in early 2016 together with Harold Varmus, Daniel Colón-Ramos and Jessica Polka, now the director of ASAPbio. “We wanted to develop a preprint archive for biological research. There has been something similar in physics for at least a quarter of a century.” As soon as researchers are ready to share their work and findings with the world, Chalfie continues, they can upload their articles to a preprint archive, where they can then be read and commented on by other scientists as well as by the general public. The largest preprint server for life science-related articles is bioRxiv.

ASAPbio promotes the use of open-access, centralised and comprehensive repositories for all life sciences. “This changes the overall dynamics of the publication process,” Chalfie says. The conventional publication pathway looks quite different: A scientific paper is submitted to a suitable journal. In an initial step, one or more editors decide whether the paper is appropriate material for the journal in question. If the editors give the go-ahead, the paper is passed on to several experts in the field. They then form a picture of the work and can, if they deem it necessary, reject the paper as deficient or request further experiments. In such cases, the authors have several months to make the requested changes before a final decision is made, which can still be negative even after the suggested changes have been made. All in all, the decision-making process can take from several months to a year, and if the paper is ultimately rejected, the authors have to submit it afresh to another journal. As a result, not only do the authors lose valuable time, but so do the research community and the public at large, who have no access to the new findings during the decision-making process. “By contrast, preprint archives make new discoveries and research advances immediately available to everyone – whether scientists or students – and they do so free of charge,” Chalfie says, summarising the advantages.

Moreover, each paper is automatically assigned a definitive submission date to which the authors can refer should similar work be published soon afterwards.

However, Chalfie points out, “it’s not about publishing raw data at an early stage.” Instead, a manuscript should be uploaded to an archive platform at the same time as it is submitted to a journal. It is then revised in stages in response to feedback from the journal and comments submitted via the platform.



Martin Chalfie talking to young scientists during the 67th Lindau Nobel Laureate Meeting. Photo/Credit: Julia Nimke/Lindau Nobel Laureate Meetings



“During one of the first organisational meetings, we talked about how the established journals would be likely to react to such an initiative and these platforms. Fortunately, the major journals such as Science, Nature, the journals of professional societies and many others all support the idea of preprint archives and the general repository,” Chalfie explains. The journals have no problem with authors submitting their papers to them and uploading them to a platform simultaneously. Many journals even allow “joint submissions”, meaning that they ask authors whether they want to make their papers available on an archive server at the same time.

Another sign that this new pre-release system will catch on in the long term is the acceptance of such pre-archived work as a criterion for grants, the allocation of project funds and similar selection procedures. “The Howard Hughes Medical Institute, the NIH, the Wellcome Trust and many universities now consider papers in the preprint archive in their evaluation of applicants,” Chalfie relates proudly.

Although the new preprint archives and the general repository for biological research are still in their infancy compared to those in physics, and have yet to be discovered by many scientists, they have already been acknowledged and accepted by major research institutes and renowned journals. Advocacy groups such as ASAPbio therefore offer an excellent opportunity to take the cumbersome publication process in the life sciences in a new direction and to focus once again on the actual quality of research work instead of mere impact factors.