The Hungry Brain


 

Under normal, healthy conditions we eat whenever we are feeling hungry. In addition to the feeling of hunger, we also often have an appetite for a specific kind of food, and sometimes we simply crave the pleasure a certain food like chocolate or pizza may provide us. This pleasure is part of the hedonic aspect of food and eating. In fact, anhedonia, the absence of pleasure from previously pleasurable activities such as eating enjoyable food, is a hallmark of depression. The hedonic feeling originates from the pleasure centre of the brain, which is the same one that lights up when addicts ‘get a fix’. Hedonic eating occurs independently of the gut-brain axis, which is why you will keep eating those crisps and chocolate even when you know you’re full. Hence, sayings like “These chips are addictive!” are much closer to the biological truth than many realise.

But how do we know that we are hungry? Being aware of your surroundings and/or your internal state is the definition of consciousness. And a major hub for consciousness is a very primal brain structure called the thalamus. This structure lies deep within the brain and constantly integrates sensory input from the outside world. It is connected to cognitive areas such as the cortex and the hippocampus, but also to distinct areas in the brainstem like the locus coeruleus, which is the main noradrenergic nucleus in the brain and regulates stress and panic responses. Directly below the thalamus, and as such also closely connected to this ‘awareness hub’, lies the hypothalamus.

The hypothalamus is a very complex brain area with many different functions and nuclei. Some of them are involved in the control of our circadian rhythm and internal clock – the deciphering of which was awarded the 2017 Nobel Prize in Physiology or Medicine. But the main task of the hypothalamus is to connect the brain with the endocrine system (i.e. hormones) of the rest of the body. Hormones like ghrelin, leptin, or insulin constantly signal to your brain whether you are hungry or not. They do so via several direct and indirect avenues, such as changes in blood sugar levels, the monitoring of energy storage in adipose cells, or secretion from the gastrointestinal mucosa.

There are also a number of mechanosensitive receptors that detect when your stomach walls distend, signalling that you have eaten enough. However, similarly to the hormonal signals, the downstream effects of these receptors take a little while to reach the brain and become (consciously) noticeable. Thus, the slower you eat, the less likely you are to over-eat, because the satiety signals from hunger hormones and stomach-wall receptors will reach your consciousness only after about 20 to 30 minutes.

Leaving the gut and coming back to the brain, the hypothalamus receives endocrine and neuropeptidergic inputs related to energy metabolism and whether the body requires more food. Like most brain structures, the hypothalamus is made up of several sub-nuclei that differ in cell-type and downstream function. One of these nuclei, the arcuate nucleus of the hypothalamus, is considered the main hub for feeding and appetite control. Within it, a number of signalling avenues converge that – if altered or silenced – can, for instance, induce starvation. Major signalling molecules are Neuropeptide Y, the main inhibitory neurotransmitter GABA, and the peptide hormone melanocortin. The neurons in the arcuate nucleus are stimulated by these and other signalling molecules in order to maintain energy homeostasis for the entire organism. There are two major subclasses of neurons in the arcuate nucleus that are essential for this homeostasis and that, once stimulated, cause very different responses: activation of the so-called POMC neurons decreases food intake, while the stimulation of AGRP neurons increases food intake. And this circuit even works the other way around: researchers found that by directly infusing nutrients into the stomach of mice, they were able to inhibit AGRP neurons and their promotion of food intake.

Given this intricate interplay between different signalling routes, molecules, and areas, it is not surprising that a disrupted balance between all of these players could be detrimental. Recent studies identified one key player that can either keep the balance or wreak havoc: the gut microbiome.

 

Bacteria colonising intestinal villi make up the gut microbiome. Picture/Credit: ChrisChrisW/iStock.com


 

The gut microbiome is the entirety of the microorganisms living in our gastrointestinal tract, and they can modulate the gut-brain axis. Most of the microorganisms living on and within us are harmless and in fact are very useful when it comes to digesting our food. However, sometimes this mutually beneficial symbiosis goes awry, and the microbes start ‘acting out’. For instance, they can alter satiety signals by modulating ghrelin production and subsequently induce hunger before the stomach is empty, which could foster obesity. They can also block the absorption of vital nutrients by taking them up themselves and thereby inducing malnutrition. A new study published only last month revealed that Alzheimer’s patients display a different and less diverse microbiome composition than healthy control subjects. Another study from Sweden even demonstrated that the specific microbiome composition occurring in Alzheimer’s patients induces the development of disease-specific amyloid-beta plaques, thereby establishing a direct functional link between the gut microbiome and Alzheimer’s disease – at least in mice. Similarly, the composition and function of the microbiome might also directly affect movement impairments in Parkinson’s disease. In addition, there is also mounting evidence that neuropsychiatric diseases such as anxiety or autism are functionally linked to the microbiome.

Moreover, even systemic diseases such as lung, kidney and bladder cancers have recently been linked to the gut microbiome. In this case, however, it is not disease development and progression that seem to be directly related to our gut inhabitants. Instead, the researchers found that if the microbiome of cancer patients had been disrupted by a recent dose of antibiotics, they were less likely to respond well to the cancer treatment and their long-term survival was significantly diminished. It seems that the treatment with antibiotics disrupts specific components of the microbiome, which then negatively affects the function of the entire microbial community.

While it has not yet been resolved whether an altered microbiome is the cause or the consequence of these different afflictions, it seems certain that the microbiome is involved in far more than digestion. Hence, the already intricate gut-brain axis is further complicated by the gut microbiome, which not only affects when and what we eat, but can also determine our fate in health and disease.

Immunotherapy: The Next Revolution in Cancer Treatment

Over the past 150 years, doctors have learned to treat cancer with surgery, radiation, chemotherapy and vaccines. Now there is a new weapon for treatment: immunotherapy. For some patients with previously incurable cancer, redirecting their immune system to recognise and kill cancer cells has resulted in long-term remission, with cancer disappearing for a year or two after treatment.

 


Lymphocytes attacking a cancer cell. Credit: selvanegra/iStock.com

 

Cancer immunotherapy has been used successfully to treat late-stage cancers such as leukaemia and metastatic melanoma, and recently used to treat mid-stage lung cancer. Various forms of cancer immunotherapy have received regulatory approval in the US or are in the approval process in the EU. Some of these drugs free a patient’s immune system from cancer-induced suppression, while others engineer a patient’s own white blood cells to attack cancer. Another approach, still early in clinical development, uses antibodies to vaccinate patients against their own tumours, pushing their immune system to attack the cancer cells.

However, immunotherapy is not successful, or even an option, for all cancer patients. Two doctors used FDA approvals and US cancer statistics to estimate that 70 percent of American cancer deaths are caused by types of cancer for which there are no approved immunotherapy treatments. And patients who do receive immunotherapy can experience dramatic side effects: severe autoimmune reactions, cancer recurrence, and in some cases, death.

With such varied outcomes, opinions vary on the usefulness of immunotherapy. Recent editorials and conference reports describe “exciting times” for immunotherapy or caution to “beware the hype” about game-changing cancer treatment. Regardless of how immunotherapy could eventually influence cancer treatment, its development is a new revolution in cancer treatment, building on detailed biochemical knowledge of how cancer mutates and evades the immune response. Academic research into immunotherapy is also being quickly commercialised into personalised and targeted cancer treatments.

 


T-cells (red, yellow, and blue) attack a tumour in a mouse model of breast cancer following treatment with radiation and a PD-L1 immune checkpoint inhibitor, as seen by transparent tumour tomography. Credit: Steve Seung-Young Lee, National Cancer Institute\University of Chicago Comprehensive Cancer Center

Checkpoint inhibitors

Twenty years ago, James Allison, an immunologist at MD Anderson Cancer Center, was the first to develop an antibody in a class of immunotherapy called checkpoint inhibitors. These treatments release the immune system inhibition induced by a tumour. The drug he developed, Yervoy, received regulatory approval for the treatment of metastatic skin cancer in the US in 2011. By last year, Yervoy and two newer medications had reached 100,000 patients, and brought in $6 billion a year in sales.

In general, immunotherapy tweaks T-cells, white blood cells that recognise and kill invaders, to be more reactive to cancer cells. Tumours naturally suppress the immune response by secreting chemical messages that quiet T-cells. Cancer cells also bind to receptors on the surface of T-cells, generating internal messages that normally keep the immune system from attacking healthy cells.

One of those receptors is called CTLA-4. Allison and his colleagues blocked this receptor on T-cells with an antibody, and discovered that T-cells devoured cancer cells in mice. Since then, other checkpoint inhibitors have been developed and commercialised to block a T-cell receptor called PD-1 or its ligand PD-L1, present on some normal cells as well as cancer cells.

In the US, PD-1 and PD-L1 inhibitors have been approved to treat some types of lung cancer, kidney cancer, and Hodgkin’s lymphoma. And the types of potentially treatable cancers are growing: currently, more than 100 active or recruiting US clinical trials are testing checkpoint inhibitors to treat bladder cancer, liver cancer, and pancreatic cancer, among others.

 

CAR-T

Another type of cancer immunotherapy, called CAR-T, supercharges the ability of T-cells to target cancer cells circulating in the blood. In August, the first CAR-T treatment was approved in the US for children with aggressive leukaemia, and regulatory approval for a treatment for adults came in October.

To produce CAR T-cells, doctors send a patient’s blood to a lab where technicians isolate T-cells and engineer them to produce chimeric antigen receptors, or CARs. These CARs contain two fused parts: an antibody that protrudes from the surface of a T-cell to recognise a protein on cancerous B-cells (commonly CD-19) in the blood and a receptor inside the T-cell that sends messages to cellular machinery. When the antibody binds to a tumour cell, it activates the internal receptor, triggering the CAR T-cell to attack the attached cancer cell.

In clinical trials, some patients treated with CAR T-cells for aggressive leukaemia went into remission when other treatments had failed. But several high-profile trials had to be suspended because of autoimmune and neurological side effects, some leading to patient deaths.

To improve the safety of CAR-T treatment, researchers are now engineering “suicide switches” into the cells, genetically encoded cell surface receptors that trigger the cell to die when a small molecule drug binds them. If doctors see a patient experiencing side effects, they can prescribe the small molecule drug and induce cell death within 30 minutes.

Other safety strategies include improving the specificity of CAR T-cells for tumour cells because healthy cells also carry CD-19 receptors. To improve CAR-T tumour recognition, some researchers are adding a second CAR, so that the engineered cell has to recognise two antigens before mounting an attack.

 

As seen with pseudo-coloured scanning electron microscopy, two cell-killing T-cells (red) attack a squamous mouth cancer cell (white) after a patient received a vaccine containing antigens identified on the tumour. Credit: Rita Elena Serda, National Cancer Institute\Duncan Comprehensive Cancer Center at Baylor College of Medicine


Neoantigens

 A third type of immunotherapy aims to target mutated proteins that are a hallmark of cancer. Cancer cells display portions of these mutated proteins, called neoantigens, on their surface. Researchers are studying how to use tumour-specific neoantigens in vaccines to help the body mount an immune response targeted at the cancer.

Results from two recent small clinical trials for patients with advanced melanoma suggest that neoantigen vaccines can stop the cancer from growing, or in some cases, shrink the tumours with few reported side effects. But it’s too early in clinical development to know if the vaccines will extend the lives of cancer patients.

There are two steps to making a neoantigen vaccine: first, identify mutated proteins unique to most of a patient’s cancer cells and second, identify portions of those proteins that could most effectively stimulate an immune response.

To identify mutated proteins, researchers sequence the genome of cancer cells and compare it to the sequence in healthy cells. Next, they identify which mutations lead to the production of altered proteins. Finally, they use computer models or cellular tests to identify the portions of proteins that could be the most effective neoantigen.
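To make these steps concrete, here is a minimal, purely illustrative Python sketch – not any real pipeline. It assumes two short, made-up protein sequences, extracts the 9-mer peptides that span a somatic mutation, and ranks them with a placeholder scoring function standing in for the validated MHC-binding predictors used in actual neoantigen work.

```python
# Toy sketch of neoantigen candidate selection (illustrative only).
# The sequences and the scoring function are hypothetical placeholders; real
# pipelines start from sequencing data and use validated MHC-binding predictors.

def mutant_peptides(normal: str, tumour: str, k: int = 9):
    """Yield k-mer peptides from the tumour protein that contain a mutated residue."""
    assert len(normal) == len(tumour), "toy example assumes aligned, equal-length proteins"
    mutated = [i for i, (a, b) in enumerate(zip(normal, tumour)) if a != b]
    for pos in mutated:
        # every k-mer window that still contains the mutated position
        for start in range(max(0, pos - k + 1), min(pos, len(tumour) - k) + 1):
            yield tumour[start:start + k]

def toy_binding_score(peptide: str) -> float:
    """Placeholder for an MHC-binding / immunogenicity predictor (NOT a real model)."""
    hydrophobic = set("AILMFWVY")
    return sum(residue in hydrophobic for residue in peptide) / len(peptide)

if __name__ == "__main__":
    normal = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # hypothetical healthy sequence
    tumour = "MKTAYIAKQRQISFVKSHFSRPLEERLGLIEVQ"  # same protein with one somatic mutation
    candidates = sorted(set(mutant_peptides(normal, tumour)),
                        key=toy_binding_score, reverse=True)
    for peptide in candidates[:3]:
        print(peptide, round(toy_binding_score(peptide), 2))
```

In a real setting, this ranking step decides which handful of peptides go into a personalised vaccine, which is why its accuracy matters so much.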

This last step of predicting neoantigenicity is the most challenging part of developing a new neoantigen vaccine. Lab experiments to confirm the activity of multiple neoantigens are time consuming, and current computer models for predicting antigenicity can be inaccurate because they have not yet been extensively validated.

A few principles of cancer biology also make developing neoantigens for long-lasting treatment difficult. Some cancers may have too many mutations to test as potential neoantigens. Cancer cells also continue to mutate as tumours grow, and some cells may not display the neoantigens chosen for a vaccine. Finally, cancer cells may naturally stop displaying antigens on their surface, as part of their strategy for evading an immune response.

However, identifying neoantigens can still be useful as cancer biomarkers. Or if used in a vaccine, they may be most effective in combination with other drugs: a few patients in the small clinical trials whose cancer relapsed after the trials responded to treatment with a checkpoint inhibitor.

Cancer has been a common topic in Nobel Laureates’ lectures at many Lindau Meetings. Learn more about these lectures, as well as Nobel Prize winning research related to cancer, in the Mediatheque.

Resistant Bacteria vs. Antibiotics: A Fiercely Fought Battle

Antibiotics are an integral part of today’s medicine, not only to treat a strep throat or an ear infection – they also play a huge role in routine operations like appendectomies or caesareans, and they are indispensable as co-treatment for many chemotherapies.

If you take an antibiotic today, it has most probably been developed and approved in the last century. And since “bacteria want to live, and they are cleverer than us,” as Nobel laureate Ada Yonath describes them succinctly, many pathogens have become resistant to these common drugs. In September 2017, the World Health Organization (WHO) published an urgent appeal to increase funding for research into new antibiotics: not enough new drugs are in the ‘pipeline’ to combat the growing problem of multi-resistant strains. Currently, an estimated 700,000 patients die from infections with these strains every year – and this death toll might rise.

The WHO and other experts are especially concerned about multi-resistant tuberculosis, which causes about 250,000 deaths per year; less than half of all patients receive the necessary treatment, which can take up to 20 months. The problem is that disrupted treatment inevitably leads to further resistance. Another very worrisome development is the emergence of multi-resistant Neisseria strains that cause the STD gonorrhoea. Neisseria gonorrhoeae are gram-negative bacteria, meaning that their surface is not coloured by Gram staining. This resilient surface is also the reason why it is hard to treat gonorrhoea infections in the first place, even without resistance. Only this year, there have been several outbreaks of this multi-resistant variant around the world.

 

Antibiotic resistance tests: the bacteria in the culture on the left are sensitive to all seven antibiotics contained in the small white paper discs. The bacteria on the right are resistant to four of these seven antibiotics. Photo: Dr Graham Beards, 2011, CC BY-SA 4.0


 

This brings us to another problem: resistant bugs travel fast. No matter where they develop, with modern travel they can spread around the world within days. The WHO also published a list with 12 pathogens that pose the greatest risks. This list includes Neisseria as well as the well-known and much-feared ‘hospital bug’ methicillin-resistant Staphylococcus aureus, or MRSA.

 

Imaging technologies help to develop new drugs

Relief from this dire situation might come from unexpected sources, like the technology honoured by the Nobel Prize in Chemistry 2017: cryo-electron microscopy, or cryo-EM. With the help of this new method, researchers can ‘see’ “proteins that confer resistance to chemotherapy and antibiotics”. This method was difficult to develop, and it leaned heavily on the experiences from X-ray crystallography and classic electron microscopy.

Often in research, being able to ‘see’ something is the first step of understanding its function, hence the strong interest in imaging technology in the life sciences: if a researcher can ‘see’ the workings of a resistance-inducing protein, he or she can start working on strategies to inhibit this process. Cryo-EM is especially good at depicting surface proteins, i.e., the location where infections or gene transfers usually start.

At the same time, optical microscopy is moving ahead as well, being able to ‘watch’ proteins being coded in living cells. The Nobel Prize in Chemistry 2014 was dedicated to the breaking of the optical diffraction limit. Stefan Hell developed STED microscopy, American physicist Eric Betzig invented PALM microscopy, and both were awarded the Nobel Prize, together with William E. Moerner, “for the development of super-resolved fluorescence microscopy”. Shortly after receiving the most prestigious science award, Stefan Hell combined STED and PALM microscopy to develop the MINFLUX microscope: the very technology that can show proteins being coded. All these methods together will result in a ‘resolution revolution’ that may contribute to the development of new classes of antibiotics.

 


Nobel laureate Ada Yonath during a discussion with young scientists at the 2016 Lindau Nobel Laureate Meeting. Yonath has been studying bacterial ribosomes for many years. Photo: LNLM/Christian Flemming

Nobel Laureate Ada Yonath, who was awarded the 2009 Nobel Prize in Chemistry “for studies of the structure and function of the ribosome“ with X-ray crystallography, is currently researching species-specific antibiotics. Her starting point is that many antibiotics target bacterial ribosomes, “the universal cellular machines that translate the genetic code into proteins.” First, her team studied the inhibition of ribosome activity in eubacteria, i.e., ‘good’ bacteria. Next, she extended her studies to ribosomes from multi-resistant pathogens like MRSA. Her goal is to design species-specific drugs, meaning drugs specific to a certain pathogen. These will minimise the harm done to the human microbiome by today’s antibiotics, resulting in a more efficient cure and a lower risk of antibiotic resistance, because fewer bacteria are affected.

 

Finding new drugs in unexpected places

Another attack strategy is to look for new antibiotic agents in places that never seemed very promising. For example, in 2010 the Leibniz Institute for Natural Product Research and Infection Biology in Jena (Germany) published a new antibiotic agent found in the soil bacterium Clostridium cellulolyticum. It belongs to the group of anaerobic bacteria, a group that has long been neglected in the search for antibiotics. “Our research shows how the potential of a huge group of organisms has simply been overlooked in the past,” says Christian Hertweck, head of Biomolecular Chemistry. Just recently, scientists at Imperial College London and the London School of Hygiene and Tropical Medicine have treated resistant gonorrhoea bacteria with closthioamide, the agent from Jena. They found that even small quantities were highly effective in the Petri dish; clinical trials will follow.

Yet another research strategy is to make antibiotics more ‘resistant’ to resistance formation. For instance, it has taken 60 years for bacteria to become resistant to vancomycin. Now, researchers at The Scripps Research Institute (TSRI) have successfully tested an improved version of vancomycin on vancomycin-resistant Enterococci that are on the WHO list of the most dangerous pathogens. This improved drug attacks bacteria from three different sides. The study was led by Dale Boger, co-chair of TSRI’s department of chemistry, who said the discovery made the new version of vancomycin the first antibiotic to have three independent ‘mechanisms of action’ to kill bacteria. “This increases the durability of this antibiotic,” he said. “Organisms just can’t simultaneously work to find a way around three independent mechanisms of action. Even if they found a solution to one of those, the organisms would still be killed by the other two.”

 

Drug resistance can ‘jump’ between pathogens

Unfortunately, researchers and bacteria are not the only combatants, and this fiercely fought battle is not confined to clearly marked battlegrounds. Increasingly, multi-resistant bacteria can be found in our food, mostly due to the use of antibiotics in animal farming, and even in our natural environment. One troubling example is Colistin, an antibiotic from the 1950s, which had never been widely used in humans due to toxic side effects; however, in recent years it has been rediscovered as a last-resort antibiotic against multi-resistant bugs. Since it is an old drug, it is also inexpensive and widely used – on pig farms in China.

As expected, Colistin-resistant bacteria developed in pigs; this was first discovered and published in 2015. But what makes this resistance perilous is the fact that the relevant gene is plasmid-mediated, meaning it can spread easily from one bacterium to another, possibly even from one species to another. In 2015, this resistance gene, called mcr-1, was also found in pork in Chinese supermarkets and in a few samples from hospital patients. Only 18 months later, 25 percent of hospital patients in certain areas of China tested positive for bacteria carrying this gene: resistance is spreading at unprecedented speed.

Another highly disturbing example is the large quantities of modern antibiotics and antimycotics found in the sewage from pharmaceutical production in India. In warm water, many bacteria find ideal conditions not only to live, but also to adapt to these novel antibiotics by quickly becoming resistant. Travellers returning from some developing countries are already considered a potential health threat, because many of them are unwitting carriers of multi-resistant pathogens.

Since the discovery of penicillin in 1928 by Nobel Laureate Alexander Fleming, the battle between bacteria and antibiotics has been fierce and ongoing. It is fought in laboratories, hospitals and doctors’ offices all over the world, with the human side proving about as determined and creative as its microbial opponents.

But resistance-breeding grounds like Chinese pig farms or sewage pipes from pharmaceutical companies present yet another battleground and call for a strategy that needs to be innovative as well as multifaceted. Only last week, a United Nations ad-hoc group met in Berlin to discuss these challenges. To sum it up: most of us do not live next to Indian sewer pipes, but the resistant bacteria bred there may reach us all.

 


Sign by the US Centers for Disease Control and Prevention (CDC) showing how antibiotic resistance occurs: “you use it and you lose it”. Sewage pollution with resistant bacteria from pharmaceutical production is not included in this graphic. Image: Centers for Disease Control and Prevention, 2013, Public Domain

Cool Microscope Technology – Nobel Prize in Chemistry 2017

Being able to see something often precedes understanding its function. In the case of molecules and atoms, this requires advanced methods. Visualising biomolecules is crucial both for the basic understanding of the chemistry of life and for the design of pharmaceuticals. Thanks to the ground-breaking contributions of Jacques Dubochet, Joachim Frank and Richard Henderson to the development of cryo-electron microscopy (cryo-EM), researchers can now freeze biomolecules mid-movement and image cellular processes they have never previously seen.

Before cryo-EM, there were two powerful imaging methods, namely X-ray crystallography and nuclear magnetic resonance (NMR) spectroscopy. These methods have enabled the structural analysis of thousands of biomolecules. However, both suffer from fundamental limitations. NMR only works for relatively small proteins in solution. X-ray crystallography requires that the molecules form well-organised crystals. The resulting images are like black-and-white portraits from early cameras – their rigid pose reveals very little about the protein’s dynamics.

 

The 2017 Nobel Laureates in Chemistry: Jacques Dubochet, Joachim Frank, and Richard Henderson (from left). Illustrations: Niklas Elmehed. Copyright: Nobel Media AB 2017


 

Richard Henderson succeeded in using an electron microscope to generate a three-dimensional image of a protein at atomic resolution. This breakthrough proved the technology’s potential. Henderson had originally used X-ray crystallography, the older method, to image proteins, but setbacks arose when he attempted to crystallise a protein that was naturally embedded in the membrane surrounding the cell. Membrane proteins are difficult to manage, because they tend to clump up into a useless mass once they are removed from their natural environment – the membrane. The first membrane protein that Richard Henderson worked with was difficult to produce in adequate amounts; the second one failed to crystallise. After years of disappointment, he turned to the only available alternative: the electron microscope.

When the electron microscope was invented in the 1930s, scientists thought that it was only suitable for studying dead matter. The intense electron beam necessary for obtaining high resolution images incinerates biological material and, if the beam is weakened, the images lose contrast. In addition, electron microscopy requires a vacuum, a condition in which biomolecules deteriorate because the surrounding water evaporates. When biomolecules dry out, they collapse and lose their natural structure, rendering the images useless.


Bacteriorhodopsin is a purple protein that is embedded in the membrane of a photosynthesising organism, where it captures the energy from the sun’s rays. Instead of removing the sensitive protein from the membrane, as Richard Henderson had previously tried to do, he and his colleagues took the complete purple membrane and put it under the electron microscope. In this way, the protein retained its structure because it remained membrane-bound. To prevent the sample’s surface from drying out in the vacuum, they covered it with a glucose solution.

The harsh electron beam was a major problem, but the researchers made use of the way in which bacteriorhodopsin molecules are packed in the organism’s membrane. Instead of blasting it with a full dose of electrons, they used a weaker beam. The image’s contrast was poor, and they could not see the individual molecules, but they were able to make use of the fact that the proteins were regularly packed and oriented in the same direction. When all the proteins diffracted the electron beams in an almost identical manner, they could calculate a more detailed image based on the diffraction pattern – they used a similar mathematical approach to that used in X-ray crystallography.

To get the sharpest images, Henderson travelled to the best electron microscopes in the world. All of them had their weaknesses, but they complemented each other. Finally, in 1990, 15 years after he had published the first model, Henderson achieved his goal and was able to present a structure of bacteriorhodopsin at atomic resolution. He thereby proved that cryo-EM could provide images as detailed as those generated using X-ray crystallography, which was a crucial milestone. However, this progress was built upon an exception: the way that the protein naturally packed itself regularly in the membrane. Few other proteins spontaneously order themselves like this. The question was whether the method could be generalised: would it be able to produce high-resolution three-dimensional images of proteins that were randomly scattered in the sample and oriented in different directions?

On the other side of the Atlantic, at the New York State Department of Health, Joachim Frank had long worked to find a solution to just that problem. Joachim Frank made the technology generally applicable. Between 1975 and 1986, he developed an image processing method in which the electron microscope’s fuzzy two-dimensional images are analysed and merged to reveal a sharp three-dimensional structure.

Already in 1975, Frank presented a theoretical strategy where the apparently minimal information found in the electron microscope’s two-dimensional images could be merged to generate a three-dimensional whole. His strategy built upon having a computer discriminate between the traces of randomly positioned proteins and their background in a fuzzy electron microscope image. He developed a mathematical method that allowed the computer to identify different recurring patterns in the image. The computer then sorted similar patterns into the same group and merged the information in these images to generate a sharper image. In this way he obtained a number of high-resolution, two-dimensional images that showed the same protein but from different angles. The algorithms for the software were complete in 1981.
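The following toy Python sketch illustrates the general idea behind such class averaging. It is emphatically not Frank’s software: the synthetic images, the noise level and the simple k-means-style grouping are all assumptions made purely for demonstration. The point is the one described above: when similar noisy views are grouped and averaged, the averages come out far cleaner than any single image.

```python
# Toy illustration of the idea behind class averaging (not Frank's actual software):
# many noisy copies of two "particle" views are grouped by similarity and each
# group is averaged, which suppresses the random noise in the individual images.
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical 8x8 reference views: a horizontal bar and a square blob
bar = np.zeros((8, 8))
bar[3:5, :] = 1.0
blob = np.zeros((8, 8))
blob[2:6, 2:6] = 1.0

# Simulate 200 noisy snippets of each view, as if cut out of micrographs
images = np.array([v + rng.normal(0.0, 0.7, v.shape)
                   for v in (bar, blob) for _ in range(200)])

# Crude k-means-style grouping on the flattened pixel values
flat = images.reshape(len(images), -1)
centres = flat[[0, 200]].copy()                  # two arbitrary starting guesses
for _ in range(10):
    dists = ((flat[:, None, :] - centres[None]) ** 2).sum(axis=2)
    labels = dists.argmin(axis=1)
    centres = np.array([flat[labels == k].mean(axis=0) for k in (0, 1)])
class_averages = centres.reshape(2, 8, 8)

# Each class average is far closer to a clean view than any single noisy image
for avg in class_averages:
    err = min(np.abs(avg - bar).mean(), np.abs(avg - blob).mean())
    print("class average error vs clean view:", round(float(err), 3))
print("single image error vs clean view  :", round(float(np.abs(images[0] - bar).mean()), 3))
```

Real single-particle software additionally has to align rotated and shifted views before averaging, which is a much harder problem than this toy grouping.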

The next step was to mathematically determine how the different two-dimensional images were related to each other and, based on this, to create a three-dimensional image. Frank published this part of the image analysis method in the mid-1980s and used it to generate a model of the surface of a ribosome, the gigantic molecular machinery that builds proteins inside the cell. Joachim Frank’s image processing method was fundamental to the development of cryo-EM.


Back in 1978, at the same time as Frank was perfecting his computer programmes, Jacques Dubochet was recruited to the European Molecular Biology Laboratory in Heidelberg to solve another of the electron microscope’s basic problems: how biological samples dry out and are damaged when exposed to a vacuum. Henderson had used a glucose solution to protect his membrane from dehydrating in 1975, but this method did not work for water-soluble biomolecules. Other researchers had tried freezing the samples because ice evaporates more slowly than water, but the ice crystals disrupted the electron beams so much that the images were useless. Also, the vaporising water was a major dilemma.

Jacques Dubochet saw a potential solution: cooling the water so rapidly that it solidified in its liquid state to form a glass instead of crystals. A glass appears to be a solid material, but is actually a fluid because it has disordered molecules. Dubochet realised that if he could get water to form glass – also known as vitrified water – the electron beam would diffract evenly and provide a uniform background.

Initially, the research group attempted to vitrify tiny drops of water in liquid nitrogen at –196°C, but were successful only when they replaced the nitrogen with ethane that had, in turn, been cooled by liquid nitrogen. Under the microscope they saw a drop that was like nothing they had seen before. They first assumed it was ethane, but when the drop warmed slightly the molecules suddenly rearranged themselves and formed the familiar structure of an ice crystal. This was a great success, particularly as some researchers had claimed it was impossible to vitrify water drops.

After the breakthrough in 1982, Dubochet’s research group rapidly developed the basis of the technique that is still used in cryo-EM. They dissolved their biological samples – initially different forms of viruses – in water. The solution was then spread across a fine metal mesh as a thin film. Using a bow-like construction they shot the net into the liquid ethane so that the thin film of water vitrified. In 1984, Jacques Dubochet published the first images of a number of different viruses, round and hexagonal, that are shown in sharp contrast against the background of vitrified water. Biological material could now be relatively easily prepared for electron microscopy, and researchers were soon knocking on Dubochet’s door to learn the new technique.

In 1991, when Joachim Frank prepared ribosomes using Dubochet’s vitrification method and analysed the images with his own software, he obtained a three-dimensional structure that had a resolution of 40 Å. This was an amazing step forward for electron microscopy, but the image only showed the ribosome’s contours. In fact, it looked like a blob and the image did not even come close to the atomic resolution of X-ray crystallography.


The electron microscope has gradually been optimised, greatly due to Richard Henderson stubbornly maintaining his vision that electron microscopy would one day routinely provide images that show individual atoms. Indeed, recent years have witnessed a ‘resolution revolution’. Resolution has improved, Ångström by Ångström, and the final technical hurdle was overcome in 2013, when a new type of electron detector came into use. Researchers can now routinely produce three-dimensional structures of biomolecules. 

There are a number of benefits that make cryo-EM so revolutionary: Dubochet’s vitrification method is relatively easy to use and requires a minimal sample size. Due to the rapid cooling process, biomolecules can be frozen mid-action and researchers can take image series that capture different parts of a process. This way, they produce ‘films’ that reveal how proteins move and interact with other molecules. Using cryo-EM, it is also easier than ever before to depict membrane proteins, which often function as targets for pharmaceuticals. For instance, during the Zika virus outbreak in 2015-16, cryo-EM was used to visualise the virus’ membrane within months. As the Nobel Committee’s press release puts it: “this method has moved biochemistry into a new era.”

Nobel Prize in Physics 2017 – the Discovery of Gravitational Waves

On 14 September 2015, the LIGO detectors in the USA saw space vibrate with gravitational waves for the very first time. The signal reached the two detectors a mere 0.0069 seconds apart, as Olga Botner from the Nobel Committee for Physics points out. Even though the signal was tiny, it marked the beginning of a new era in astronomy: with Gravitational Wave Astronomy, researchers will be able to study the most violent events in the universe, like the merging of black holes. Such a merger was detected in September 2015, and it happened an incredible 1.3 billion light years away from Earth.

 


 

The fourth observation of a gravitational wave was only announced on 27 September 2017 at the meeting of G7 science ministers in Turin, Italy. It was also the first to have been picked up by the Virgo detector, located near Pisa. This detection at a third site, besides the two LIGO detectors in the US states of Washington and Louisiana, provides a much better understanding of the three-dimensional pattern of the wave. It is also the result of two merging black holes and was detected on 14 August 2017.

Gravitational waves were predicted by Nobel Laureate Albert Einstein in 1916 as a consequence of his General Theory of Relativity. In his mathematical model, Einstein combined space and time in a continuum he called ‘spacetime’. This is where the expression ‘ripples in spacetime’ for gravitational waves comes from.

LIGO, the Laser Interferometer Gravitational Wave Observatory, is a collaborative project with over one thousand researchers from more than twenty countries. Together, they have realised a vision that is almost fifty years old. The 2017 Nobel Laureates all have been invaluable to the success of LIGO. Pioneers Rainer Weiss and Kip S. Thorne, together with Barry C. Barish, the scientist and leader who brought the project to completion, have ensured that more than four decades of effort led to gravitational waves finally being observed.

 

The three new Nobel Laureates in Physics: Rainer Weiss, Barry C. Barish, and Kip S. Thorne. Illustrations: Niklas Elmehed. Copyright: Nobel Media

 

Already in the mid-1970s, both Kip Thorne and Rainer Weiss were firmly convinced that gravitational waves could be detected. Weiss had already analysed possible sources of background noise that would disturb their measurements. He had also designed a detector, a laser-based interferometer, which would overcome this noise. While Rainer Weiss was developing his detectors at MIT in Cambridge, outside Boston, Kip Thorne started working with Ronald Drever, who built his first prototypes in Glasgow, Scotland. Drever eventually moved to join Thorne at Caltech in Los Angeles. Together, Weiss, Thorne and Drever formed a trio that pioneered development for many years. Drever learned about the first discovery, but then passed away in March 2017.

Together, Weiss, Thorne and Drever developed a laser-based interferometer. The principle has long been known: an interferometer consists of two arms that form an L. At the corner and the ends of the L, massive mirrors are installed. A passing gravitational wave affects each interferometer’s arm differently – when one arm is compressed, the other is stretched. The laser beam that bounces between the mirrors can measure the change in the lengths of the arms. If nothing happens, the light beams cancel each other out when they meet at the corner of the L. However, if either of the interferometer’s arms changes length, the light travels different distances, so the light waves lose synchronisation and the resulting light’s intensity changes where the beams meet; the minimal time difference of the two beams can also be detected.

The idea was fairly simple, but the devil was in the details, so it took over forty years to realise. Large-scale instruments are required to measure changes in length smaller than the size of an atomic nucleus. The plan was to build two interferometers, each with four-kilometre-long arms along which the laser beam bounces many times, thus extending the path of the light and increasing the chance of detecting any tiny stretches in spacetime. It took years of developing the most sensitive instrument ever to be able to distinguish gravitational waves from all the background noise. This required sophisticated analysis and advanced theory, for which Kip Thorne was the expert.
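For a sense of the scale involved – assuming the commonly quoted strain amplitude of roughly $10^{-21}$ for the first detection, a figure not stated above – the length change of a four-kilometre arm works out to approximately

\[
\Delta L \;\approx\; h \times L \;\approx\; 10^{-21} \times 4\,\mathrm{km} \;\approx\; 4 \times 10^{-18}\,\mathrm{m},
\]

of the order of a thousandth of the diameter of a proton. This is why kilometre-long arms, multiply reflected beams and decades of noise suppression were all needed before such a displacement could be measured.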

 


 

Running such a project on a small scale was no longer possible and a new approach was needed. In 1994, when Barry Barish took over as leader for LIGO, he transformed the small research group of about forty people into a large-scale international collaboration with more than a thousand participants. He searched for the necessary expertise and brought in numerous research groups from many countries.

In September 2015, LIGO was about to start up again after an upgrade that had lasted several years. Now equipped with tenfold more powerful lasers, mirrors weighing 40 kilos, highly advanced noise filtering, and one of the world’s largest vacuum systems, it captured a wave signal a few days before the experiment was set to officially start. The wave first passed the Livingston, Louisiana, facility and then, seven milliseconds later – moving at the speed of light – it appeared at Hanford, Washington, three thousand kilometres away.

 

Young researcher was the first person to ‘see’ a gravitational wave

A message from the computerised system was sent early in the morning on 14 September 2015. Everyone in the US was sleeping, but in Hannover, Germany, it was 11:51 hours, and Marco Drago, a young Italian physicist at the Max Planck Institute for Gravitational Physics – also known as the Albert Einstein Institute and part of the LIGO Collaboration – was getting ready for lunch. The curves he glimpsed looked exactly like those he had practiced recognising so many times. Could he really be the first person in the world to see gravitational waves? Or was it just a false alarm, one of the occasional blind tests about which only a few people knew?

The wave’s form was exactly as predicted, and it was not a test. Everything fit perfectly. The pioneers, now in their 80s, and their LIGO colleagues were finally able to hear the music of their dreams, like a bird chirping. The discovery was almost too good to be true, but it was not until February the following year that they were allowed to reveal the news to anyone, even their families.

What will we learn from the observation of gravitational waves? As Karsten Danzmann, Director of the Albert Einstein Institute and Drago’s boss, explained: “More than 99 percent of the universe are dark to direct observation.” And Rainer Weiss elaborated during a telephone conversation with Thors Hans Hansson of the Nobel Committee: merging black holes probably send the strongest signal, but there are many other possible sources, like neutron stars orbiting each other, and supernova explosions. Thus, Gravitational Wave Astronomy opens a new and surprising window onto the Universe.

The Workings of Our Inner Clock – Nobel Prize in Physiology or Medicine 2017

2017 Nobel Laureates in Physiology or Medicine: Jeffrey C. Hall, Michael Rosbash and Michael W. Young. Illustration: Niklas Elmehed. Copyright: Nobel Media AB 2017


 

Our body functions differently during the day than it does during the night – as do those of many organisms. This phenomenon, referred to as the circadian rhythm, is an adaptation to the drastic changes in the environment over the course of the 24-hour cycle in which the Earth rotates about its own axis. How does the biological clock work? A complex network of molecular reactions within our cells ensures that certain proteins accumulate at high levels at night and are degraded during the daytime. For elucidating these fundamental molecular mechanisms, Jeffrey C. Hall, Michael Rosbash and Michael W. Young were awarded the Nobel Prize in Physiology or Medicine 2017.

Already in the 18th century, the astronomer Jean Jacques d’Ortous de Mairan observed that plants moved their leaves and flowers according to the time of the day no matter whether they were placed in the light or in the dark, suggesting the existence of an inner clock that worked independently of external stimuli. However, the idea remained controversial for centuries until additional physiological processes were shown to be regulated by a biological clock, and the concept of endogenous circadian rhythms was finally established.

 


Simplified illustration of the feedback regulation of the period gene. Illustration: © The Nobel Committee for Physiology or Medicine. Illustrator: Mattias Karlén

The first evidence of an underlying genetic programme was found by Seymour Benzer and Ronald Konopka in 1971 when they discovered that mutations in a particular gene, later named period, disturbed the circadian rhythm in fruit flies. In the 1980s, the collaborating teams of the American geneticists Jeffrey C. Hall and Michael Rosbash at Brandeis University as well as the laboratory of Michael W. Young at Rockefeller University succeeded in deciphering the molecular structure of period. Hall and Rosbash subsequently discovered how it was involved in the circadian cycle: they found that the levels of the gene’s product, the protein PER, oscillated in a 24-hour cycle, and suggested that high levels of PER may in fact block further production of the protein in a negative self-regulatory feedback loop. However, how exactly this feedback mechanism might work remained elusive.

Years later, the team of Michael W. Young contributed the next piece to the circadian puzzle with the discovery of another clock gene, named timeless. The protein products of period and timeless bind each other and are then able to enter the cell’s nucleus to block the activity of the period gene. The cycle was closed when, in 1998, the teams of Hall and Rosbash found two further genes, clock and cycle, that regulate the activity of both period and timeless, and another group showed that vice versa the gene products of timeless and period control the activity of clock. Later studies by the laureates and others found additional components of this highly complex self-regulating network and discovered how it can be affected by light.

The ability of this molecular network to regulate itself explains how it can oscillate. However, it does not explain why this oscillation occurs every 24 hours. After all, both gene expression and protein degradation are relatively fast processes. It was thus clear that a delay mechanism must be in place. An important insight came from Young’s team: the researchers found that a particular protein can delay the process and named the corresponding gene doubletime.
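The effect of such a delay can be illustrated with a deliberately simplified toy model in Python. It is not the laureates’ actual gene network, and every parameter below is an arbitrary assumption; the point is only that production which is repressed by the protein level several hours in the past behaves very differently from production repressed by the current level.

```python
# Toy delayed negative-feedback loop (illustration only, not the real PER network).
# A "clock protein" represses its own production, but the repression depends on
# the protein level several hours in the past. All parameters are arbitrary.

def simulate(delay_hours, total_hours=240, dt=0.1):
    lag = int(delay_hours / dt)
    protein = [0.1] * (lag + 1)               # history needed for the delayed term
    for _ in range(int(total_hours / dt)):
        past = protein[-lag - 1]              # protein level 'delay_hours' ago
        production = 1.0 / (1.0 + past ** 4)  # strongly repressed by the past level
        degradation = 0.15 * protein[-1]
        protein.append(protein[-1] + dt * (production - degradation))
    return protein

def period(series, dt=0.1):
    """Rough period estimate: average spacing of the maxima in the second half."""
    half = series[len(series) // 2:]
    peaks = [i for i in range(1, len(half) - 1) if half[i - 1] < half[i] > half[i + 1]]
    gaps = [b - a for a, b in zip(peaks, peaks[1:])]
    return sum(gaps) / len(gaps) * dt if gaps else None   # None means no rhythm

print("period with an 8-hour lag:", period(simulate(8.0)), "hours")
print("period with no lag       :", period(simulate(0.0)))
```

Run as is, the lagged loop rises and falls with a period of many hours, while the loop without a lag simply settles at a constant level. In the real clock, the protein encoded by doubletime, together with the time needed for transcription, translation and nuclear entry, is thought to stretch this rhythm to roughly 24 hours.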

It has since been discovered that the physiological clock of humans works according to the same principles as that of fruit flies. To ensure that our whole body is in sync, our circadian rhythm is regulated by a central pacemaker in the hypothalamus. The circadian clock is affected by external cues such as food intake, physical activity or temperature. But how does the circadian clock affect us? Our biological rhythm influences our sleep patterns, how much we eat, our hormone levels, our blood pressure and our body temperature. Dysfunction of the circadian clock is associated with a range of diseases including sleep disorders, depression, bipolar disorders and neurological diseases. There is also some evidence suggesting that a misalignment between lifestyle and the inner biological clock can have negative consequences for our health. An aim of ongoing research in the field of chronobiology is thus to regulate circadian rhythms to improve health.

 

Tackling the Intractable

Depression is one of the most common and debilitating illnesses worldwide, especially because many sufferers do not respond adequately to any of the currently available treatment options. Picture/Credit: SanderStock/iStock.com


 

The scourge of depression affects more than 300 million people worldwide and is the leading global cause of disability. The Nobel Prize-winning research of Arvid Carlsson, Paul Greengard and Eric Kandel, among others, paved the way for effective drugs to treat the condition.

How do nerve cells communicate with each other? This was the question that fascinated Paul Greengard and which led him to unravel the biochemical basis for how dopamine acts as a neurotransmitter between nerve cells. His scientific discoveries provided part of the underlying scientific rationale for drugs such as Prozac that act to increase the levels of serotonin, another neurotransmitter whose levels are implicated in depression. Indeed, several so-called selective serotonin reuptake inhibitors (SSRIs) have been developed for the treatment of depression and other disorders, and they are the most commonly prescribed anti-depressants in many countries.

However, even though these compounds provide relief to many, a substantial proportion of individuals with depression do not respond adequately either to these drugs or to cognitive behavioural therapy, the other common first-line treatment for depression.  In fact, about one third of people with severe depression do not initially respond adequately to any currently available therapy. Recently revived research into the medicinal potential of psychedelic drugs, which include LSD and psilocybin from mushrooms, indicates that such substances, when combined with appropriate psychiatric care, may be an effective tool in combatting depressive disorders. The stage is now set for the largest ever clinical trial examining the effectiveness of a psychedelic substance to treat depression.

Although psychedelic drugs may revolutionise the treatment of depression, at a molecular level their mode of action, like that of traditional SSRIs, centres on serotonin signalling: SSRIs decrease the amount of serotonin that is reabsorbed by the signalling neuron and thus increase the amount of the neurotransmitter available to the receiving neuron, whereas psychedelics stimulate serotonin receptors directly. The key difference is that psychedelics primarily engage different serotonin receptors, which means that different regions of the brain are affected, leading to very different physiological effects. Thus, while traditional SSRIs act to reduce stress, anxiety and aggression and to promote increased resilience and emotional blunting, the goal of treatment with psychedelics is rather to dissolve rigid thinking and provide environmental sensitivity and emotional release. The proponents of psychedelics thus claim that the cumulative effect is to increase well-being, while more traditional medications rather seek to simply decrease the symptoms of depression.

The potential of psychedelics to tackle depression head-on and “wipe the slate clean” instead of simply addressing the symptoms almost sounds too good to be true. Psychedelic drugs are strictly prohibited in most countries around the world. In the UK, for example, both LSD and psilocybin are classified as Class A drugs (those whose consumption is deemed most dangerous). With good reason: LSD abuse in particular is linked with a range of adverse consequences, including panic attacks, psychosis and perceptual disorders. Many users apply Paracelsus’ maxim: “The dose makes the poison.” The regular ingestion of LSD in amounts that are not sufficient to elicit full-blown hallucinations, but which users claim improve focus and creativity – a practice referred to as micro-dosing – has attracted a huge amount of attention in recent times, in large part due to anecdotal evidence that it is rife in Silicon Valley. Micro-dosing with psilocybin is also increasing in popularity. The use of psilocybin, found in “magic mushrooms”, was an element of some prehistoric cultures, and, as with other psychedelics, its use, both recreational and medicinal, was popular in the 1960s. Prohibitive anti-drug legislation across the globe meant that in subsequent decades research into the drug was severely curtailed. However, the last 20 years have witnessed a gradual renaissance of psilocybin research.

 

Psilocybin, a psychedelic substance found in “magic mushrooms”, has shown promise in tackling treatment-resistant depression and in alleviating the anxiety and depressive symptoms of cancer patients. Picture/Credit: Misha Kaminsky/iStock.com


 

While regular small doses appear to be one potential approach, most recent clinical studies that have tested the effects of psilocybin on depression in a controlled set-up have adopted a strategy in which a single higher dose of the substance, or several such doses, is administered over a short period of time. This approach is in sharp contrast to the one taken for classical anti-depressants, which are consumed daily. The single high dose strategy has yielded promising results for patients with treatment-resistant depression and also for those suffering from the anxiety and depression often experienced by individuals with cancer. The majority of patients treated with psilocybin in this way exhibited an improvement in the symptoms of depression for up to six months. However, even though these recent studies have shown positive results, there remain a number of significant caveats: firstly, one of the most recent trials was open-label, meaning that the participants knew in advance that they would be receiving a psychedelic drug; secondly, most of the studies to date have been small, with 50 subjects or fewer; finally, as in most other trials of this kind, the reporting measures are very subjective in nature and rely upon observation by health care professionals, friends or self-reporting by the patients themselves.

It is thus still too early to draw any definitive conclusions regarding the efficacy of psilocybin in alleviating the symptoms of depression. This might be about to change, however: the British start-up company Compass Pathways is close to securing final approval to carry out what would be the largest clinical trial to date looking at the efficacy of psilocybin in treating depression. As well as incorporating far more subjects than previous trials (approximately 400), the scientists involved also aim to use more objective digital tracking methods to monitor the effects of psilocybin. In common with previous smaller-scale studies, careful psychological support and monitoring will be crucial in future trials. Research has shown that simply administering psychedelic drugs without providing a proper supportive environment, including counselling, greatly reduces their efficacy against depression and may even be counter-productive.

Even though the first clinical data suggest promising effects of psychedelic drugs in the treatment of depression, several questions remain open. It is unclear how representative the study populations have been, as there may have been a bias toward recruiting those who are more favourably disposed to using psychedelics, and positive prior experiences with such substances may affect treatment outcome. Furthermore, it has yet to be determined at which point these substances should be introduced as therapy – as a front-line treatment before depressive symptoms become too ingrained and before long-term therapy with classical anti-depressants, or rather as a treatment of last resort when all else fails.

Winners and Losers From a ‘Commodities-For-Manufactures’ Trade Boom

 

Soy planting in Parana, Brazil. Photo/Credit: alffoto/iStock.com

 

The rise of China has been one of the most important events to hit the world economy in recent decades. Rapid economic growth has had enormous implications within China, lifting millions of Chinese citizens out of poverty. But China’s rise has also deeply affected the economies of other countries in ways that we are only beginning to understand.

One fact that economists have learned from studying China’s impact on other countries is that competition from the booming Chinese manufacturing sector has had a big effect on manufacturing workers elsewhere. According to research by David Autor, David Dorn and Gordon Hanson, manufacturing employment has declined much more quickly in parts of the United States that produce goods imported from China.

These findings of negative impacts of Chinese competition for manufacturing workers have been corroborated by studies of European countries. For example, research by João Paulo Pessoa finds that UK workers initially employed in industries competing with Chinese products earned less and spent more time out of employment in the early 2000s.

But China is not only a competitor for other countries’ industries; it has also become an increasingly important consumer of goods produced elsewhere. In particular, China’s rapidly growing economy fuelled a worldwide commodity boom in the early 2000s.

This had an especially big impact on developing countries, whose swiftly rising exports to China became dominated by raw materials such as crops, ores and oil. Exports from low- and middle-income countries to China grew twelvefold from 1995 to 2010, compared with a twofold rise in their exports to everywhere else, so that China became an increasingly important trade partner for the developing world.

In 1995, commodities made up only 20% of these countries’ rather limited exports to China. But by 2010, nearly 70% of exports to China from developing countries were commodities (Figure 1A). Meanwhile, these countries’ rapidly growing imports from China consisted almost entirely of manufactured goods (Figure 1B).

 

Figure 1: Share of commodities in trade of developing countries. ‘Commodities’ include products of the agricultural, forestry, fisheries/aquaculture and mining sectors. ‘Developing countries’ include non-high-income countries as defined by the World Bank, excluding countries in East and Southeast Asia, which tend to participate in regional manufacturing supply chains. Trade data is from CEPII BACI. Credit: Francisco Costa

 

This swift transition to a new kind of trade relationship has sometimes been unpopular with China’s trade partners. For example, before a visit to China in 2011, Brazil’s former president Dilma Rousseff promised that she would be “working to promote Brazilian products other than basic commodities,” amid worries about “overreliance on exports of basic items such as iron ore and soy” (Los Angeles Times).

So for countries like Brazil, how did the benefits from the China-driven commodity boom compare to the costs of rising competition from Chinese manufactures?

In my research with Jason Garred and João Paulo Pessoa, published recently in the Journal of International Economics, we look at how the steep rise in ‘commodities-for-manufactures’ trade with China affected workers in Brazil. It turns out that Brazil’s evolving trade relationship with China in the early 2000s echoed that of the rest of the developing world:

  • First, trade with China exploded: just 2% of Brazil’s exports went to China in 1995, but this had risen to 15% by 2010.
  • Second, exports to China became increasingly concentrated in a few commodities (Figure 2A). In 2010, more than 80% of Brazilian exports to China were commodities, mostly soybeans and iron ore. In the first decade of the 2000s, almost all of the growth in export demand for these two Brazilian products came from China.
  • Finally, like the rest of the developing world, Brazil’s imports from China rose quickly but included almost exclusively manufactured goods (Figure 2B).

Our study analyses the 2000 and 2010 Brazilian censuses to check how the fortunes of workers across different regions and industries evolved during the boom in trade with China.
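For readers curious how such regional comparisons are typically set up, below is a minimal sketch of a Bartik-style regional exposure measure of the kind used in this literature (for instance in the work of Autor, Dorn and Hanson mentioned above). The data and variable names are hypothetical, and this is not the exact specification used in our paper; it only illustrates the idea of weighting national industry-level trade shocks by each region’s initial employment mix.

import pandas as pd

# Toy example (hypothetical data): allocate national industry-level trade shocks
# to regions in proportion to each region's initial employment mix.
# This illustrates the general idea only, not the paper's exact measure.

emp = pd.DataFrame({
    "region":   ["A", "A", "B", "B"],
    "industry": ["electronics", "soy", "electronics", "soy"],
    "workers":  [8000, 2000, 1000, 9000],   # employment in the initial year
})

# Stylised national change in Chinese import competition per worker, by industry.
shock = pd.Series({"electronics": 5.0, "soy": 0.1})

# Each region's initial employment shares across industries...
emp["share"] = emp.groupby("region")["workers"].transform(lambda w: w / w.sum())

# ...are used to weight the national industry shocks into a regional exposure index.
emp["exposure"] = emp["industry"].map(shock) * emp["share"]
regional_exposure = emp.groupby("region")["exposure"].sum()

print(regional_exposure)
# Region A (electronics-heavy) is far more exposed to import competition than
# region B (soy-heavy), which instead stands to gain from Chinese export demand.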

 

Figure 2: Share of commodities in trade of Brazil. ‘Commodities’ include products of the agricultural, forestry, fisheries/aquaculture and mining sectors. Trade data is from CEPII BACI. Credit: Francisco Costa

 

We first confirm that during this time, there was a negative effect of Chinese import competition on employees of manufacturing firms. Specifically, in parts of Brazil producing manufactured goods imported from China (such as electronics), growth in manufacturing workers’ wages between 2000 and 2010 was systematically slower.

But our findings also suggest that growth in trade with China created winners as well as losers within Brazil. Wages rose more quickly in parts of the country benefiting more from increasing Chinese demand, which were mainly regions producing soy or iron ore.

We also find that these regions saw a rise in the share of employed workers in formal jobs. Unlike jobs in the informal economy, jobs in the formal sector come with unemployment insurance, paid medical leave and other benefits, and so this increase in formality can be seen as a rise in non-wage compensation.

So while Brazil’s manufacturing workers seem to have lost out from Chinese import competition, rising exports to China appear to have benefited a different subset of Brazilian workers.

Our study concentrates on the short-run effects of trade with China on Brazilian workers. This means that our results don’t provide a full account of the trade-offs between the twin booms in commodity exports and manufacturing imports. For example, we don’t know what happened to the winners from the commodity boom once Chinese demand slowed in the mid-2010s.

We also do not consider the benefits to Brazilian consumers from access to cheaper imported goods from China. But what we do find suggests that trading raw materials for manufactures with China may not have been a raw deal for developing countries like Brazil after all.

‘Homo Economicus’ Reconsidered

Economists live in an ideological fantasyland. They see people as a collection of reliably rational, utility-maximising, calculating machines.

These ‘econs’ – whom the economists study – never make mistakes, which means their behaviour, when they interact in free markets, can be reliably modelled using a handful of equations that essentially apply the 250-year-old insights of Adam Smith and other classical economists.

That at least is one of the popular conceptions of what economists do.

But a few days at the 6th Lindau Meeting on Economic Sciences shows that this is a gross caricature of the profession.

Daniel McFadden, of the University of California, Berkeley, who won the Nobel Prize in 2000, used his presentation to demonstrate problems with applying the simple models of the likes of Adam Smith and David Ricardo to every issue.

‘We respect what they’ve done but we should always question whether it applies,’ he warned.

 

Daniel McFadden during his lecture at the 6th Lindau Meeting on Economic Sciences. Picture/Credit: Christian Flemming/Lindau Nobel Laureate Meetings

 

Peter Diamond of MIT, one of the 2010 Nobel Laureates, showed himself to be under no illusion that people always make decisions that are in their long-term self-interest, citing the example of failures in the private pensions market.

‘Left to their own devices people don’t save enough,’ he told the audience of young economists, pointing to a striking survey of a sample of US baby boomers, showing that almost 80% gave an incorrect answer to a simple question relating to compound interest.
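To get a sense of how simple the arithmetic behind such a question typically is, consider a stylised example (an illustration, not the survey’s actual wording): $100 saved at 2% annual interest for five years grows to

\[
100 \times (1.02)^{5} \approx 110.41,
\]

slightly more than the $110 that simple interest would give (100 + 5 × 2), because interest is also earned on previously accrued interest.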

No calculating machines there.

Diamond’s theme was what we can learn from international experience about designing better public and private pension systems – with examples ranging from Chile’s sudden denationalisation of its public pensions scheme to the low-cost and efficient funds available to some three million US civil servants.

Simple economic models, Diamond argued, were a poor basis for setting public policy. ‘Models are by definition incomplete’, he said, ‘so applying them literally would be a serious mistake’.

Sir James Mirrlees, of the Chinese University of Hong Kong and co-recipient of the 1996 Nobel Prize, used his talk to discuss our ‘bounded rationality’ as humans. He pointed out that the choices we make are influenced not simply by a cold calculation of self-interest but also by external factors such as education, advertising and experience.

He explained that his own modelling exercise showed that, in some circumstances, better outcomes were delivered if people were offered no choice at all but were simply told what to do. ‘It’s unusual in economics to have a theory that says minimise freedom,’ he noted.

 

Sir James Mirrlees talking to young economists after his lecture at the 6th Lindau Meeting on Economic Sciences. Picture/Credit: Julia Nimke/Lindau Nobel Laureate Meetings

 

Robert Aumann, of the Hebrew University of Jerusalem and one of the 2005 Nobel laureates, kicked against the simplistic conception of human beings as utility-maximising machines from another angle.

The central argument of the game theorist’s talk on ‘mechanism design design’ was the imperative of thinking clearly about incentives and motivations.

We don’t, as Aumann stressed, eat because we want to digest food to give us energy to live (the kind of mistake that an economist who believes in calculating ‘econs’ might make about human incentives). We eat because we’re hungry.

Likewise, we don’t generally have sex because we want to propagate the human race. We have sex because it feels good.

If we miss these proximate motivations, we risk misunderstanding what drives human behaviour – and thus getting economics itself wrong.

Housing Talk: Why You Should Never Trust a House Price Index (Only)

An ever growing housing market? Picture/Credit: G0d4ather/iStock.com

 

Whenever I attend a dinner party or wedding or just meet old friends for coffee, at some point the topic turns to house prices, real estate investment and what seems to be generally perceived as a boom in the housing market. It seems that almost everyone is interested in buying a house or apartment. Often the aim is not just to cover the basic human need for shelter, but also to participate in the assumed never-ending boom and grab a small piece of the rapidly growing housing cake. House prices never fall, right?

People enthusiastically tell me stories of friends (or friends of friends) who finance multiple properties entirely via loans with zero down-payment. After all, real estate investment is a safe haven, isn’t it? And the expected rent will more than cover the monthly loan instalment, won’t it?

The figures show a rental price index (top) for Sydney together with its quality-adjusted rental price distribution (bottom). Credit: Sofie R. Waltl

But wait, my experience of obsessive ‘housing talk’ may be biased in at least three ways. First, I’m in my late twenties and so are my friends. Thinking about buying property is common, even necessary, for my age (and socioeconomic) group. Key decisions for the rest of our lives need to be made: should I stay in my first job or get (another) postgraduate degree? Should I go abroad and try out the expat life? Should I marry or break up with my long-term boy/girlfriend? What about kids? And hey, where and how will I live? Which leads to the obvious next question: to rent or to buy.

Second, I wrote a PhD thesis about housing markets. Although I focused on better ways of measuring dynamics in housing and rental markets and mainly dealt with technical problems in statistical modelling of such markets, most of my friends tend (wrongly) to conclude that I am an expert in real estate investment. Naturally, I end up being asked for advice about housing markets more often than most.

Third, as a native Austrian currently living in Germany, I mainly come across people for whom dramatic changes in house prices are a new phenomenon. After years of generally flat house prices, these countries have only recently seen bigger shifts.

Still, housing markets seem to be a hot topic and I rarely meet someone who’s not at all interested. I believe that widely reported changes in house price indices are the main reason for that.

In his famous book Irrational Exuberance, Nobel laureate Robert Shiller describes how first the US stock market (near its peak in 1999) and then the US housing market (around 2004) became socially accepted topics of conversation with broad media coverage. In fact, he writes, whenever he went out for dinner with his wife, he successfully predicted that someone at an adjacent table would speak about the respective market.

Shiller is well-known for his analysis of markets driven by psychological effects, which help to explain observed developments that a theory based on full rationality would rule out. Although most people are aware of housing bubbles of the recent past – for example, in Ireland, Japan, Spain and the United States – the belief in real estate as a quasi risk-free investment seems to remain unquestioned. The fact that sharp and sudden drops in prices are possible and happen regularly is widely ignored. 

The figures show a sales price index (top) for Sydney together with its quality-adjusted sales price distribution (bottom). Credit: Sofie R. Waltl

Everyone is affected by movements in housing and rental markets. If someone owns a property, it is usually her single largest asset; if someone rents, the cost often takes up a large fraction of her monthly income. This is why turbulence in these markets has larger effects on households than, for example, swings in the stock market (Case et al, 2005). The social implications of skyrocketing house prices and exploding rents but also of crashing markets are huge – which means that these markets need to be closely watched by policy-makers.

A house price index measures average movements of average houses in average locations belonging to an average price segment – a lot of averages! It is usually heavily aggregated, which implies that just because a national house price index reports rising prices, not every house will benefit equally from these increases. In fact, there is large variation in the distributions of prices and rents, and these distributions also change significantly over time.

Houses are highly heterogeneous goods (particularly compared with shares or bonds): no two houses are the same. Therefore, house price indices should be quality-adjusted, with differences in house characteristics taken into account. Still, changes in house price indices are often driven by developments in certain sub-markets, which are mainly determined by the three most important house characteristics: location, location, location. 
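As a rough illustration of what quality adjustment means in practice, here is a minimal sketch of a hedonic time-dummy index: log prices are regressed on house characteristics and period dummies, and the exponentiated period dummies trace the price of a constant-quality house over time. The data and variable names are invented for illustration; this is not the exact methodology behind any of the indices or papers mentioned here.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical transaction data: sale price plus a few quality characteristics.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "period":     rng.integers(0, 4, n),            # e.g. four quarters
    "size_m2":    rng.uniform(40, 200, n),
    "bedrooms":   rng.integers(1, 6, n),
    "inner_city": rng.integers(0, 2, n),
})
# Simulated log prices: quality effects + a market-wide trend + noise.
df["log_price"] = (11 + 0.006 * df["size_m2"] + 0.05 * df["bedrooms"]
                   + 0.3 * df["inner_city"] + 0.04 * df["period"]
                   + rng.normal(0, 0.1, n))

# Hedonic time-dummy regression: the period dummies track price movements
# while holding the observed characteristics constant ("constant quality").
model = smf.ols("log_price ~ size_m2 + bedrooms + inner_city + C(period)",
                data=df).fit()

# Exponentiated period dummies give the quality-adjusted index
# relative to the base period (period 0 = 1.0).
index = np.exp(model.params.filter(like="C(period)"))
print(index)

Note that even such a quality-adjusted index still reports a single average movement across all locations and market segments.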

Hence, house price developments are extremely heterogeneous even within urban areas (see McMillen, 2014, Guerrieri et al, 2013, and Waltl, 2016b), and thus the interpretation of aggregated national or even supra-national indices is questionable. For example, the S&P/Case-Shiller US National Home Price Index reports changes for the entire United States, the ECB and EUROSTAT publish indices for the European Union and the euro area, and the IMF even produces a global house price index.

Missing bubbly episodes in sub-markets when looking at such heavily aggregated figures seems unavoidable; and basing an individual investment decision on them is dubious. Similarly problematic is the assessment of a housing market using such aggregated measures for financial stability purposes.

Price map showing the average price (in thousand AUD) for an average house for different locations over time in Sydney. Credit: Sofie R. Waltl

A typical pattern is that markets for low-quality properties in bad locations experience the sharpest rises shortly before the end of a housing boom. Look at the lowest price segment in Sydney’s suburbs (black, dashed line) compared with the highest price segment in the inner city (orange, dotted line) around the peak in 2004. It is also this segment that experiences the heaviest falls afterwards.

A possible behavioural explanation is as follows: the longer a housing boom lasts, the more people (including more of the financially less well-off) want to participate in this apparently prosperous and safe market. Steady increases reported by house price indices give the impression that the entire market is booming with no end in sight. Whoever is able to participate becomes active in the housing market, and investment booms in ever more affordable properties – the lowest segment in bad locations.

A common misconception is the assumption that rising house prices necessarily translate into higher rents almost immediately. But when the price contains a bubble or speculative component, this is not always the case.

In general, economists speak of a bubble whenever the price of an asset is high just because of the hope of future price increases without any justification from ‘fundamentals’ such as construction costs (Stiglitz, 1990). Investing in over-valued property and hoping for the rent to cover the mortgage is thus more dangerous than it might appear (see Himmelberg et al, 2005, for the components of the price-to-rent ratio measuring the relationship between prices and rents; and Martin and Ventura, 2012, for asset bubbles in general).
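To make this distinction concrete, here is a stylised textbook decomposition (an illustration of the general idea, not a formula taken from the papers cited above): the price of a property can be written as the discounted value of its expected future rents plus a bubble term,

\[
P_t = \underbrace{\sum_{s=1}^{\infty} \frac{\mathbb{E}_t\!\left[R_{t+s}\right]}{(1+r)^{s}}}_{\text{fundamental value}} + \underbrace{B_t}_{\text{bubble}},
\qquad
\mathbb{E}_t\!\left[B_{t+1}\right] = (1+r)\,B_t ,
\]

where R denotes rents and r the discount rate. The bubble term can only survive if it is expected to keep growing at the rate of interest (precisely the ‘hope of future price increases’ in the definition above), and it pushes the price-to-rent ratio up without any change in fundamentals.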

 

The figure shows location- and segment-specific indices for Sydney. CBD refers to the Central Business District. Credit: Sofie R. Waltl

 

While we’ve already seen that price developments are very diverse, the same is also true for the relationship between prices and rents. Thus, simply looking at average price-to-rent ratios may miss the over-heating of a sub-market and its associated risks.

Credit: Sofie R. Waltl

Buying property is thus more delicate than urban legends about the safety of real estate investment suggest. Above all, developments in housing markets are diverse even within small geographical areas and one number alone can never appropriately reflect what is going on. A complete picture of the dynamics in housing markets is essential from the perspective of an investor as well as a policy-maker.

And in case you’re hoping for investment advice, here’s the only piece I can offer: just because everyone buys does not mean that YOU should go out and buy whatever you can afford. In fact, when everyone (including your friend with questionable financial literacy) decides to invest in real estate, it might be exactly the wrong moment. Never rely on house price indices only, but go out and collect as much information as possible. And don’t forget: location, location, location… 

 

 


The figures show quality-adjusted developments in the Sydney housing market, and are part of the results of my doctoral thesis Modelling housing markets: Issues in economic measurement, written at the University of Graz under the supervision of Robert J. Hill. I am very grateful for his valuable support and advice. Calculations are based on data provided by Australian Property Monitors. The results on which this article is based are published as Waltl (2016a) and Waltl (2016b). The part about price-to-rent ratios is currently under review at a major urban economics journal (here is a working paper version). My work has benefitted from funding from the Austrian National Bank Jubiläumsfondsprojekt 14947, the 2014 Council of the University of Graz JungforscherInnenfonds, and the Austrian Marshallplan Foundation Fellowship (UC Berkeley Program 2016/2017). The views presented here are solely my own and do not necessarily reflect those of any past, present or future employer or sponsor.