Nobel Prize in Physics 2017 – the Discovery of Gravitational Waves

On 14 September 2015, the LIGO detectors in the USA saw space vibrate with gravitational waves for the very first time. Even though the signal was tiny – the difference in arrival time between the two LIGO detectors was only 0.0069 seconds, as Olga Botner from the Nobel Committee for Physics points out – it marked the beginning of a new era in astronomy: with gravitational-wave astronomy, researchers will be able to study the most violent events in the universe, such as the merging of black holes. The merger detected in September 2015 took place an incredible 1.3 billion light-years away from Earth.

 


 

The fourth observation of a gravitational wave, detected on 14 August 2017, was announced on 27 September 2017 at the meeting of G7 science ministers in Turin, Italy. It was the first to have been picked up by the Virgo detector, located near Pisa. Detection at a third site, alongside the two LIGO detectors in the US states of Washington and Louisiana, allows a much better localisation of a wave's source in the sky. This signal, too, was produced by two merging black holes.

Gravitational waves had been predicted by Nobel Laureate Albert Einstein as a consequence of his General Theory of Relativity, published in 1915. In his mathematical model, Einstein combined space and time in a single continuum he called ‘spacetime’ – which is where the expression ‘ripples in spacetime’ for gravitational waves comes from.

LIGO, the Laser Interferometer Gravitational-Wave Observatory, is a collaborative project with over one thousand researchers from more than twenty countries. Together, they have realised a vision that is almost fifty years old. The 2017 Nobel Laureates have all been invaluable to the success of LIGO. Pioneers Rainer Weiss and Kip S. Thorne, together with Barry C. Barish, the scientist and leader who brought the project to completion, ensured that more than four decades of effort led to gravitational waves finally being observed.

 


The three new Nobel Laureates in Physics: Rainer Weiss, Kip S. Thorne, and Barry C. Barish (from left). Copyright: Nobel Media, Illustrations by Niklas Elmehed

 

As early as the mid-1970s, both Kip Thorne and Rainer Weiss were firmly convinced that gravitational waves could be detected. Weiss had analysed possible sources of background noise that would disturb the measurements, and had designed a detector, a laser-based interferometer, that would overcome this noise. While Weiss was developing his detectors at MIT in Cambridge, outside Boston, Thorne started working with Ronald Drever, who built his first prototypes in Glasgow, Scotland. Drever eventually moved to join Thorne at Caltech in Pasadena. Together, Weiss, Thorne and Drever formed a trio that pioneered the development for many years. Drever lived to learn about the first discovery, but passed away in March 2017.

Together, Weiss, Thorne and Drever developed a laser-based interferometer. The principle has long been known: an interferometer consists of two arms that form an L. At the corner and at the ends of the L, massive mirrors are installed. A passing gravitational wave affects the two arms differently – when one arm is compressed, the other is stretched. A laser beam that bounces between the mirrors can measure the change in the lengths of the arms. If nothing happens, the light beams cancel each other out when they meet at the corner of the L. However, if either arm changes length, the light travels different distances, the light waves fall out of step, and the intensity of the resulting light changes where the beams meet – so even a minute time difference between the two beams can be detected.
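The arithmetic behind this read-out can be sketched in a few lines (a deliberately simplified two-beam model – the real instrument uses far more elaborate optics):

```python
import math

WAVELENGTH = 1.064e-6  # metres: wavelength of LIGO's infrared laser

def output_intensity(delta_L, wavelength=WAVELENGTH):
    """Relative light intensity at the detector for an arm-length difference delta_L.

    Simplified two-beam interference: the phase difference is
    2*pi*(2*delta_L)/wavelength (factor 2 because the light travels each
    arm out and back), and the instrument is set so the beams cancel
    exactly when the arms are equal (delta_L == 0).
    """
    phase = 2 * math.pi * (2 * delta_L) / wavelength
    return math.sin(phase / 2) ** 2  # 0.0 = perfect cancellation, 1.0 = all light out

print(output_intensity(0.0))             # arms equal: beams cancel -> 0.0
print(output_intensity(WAVELENGTH / 4))  # quarter-wave difference -> 1.0
```

With the arms balanced, the beams cancel; a length difference of a quarter wavelength would send all the light to the detector. Real gravitational-wave signals shift the arms by a fantastically smaller amount, which is why the light path is folded many times.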

The idea was fairly simple, but the devil was in the details, so it took over forty years to realise. Large-scale instruments are required to measure changes in length smaller than an atomic nucleus. The plan was to build two interferometers, each with four-kilometre-long arms along which the laser beam bounces many times, thus extending the path of the light and increasing the chance of detecting any tiny stretching of spacetime. It took years of developing the most sensitive instrument ever built to be able to distinguish gravitational waves from all the background noise. This required sophisticated analysis and advanced theory, and Kip Thorne was the expert on both.
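To get a feel for the scale: for a strain of order 10⁻²¹ (an illustrative textbook figure, not a number quoted in this article), a four-kilometre arm changes by only about 4×10⁻¹⁸ metres – a couple of hundred times smaller than a proton:

```python
strain = 1e-21           # dimensionless amplitude h of a passing wave (illustrative)
arm_length = 4_000.0     # metres: the length of each LIGO arm
proton_radius = 8.4e-16  # metres, approximate

# The change in arm length is simply delta_L = h * L
delta_L = strain * arm_length
print(f"arm length change: {delta_L:.1e} m")               # ~4.0e-18 m
print(f"a proton is ~{proton_radius / delta_L:.0f}x larger")
```

Folding the beam path many times multiplies the effective length L, and with it the measurable displacement.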

 


 

Running such a project on a small scale was no longer possible, and a new approach was needed. In 1994, when Barry Barish took over as leader of LIGO, he transformed the small research group of about forty people into a large-scale international collaboration with more than a thousand participants. He sought out the necessary expertise and brought in numerous research groups from many countries.

In September 2015, LIGO was about to start up again after an upgrade that had lasted several years. Now equipped with tenfold more powerful lasers, mirrors weighing 40 kilos, highly advanced noise filtering, and one of the world’s largest vacuum systems, it captured a wave signal a few days before the experiment was set to officially start. The wave first passed the Livingston, Louisiana, facility and then, seven milliseconds later – moving at the speed of light – it appeared at Hanford, Washington, three thousand kilometres away.
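These figures are self-consistent, which one can check directly: travelling at the speed of light, a wave needs about ten milliseconds to cross the roughly 3,000 km between the sites, so the observed seven-millisecond delay implies the wave arrived at an angle to the line joining the detectors (the separation value below is approximate):

```python
import math

c = 299_792_458.0     # speed of light in m/s
separation = 3_000e3  # metres between the Hanford and Livingston sites (approximate)

# A wave travelling straight along the line joining the detectors
# would produce the largest possible arrival-time difference:
max_delay = separation / c
print(f"maximum possible delay: {max_delay * 1e3:.1f} ms")  # ~10.0 ms

# The measured delay of ~6.9 ms constrains the arrival direction:
measured_delay = 6.9e-3  # seconds
angle = math.degrees(math.acos(measured_delay / max_delay))
print(f"angle between wave direction and detector baseline: about {angle:.0f} degrees")
```

A single pair of detectors only pins the source down to a ring on the sky at such an angle – which is why the third site, Virgo, improves the localisation so dramatically.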

 

Young researcher was the first person to ‘see’ a gravitational wave

A message from the computerised system was sent early in the morning of 14 September 2015. Everyone in the US was sleeping, but in Hannover, Germany, it was 11:51, and Marco Drago, a young Italian physicist at the Max Planck Institute for Gravitational Physics – also known as the Albert Einstein Institute and part of the LIGO Collaboration – was getting ready for lunch. The curves he glimpsed looked exactly like those he had practised recognising so many times. Could he really be the first person in the world to see gravitational waves? Or was it just a false alarm, one of the occasional blind tests about which only a few people knew?

The wave’s form was exactly as predicted, and it was not a test. Everything fit perfectly. The pioneers, now in their 80s, and their LIGO colleagues were finally able to hear the music of their dreams: a chirp, like a bird’s call. The discovery was almost too good to be true, and they were not allowed to reveal the news to anyone, even their families, until February the following year.

What will we learn from the observation of gravitational waves? As Karsten Danzmann, Director of the Albert Einstein Institute and Drago’s boss, explained: “More than 99 percent of the universe is dark to direct observation.” And Rainer Weiss elaborated during a telephone conversation with Thors Hans Hansson of the Nobel Committee: merging black holes probably send the strongest signal, but there are many other possible sources, like neutron stars orbiting each other and supernova explosions. Thus, gravitational-wave astronomy opens a new and surprising window onto the universe.

The Workings of Our Inner Clock – Nobel Prize in Physiology or Medicine 2017

2017 Nobel Laureates in Physiology or Medicine: Jeffrey C. Hall, Michael Rosbash and Michael W. Young. Illustration: Niklas Elmehed. Copyright: Nobel Media AB 2017


 

Our body functions differently during the day than it does during the night – as do those of many organisms. This phenomenon, referred to as the circadian rhythm, is an adaptation to the drastic changes in the environment over the course of the Earth’s 24-hour rotation about its own axis. How does this biological clock work? A complex network of molecular reactions within our cells ensures that certain proteins accumulate at high levels at night and are degraded during the daytime. For elucidating these fundamental molecular mechanisms, Jeffrey C. Hall, Michael Rosbash and Michael W. Young were awarded the Nobel Prize in Physiology or Medicine 2017.

Already in the 18th century, the astronomer Jean-Jacques d’Ortous de Mairan observed that plants moved their leaves and flowers according to the time of day regardless of whether they were placed in the light or in the dark, suggesting the existence of an inner clock that worked independently of external stimuli. However, the idea remained controversial for centuries, until additional physiological processes were shown to be regulated by a biological clock and the concept of endogenous circadian rhythms was finally established.

 


Simplified illustration of the feedback regulation of the period gene. Illustration: © The Nobel Committee for Physiology or Medicine. Illustrator: Mattias Karlén

The first evidence of an underlying genetic programme was found by Seymour Benzer and Ronald Konopka in 1971 when they discovered that mutations in a particular gene, later named period, disturbed the circadian rhythm in fruit flies. In the 1980s, the collaborating teams of the American geneticists Jeffrey C. Hall and Michael Rosbash at Brandeis University as well as the laboratory of Michael W. Young at Rockefeller University succeeded in deciphering the molecular structure of period. Hall and Rosbash subsequently discovered how it was involved in the circadian cycle: they found that the levels of the gene’s product, the protein PER, oscillated in a 24-hour cycle, and suggested that high levels of PER may in fact block further production of the protein in a negative self-regulatory feedback loop. However, how exactly this feedback mechanism might work remained elusive.

Years later, the team of Michael W. Young contributed the next piece to the circadian puzzle with the discovery of another clock gene, named timeless. The protein products of period and timeless bind each other and are then able to enter the cell’s nucleus to block the activity of the period gene. The cycle was closed when, in 1998, the teams of Hall and Rosbash found two further genes, clock and cycle, that regulate the activity of both period and timeless, and another group showed that vice versa the gene products of timeless and period control the activity of clock. Later studies by the laureates and others found additional components of this highly complex self-regulating network and discovered how it can be affected by light.

The ability of this molecular network to regulate itself explains how it can oscillate. However, it does not explain why this oscillation occurs every 24 hours. After all, both gene expression and protein degradation are relatively fast processes. It was thus clear that a delay mechanism must be in place. An important insight came from Young’s team: the researchers found that a particular protein can delay the process and named the corresponding gene doubletime.
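The role of the delay can be illustrated with a toy simulation (purely illustrative: the equation and all parameter values below are assumptions made for this sketch, not the laureates' actual biochemistry). A protein represses its own production, but the repression acts on the concentration as it was several hours earlier; the level therefore overshoots and undershoots instead of settling at a constant value:

```python
# Toy delayed negative feedback: dP/dt = k / (1 + (P(t - tau)/K)**n) - d*P
# integrated with a simple Euler scheme and a history buffer for the delayed term.
k, K, n, d = 1.0, 1.0, 4, 0.2  # illustrative production, threshold, steepness, decay
tau = 6.0                      # hours of delay before repression takes effect
dt = 0.01                      # time step in hours
steps = int(200 / dt)          # simulate 200 hours
lag = int(tau / dt)            # delay expressed in time steps

P = [0.1] * (lag + 1)          # history buffer: start at a low protein level
for _ in range(steps):
    delayed = P[-lag - 1]      # protein level tau hours ago
    dP = k / (1 + (delayed / K) ** n) - d * P[-1]
    P.append(P[-1] + dt * dP)

late = P[len(P) // 2:]         # discard the initial transient
print(f"level oscillates between {min(late):.2f} and {max(late):.2f}")
```

With the delay removed (tau close to zero), the same feedback loop simply relaxes to a steady level – it is the lag, supplied in the fly by the doubletime mechanism described above, that turns self-repression into a sustained oscillation.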

It has since been discovered that the physiological clock of humans works according to the same principles as that of fruit flies. To ensure that our whole body is in sync, our circadian rhythm is regulated by a central pacemaker in the hypothalamus. The circadian clock is affected by external cues such as food intake, physical activity or temperature. But how does the circadian clock affect us? Our biological rhythm influences our sleep patterns, how much we eat, our hormone levels, our blood pressure and our body temperature. Dysfunction of the circadian clock is associated with a range of diseases, including sleep disorders, depression, bipolar disorder and neurological diseases. There is also some evidence suggesting that a misalignment between lifestyle and the inner biological clock can have negative consequences for our health. An aim of ongoing research in the field of chronobiology is thus to regulate circadian rhythms in order to improve health.

 

Tackling the Intractable

Depression is one of the most common and debilitating illnesses worldwide, especially because many sufferers do not respond adequately to any of the currently available treatment options. Picture/Credit: SanderStock/iStock.com


 

The scourge of depression affects more than 300 million people worldwide and is the leading global cause of disability. The Nobel Prize-winning research of Arvid Carlsson, Paul Greengard and Eric Kandel, among others, paved the way for effective drugs to treat the condition.

How do nerve cells communicate with each other? This was the question that fascinated Paul Greengard and which led him to unravel the biochemical basis for how dopamine acts as a neurotransmitter between nerve cells. His scientific discoveries provided part of the underlying scientific rationale for drugs such as Prozac that act to increase the levels of serotonin, another neurotransmitter whose levels are implicated in depression. Indeed, several so-called selective serotonin reuptake inhibitors (SSRIs) have been developed for the treatment of depression and other disorders, and they are the most commonly prescribed anti-depressants in many countries.

However, even though these compounds provide relief to many, a substantial proportion of individuals with depression do not respond adequately either to these drugs or to cognitive behavioural therapy, the other common first-line treatment for depression. In fact, about one third of people with severe depression do not initially respond adequately to any currently available therapy. Recently revived research into the medicinal potential of psychedelic drugs, which include LSD and psilocybin from mushrooms, indicates that such substances, when combined with appropriate psychiatric care, may be an effective tool in combating depressive disorders. The stage is now set for the largest ever clinical trial examining the effectiveness of a psychedelic substance to treat depression.

Although psychedelic drugs may revolutionise the treatment of depression, at a molecular level they act on the same signalling system as traditional SSRIs: both increase activity in the brain’s serotonin pathways – SSRIs by blocking the reuptake of serotonin into the signalling neuron, psychedelics by directly stimulating serotonin receptors. The key difference is that psychedelics primarily engage different serotonin receptors, which means that different regions of the brain are affected, leading to very different physiological effects. Thus, while traditional SSRIs act to reduce stress, anxiety and aggression and tend to promote resilience alongside emotional blunting, the goal of treatment with psychedelics is rather to dissolve rigid thinking and encourage environmental sensitivity and emotional release. Proponents of psychedelics thus claim that the cumulative effect is to increase well-being, while more traditional medications rather seek simply to decrease the symptoms of depression.

The potential of psychedelics to tackle depression head-on and “wipe the slate clean” instead of simply addressing the symptoms almost sounds too good to be true. Yet psychedelic drugs are strictly prohibited in most countries around the world. In the UK, for example, both LSD and psilocybin are classified as Class A drugs (those whose consumption is deemed most dangerous) – with good reason: LSD abuse in particular is linked with a range of adverse consequences, including panic attacks, psychosis and perceptual disorders. Some users nevertheless invoke Paracelsus’ maxim that “the dose makes the poison”: the regular ingestion of LSD at amounts not sufficient to elicit full-blown hallucinations, but which users claim improve focus and creativity – a practice referred to as micro-dosing – has attracted a huge amount of attention in recent times, in large part due to anecdotal evidence that it is rife in Silicon Valley. Micro-dosing with psilocybin is also increasing in popularity.

The use of psilocybin, found in “magic mushrooms”, was an element of some prehistoric cultures, and, as with other psychedelics, both its recreational and its medicinal use were popular in the 1960s. Prohibitive anti-drug legislation across the globe meant that research into the drug was severely curtailed in subsequent decades. However, the last 20 years have witnessed a gradual renaissance of psilocybin research.

 

Psilocybin, a psychedelic substance found in “magic mushrooms”, has shown promise in tackling treatment-resistant depression and in alleviating the anxiety and depressive symptoms of cancer patients. Picture/Credit: Misha Kaminsky/iStock.com


 

While regular small doses appear to be one potential approach, most recent clinical studies that have tested the effects of psilocybin for depression in a controlled set-up have adopted a strategy in which a single higher dose of the substance, or several such doses, is administered over a short period of time. This approach is in sharp contrast to the one taken for classical anti-depressants, which are consumed daily. The single-high-dose strategy has yielded promising results for patients with treatment-resistant depression and also for those suffering from the anxiety and depression often experienced by individuals with cancer. The majority of patients treated with psilocybin in this way exhibited an improvement in the symptoms of depression for up to six months. However, even though these recent studies have shown positive results, there remain a number of significant caveats: firstly, one of the most recent trials was open-label, meaning that the participants knew in advance that they would be receiving a psychedelic drug; secondly, most of the studies to date have been small, with only 50 subjects or fewer; finally, as in most other trials of this kind, the reporting measures are very subjective in nature, relying upon observation by healthcare professionals or friends, or on self-reporting by the patients themselves.

It is thus still too early to draw any definitive conclusions regarding the efficacy of psilocybin in alleviating the symptoms of depression. This might be about to change, however: the British start-up company Compass Pathways is close to securing final approval to carry out what would be the largest clinical trial to date looking at the efficacy of psilocybin against treatment-resistant depression. The two-part trial will incorporate a much larger number of patients than previous trials (approximately 400) and will be performed with leading clinical research institutions across Europe. The first part will focus on determining the most effective dosage of psilocybin; in the second part, patients will receive the psilocybin therapy as a single treatment. An important feature of the trial will be the use of more objective digital tracking methods to monitor the effects of psilocybin. As in previous smaller-scale studies, careful psychological support and monitoring will be crucial: research has shown that simply administering psychedelic drugs without providing a proper supportive environment, including counselling, greatly reduces their efficacy against depression and may even be counter-productive.

Even though the first clinical data suggest promising effects of psychedelic drugs in the treatment of depression, several questions remain open. It is unclear how representative the study populations have been, as there may have been a bias toward recruiting those who are more favourably disposed to using psychedelics, and positive prior experiences with such substances may affect treatment outcome. Furthermore, it has yet to be determined at which point such substances should be introduced as therapy – as a front-line therapy before depressive symptoms become too ingrained and before long-term therapy with classical anti-depressants, or rather as a treatment of last resort when all else fails.

Winners and Losers From a ‘Commodities-For-Manufactures’ Trade Boom

 


Soy planting in Parana, Brazil. Photo/Credit: alffoto/iStock.com

 

The rise of China has been one of the most important events to hit the world economy in recent decades. Rapid economic growth has had enormous implications within China, lifting millions of Chinese citizens out of poverty. But China’s rise has also deeply affected the economies of other countries in ways that we are only beginning to understand.

One fact that economists have learned from studying China’s impact on other countries is that competition from the booming Chinese manufacturing sector has had a big effect on manufacturing workers elsewhere. According to research by David Autor, David Dorn and Gordon Hanson, manufacturing employment has declined much more quickly in parts of the United States that produce goods imported from China.

These findings of negative impacts of Chinese competition for manufacturing workers have been corroborated by studies of European countries. For example, research by João Paulo Pessoa finds that UK workers initially employed in industries competing with Chinese products earned less and spent more time out of employment in the early 2000s.

But China is not only a competitor for other countries’ industries; it has also become an increasingly important consumer of goods produced elsewhere. In particular, China’s rapidly growing economy fuelled a worldwide commodity boom in the early 2000s.

This had an especially big impact on developing countries, whose swiftly rising exports to China became dominated by raw materials such as crops, ores and oil. Exports from low- and middle-income countries to China grew twelvefold from 1995 to 2010, compared with a twofold rise in their exports to everywhere else, so that China became an increasingly important trade partner for the developing world.

In 1995, commodities made up only 20% of these countries’ rather limited exports to China. But by 2010, nearly 70% of exports to China from developing countries were commodities (Figure 1A). Meanwhile, these countries’ rapidly growing imports from China consisted almost entirely of manufactured goods (Figure 1B).

 


Figure 1: Share of commodities in trade of developing countries. ‘Commodities’ include products of the agricultural, forestry, fisheries/aquaculture and mining sectors. ‘Developing countries’ include non-high-income countries as defined by the World Bank, excluding countries in East and Southeast Asia, which tend to participate in regional manufacturing supply chains. Trade data is from CEPII BACI. Credit: Francisco Costa

 

This swift transition to a new kind of trade relationship has sometimes been unpopular with China’s trade partners. For example, before a visit to China in 2011, Brazil’s former president Dilma Rousseff promised that she would be “working to promote Brazilian products other than basic commodities,” amid worries about “overreliance on exports of basic items such as iron ore and soy” (Los Angeles Times).

So for countries like Brazil, how did the benefits from the China-driven commodity boom compare to the costs of rising competition from Chinese manufactures?

In my research with Jason Garred and João Paulo Pessoa, published recently in the Journal of International Economics, we look at how the steep rise in ‘commodities-for-manufactures’ trade with China affected workers in Brazil. It turns out that Brazil’s evolving trade relationship with China in the early 2000s echoed that of the rest of the developing world:

  • First, trade with China exploded: just 2% of Brazil’s exports went to China in 1995, but this had risen to 15% by 2010.
  • Second, exports to China became increasingly concentrated in a few commodities (Figure 2A). In 2010, more than 80% of Brazilian exports to China were commodities, mostly soybeans and iron ore. In the first decade of the 2000s, almost all of the growth in export demand for these two Brazilian products came from China.
  • Finally, like the rest of the developing world, Brazil’s imports from China rose quickly but included almost exclusively manufactured goods (Figure 2B).

Our study analyses the 2000 and 2010 Brazilian censuses to check how the fortunes of workers across different regions and industries evolved during the boom in trade with China.

 

Figure 2: Share of commodities in trade of Brazil. ‘Commodities’ include products of the agricultural, forestry, fisheries/aquaculture and mining sectors. Trade data is from CEPII BACI. Credit: Francisco Costa


 

We first confirm that during this time, there was a negative effect of Chinese import competition on employees of manufacturing firms. Specifically, in parts of Brazil producing manufactured goods imported from China (such as electronics), growth in manufacturing workers’ wages between 2000 and 2010 was systematically slower.

But our findings also suggest that growth in trade with China created winners as well as losers within Brazil. Wages rose more quickly in parts of the country benefiting more from increasing Chinese demand, which were mainly regions producing soy or iron ore.

We also find that these regions saw a rise in the share of employed workers in formal jobs. Unlike jobs in the informal economy, jobs in the formal sector come with unemployment insurance, paid medical leave and other benefits, and so this increase in formality can be seen as a rise in non-wage compensation.

So while Brazil’s manufacturing workers seem to have lost out from Chinese import competition, rising exports to China appear to have benefited a different subset of Brazilian workers.

Our study concentrates on the short-run effects of trade with China on Brazilian workers. This means that our results don’t provide a full account of the trade-offs between the twin booms in commodity exports and manufacturing imports. For example, we don’t know what happened to the winners from the commodity boom once Chinese demand slowed in the mid-2010s.

We also do not consider the benefits to Brazilian consumers from access to cheaper imported goods from China. But what we do find suggests that trading raw materials for manufactures with China may not have been a raw deal for developing countries like Brazil after all.

‘Homo Economicus’ Reconsidered

Economists live in an ideological fantasyland. They see people as a collection of reliably rational, utility-maximising, calculating machines.

These ‘econs’ – whom the economists study – never make mistakes, which means their behaviour, when they interact in free markets, can be reliably modelled using a handful of equations that essentially apply the 250-year-old insights of Adam Smith and other classical economists.

That at least is one of the popular conceptions of what economists do.

But a few days at the 6th Lindau Meeting on Economic Sciences shows that this is a gross caricature of the profession.

Daniel McFadden, of the University of California, Berkeley, who won the Nobel Prize in 2000, used his presentation to demonstrate problems with applying the simple models of the likes of Adam Smith and David Ricardo to every issue.

‘We respect what they’ve done but we should always question whether it applies,’ he warned.

 

Daniel McFadden during his lecture at the 6th Lindau Meeting on Economic Sciences. Picture/Credit: Christian Flemming/Lindau Nobel Laureate Meetings


 

Peter Diamond of MIT, one of the 2010 Nobel Laureates, showed himself to be under no illusion that people always make decisions that are in their long-term self-interest, citing the example of failures in the private pensions market.

‘Left to their own devices people don’t save enough,’ he told the audience of young economists, pointing to a striking survey of a sample of US baby boomers, showing that almost 80% gave an incorrect answer to a simple question relating to compound interest.

No calculating machines there.
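The gap Diamond pointed to is easy to illustrate (the survey's actual question is not reproduced here; the figures below are a hypothetical example): many people implicitly compute simple interest, while savings actually compound.

```python
def compound(principal, rate, years):
    """Value of `principal` after `years` at annual `rate`, interest reinvested."""
    return principal * (1 + rate) ** years

def simple(principal, rate, years):
    """The naive answer: interest earned on the original principal only."""
    return principal * (1 + rate * years)

# e.g. $100 saved at 10% for 20 years
print(f"compound: ${compound(100, 0.10, 20):.2f}")  # grows to about $672.75
print(f"simple:   ${simple(100, 0.10, 20):.2f}")    # only $300.00
```

Over 20 years the compounded account ends up more than twice as large as the naive linear estimate – precisely the kind of difference that matters for judging whether one is saving enough.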

Diamond’s theme was what we can learn from international experience about designing better public and private pension systems – with examples ranging from Chile’s sudden denationalisation of its public pensions scheme to the low-cost and efficient funds available to some three million US civil servants.

Simple economic models, Diamond argued, were a poor basis for setting public policy. ‘Models are by definition incomplete’, he said, ‘so applying them literally would be a serious mistake’.

Sir James Mirrlees, of the Chinese University of Hong Kong and co-recipient of the 1996 Nobel Prize, used his talk to discuss our ‘bounded rationality’ as humans. He pointed out that the choices we make are influenced not simply by a cold calculation of self-interest but also by external factors such as education, advertising and experience.

He related that his own modelling exercise showed that there were some circumstances in which better outcomes were delivered if people were offered no choice, but were simply told what to do. ‘It’s unusual in economics to have a theory that says minimise freedom,’ he noted.

 

Sir James Mirrlees talking to young economists after his lecture at the 6th Lindau Meeting on Economic Sciences. Picture/Credit: Julia Nimke/Lindau Nobel Laureate Meetings


 

Robert Aumann, of the Hebrew University of Jerusalem and one of the 2005 Nobel laureates, kicked against the simplistic conception of human beings as utility-maximising machines from another angle.

The central argument of the game theorist’s talk on ‘mechanism design design’ was the imperative of thinking clearly about incentives and motivations.

We don’t, as Aumann stressed, eat because we want to digest food to give us energy to live (the kind of mistake that an economist who believes in calculating ‘econs’ might make about human incentives). We eat because we’re hungry.

Likewise, we don’t generally have sex because we want to propagate the human race. We have sex because it feels good.

If we miss these proximate motivations, we risk misunderstanding what drives human behaviour – and thus getting economics itself wrong.

Housing Talk: Why You Should Never Trust a House Price Index (Only)

An ever growing housing market? Picture/Credit: G0d4ather/iStock.com


 

Whenever I attend a dinner party or wedding or just meet old friends for coffee, at some point the topic turns to house prices, real estate investment and what seems to be generally perceived as a boom in the housing market. It seems that almost everyone is interested in buying a house or apartment. Often the aim is not just to cover the basic human need for shelter, but also to participate in the assumed never-ending boom and grab a small piece of the rapidly growing housing cake. House prices never fall, right?

People enthusiastically tell me stories of friends (or friends of friends) who finance multiple properties entirely via loans with zero down-payment. After all, real estate investment is a safe haven, isn’t it? And the expected rent will more than cover the monthly loan instalment, won’t it?

The figures show a rental price index (top) for Sydney together with its quality-adjusted rental price distribution (bottom). Credit: Sofie R. Waltl

But wait, my experience of obsessive ‘housing talk’ may be biased in at least three ways. First, I’m in my late twenties and so are my friends. Thinking about buying property is common, even necessary, for my age (and socioeconomic) group. Key decisions for the rest of our lives need to be made: should I stay in my first job or get (another) postgraduate degree? Should I go abroad and try out the expat life? Should I marry or break up with my long-term boy/girlfriend? What about kids? And hey, where and how will I live? Which leads to the obvious next question: to rent or to buy.

Second, I wrote a PhD thesis about housing markets. Although I focused on better ways of measuring dynamics in housing and rental markets and mainly dealt with technical problems in statistical modelling of such markets, most of my friends tend (wrongly) to conclude that I am an expert in real estate investment. Naturally, I end up being asked for advice about housing markets more often than most.

Third, as a native Austrian currently living in Germany, I mainly come across people for whom dramatic changes in house prices are a new phenomenon. After years of generally flat house prices, these countries have only recently seen bigger shifts.

Still, housing markets seem to be a hot topic and I rarely meet someone who’s not at all interested. I believe that widely reported changes in house price indices are the main reason for that.

In his famous book Irrational Exuberance, Nobel laureate Robert Shiller describes how first the US stock market (near its peak in 1999) and then the US housing market (around 2004) became socially accepted topics of conversation with broad media coverage. In fact, he writes, whenever he went out for dinner with his wife, he successfully predicted that someone at an adjacent table would speak about the respective market.

Shiller is well-known for his analysis of markets driven by psychological effects, which help to explain observed developments that a theory based on full rationality would rule out. Although most people are aware of housing bubbles of the recent past – for example, in Ireland, Japan, Spain and the United States – the belief in real estate as a quasi risk-free investment seems to remain unquestioned. The fact that sharp and sudden drops in prices are possible and happen regularly is widely ignored. 

The figures show a sales price index (top) for Sydney together with its quality-adjusted sales price distribution (bottom). Credit: Sofie R. Waltl

Everyone is affected by movements in housing and rental markets. If someone owns a property, it is usually her single largest asset; if someone rents, the cost often takes up a large fraction of her monthly income. This is why turbulence in these markets has larger effects on households than, for example, swings in the stock market (Case et al, 2005). The social implications of skyrocketing house prices and exploding rents but also of crashing markets are huge – which means that these markets need to be closely watched by policy-makers.

A house price index measures average movements of average houses in average locations belonging to an average price segment – a lot of averages! It is usually heavily aggregated, which implies that just because a national house price index reports rising prices, not every house will benefit equally from these increases. In fact, there is large variation in the distributions of prices and rents, and these distributions also change significantly over time.

Houses are highly heterogeneous goods (particularly compared with shares or bonds): no two houses are the same. Therefore, house price indices should be quality-adjusted, with differences in house characteristics taken into account. Still, changes in house price indices are often driven by developments in certain sub-markets, which are mainly determined by the three most important house characteristics: location, location, location. 
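One standard way to make such a quality adjustment is the hedonic ‘time-dummy’ method: regress log sale prices on house characteristics plus a dummy variable for each period, and read the index off the period coefficients. The sketch below illustrates the idea with entirely made-up sales data and only two characteristics (size and a location flag):

```python
import numpy as np

# Hypothetical sales records: (period, size in m2, good_location 0/1, price)
sales = [
    (0, 80, 0, 300_000), (0, 120, 1, 600_000), (0, 100, 1, 500_000),
    (1, 90, 0, 360_000), (1, 110, 1, 610_000), (1, 100, 0, 420_000),
    (2, 85, 0, 380_000), (2, 130, 1, 780_000), (2, 100, 1, 640_000),
]

# Design matrix: intercept, log(size), location, dummies for periods 1 and 2
X = np.array([[1.0, np.log(size), loc, float(t == 1), float(t == 2)]
              for t, size, loc, _ in sales])
y = np.log([price for *_, price in sales])

# Least-squares fit of log price on characteristics and time dummies
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Quality-adjusted index: base period = 100, later periods from the dummies
index = [100.0] + [100.0 * np.exp(b) for b in beta[3:]]
for t, level in enumerate(index):
    print(f"period {t}: {level:.1f}")
```

With real data the characteristic set would of course be far richer, with location (location, location) chief among the variables.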

Hence, house price developments are extremely heterogeneous even within urban areas (see McMillen, 2014, Guerrieri et al, 2013, and Waltl, 2016b), and thus the interpretation of aggregated national or even supra-national indices is questionable. For example, the S&P/Case-Shiller US National Home Price Index reports changes for the entire United States, the ECB and EUROSTAT publish indices for the European Union and the euro area, and the IMF even produces a global house price index.

Missing bubbly episodes in sub-markets when looking at such heavily aggregated figures seems unavoidable; and basing an individual investment decision on them is dubious. Similarly problematic is the assessment of a housing market using such aggregated measures for financial stability purposes.

Price map showing the average price (in thousand AUD) for an average house for different locations over time in Sydney. Credit: Sofie R. Waltl


A typical pattern is that markets for low-quality properties in bad locations experience the sharpest rises shortly before the end of a housing boom. Look at the lowest price segment in Sydney’s suburbs (black, dashed line) compared with the highest price segment in the inner city (orange, dotted line) around the peak in 2004. It is also this segment that experiences the heaviest falls afterwards.

A possible behavioural explanation is as follows: the longer a housing boom lasts, the more people (including the financially less well-off) want to participate in this apparently prosperous and safe market. Steady increases reported by house price indices give the impression that the entire market is booming with no end in sight. Whoever is able to participate becomes active in the housing market, and investment booms in ever more affordable properties: the lowest segment in bad locations.

A common misconception is the assumption that rising house prices necessarily translate into higher rents almost immediately. But when the price contains a ‘bubble or speculative component’, this is not always the case.

In general, economists speak of a bubble whenever the price of an asset is high just because of the hope of future price increases without any justification from ‘fundamentals’ such as construction costs (Stiglitz, 1990). Investing in over-valued property and hoping for the rent to cover the mortgage is thus more dangerous than it might appear (see Himmelberg et al, 2005, for the components of the price-to-rent ratio measuring the relationship between prices and rents; and Martin and Ventura, 2012, for asset bubbles in general).
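To make the price-to-rent logic concrete, here is a minimal sketch (with purely hypothetical numbers) comparing an observed price-to-rent ratio with a simple user-cost benchmark in the spirit of Himmelberg et al. (2005), where the fundamental ratio is roughly the inverse of the annual user cost of owning:

```python
# Hedged sketch with hypothetical figures: compare an observed
# price-to-rent ratio with a simple user-cost benchmark.

price = 500_000          # purchase price of the property
annual_rent = 12_000     # annual rent for a comparable dwelling

observed_ratio = price / annual_rent

# User-cost components (annual rates, illustrative assumptions)
interest = 0.035
property_tax = 0.005
maintenance = 0.010
expected_appreciation = 0.020

user_cost = interest + property_tax + maintenance - expected_appreciation
fundamental_ratio = 1.0 / user_cost   # rent must cover the user cost of owning

print(f"observed P/R:    {observed_ratio:.1f}")
print(f"fundamental P/R: {fundamental_ratio:.1f}")
if observed_ratio > fundamental_ratio:
    print("price exceeds what fundamentals justify: possible bubble component")
```

A real assessment would be far more careful about each component, but the sketch shows why a high price-to-rent ratio alone is ambiguous: it may reflect low user costs rather than a bubble, or vice versa.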

 

The figure shows location- and segment-specific indices for Sydney. CBD refers to the Central Business District. Credit: Sofie R. Waltl


 

While we’ve already seen that price developments are very diverse, the same is also true for the relationship between prices and rents. Thus, simply looking at average price-to-rent ratios may miss the over-heating of a sub-market and its associated risks.

Credit: Sofie R. Waltl


Buying property is thus more delicate than urban legends about the safety of real estate investment suggest. Above all, developments in housing markets are diverse even within small geographical areas and one number alone can never appropriately reflect what is going on. A complete picture of the dynamics in housing markets is essential from the perspective of an investor as well as a policy-maker.

And in case you’re hoping for investment advice, here’s the only piece I can offer: just because everyone buys does not mean that YOU should go out and buy whatever you can afford. In fact, when everyone (including your friend with questionable financial literacy) decides to invest in real estate, it might be exactly the wrong moment. Never rely on house price indices only, but go out and collect as much information as possible. And don’t forget: location, location, location… 

 

 


The figures show quality-adjusted developments in the Sydney housing market, and are part of the results of my doctoral thesis Modelling housing markets: Issues in economic measurement at the University of Graz under the supervision of Robert J. Hill. I am very grateful for his valuable support and advice. Calculations are based on data provided by Australian Property Monitors. Results, which this article is based on, are published as Waltl (2016a) and Waltl (2016b). The part about price-to-rent ratios is currently under review at a major urban economic journal (here is a working paper version). My work has benefitted from funding from the Austrian National Bank Jubiläumsfondsprojekt 14947, the 2014 Council of the University of Graz JungforscherInnenfonds, and the Austrian Marshallplan Foundation Fellowship (UC Berkeley Program 2016/2017). The views presented here are solely my own and do not necessarily reflect those of any past, present or future employer or sponsor.

Money Illusion and Economic Literacy: An Experimental Approach

The growing use of experimental methods in economics provides an opportunity to explore aspects of human behaviour that have long been discussed, but for which there has been little observable data. One such phenomenon is ‘money illusion’ – a term coined by Irving Fisher in the late 1920s to describe people’s failure to perceive that money can expand or shrink in value.

At a time when the German mark had fallen to a fiftieth of its original value, Fisher recounted his conversation with an intelligent shopkeeper in Berlin who had sold him a shirt. Claiming that she had made a profit since she had bought the shirt for less than he paid for it, she failed to understand the impact of inflation. Since her accounts were in a fluctuating currency, what looked like a profit was only so in nominal terms: in real terms, she had suffered a loss.
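The shopkeeper’s mistake is easy to make precise: deflate both sides of the transaction by the price level prevailing at the time. The numbers below are hypothetical, chosen only to illustrate how a nominal profit can hide a real loss:

```python
# Hypothetical numbers illustrating Fisher's shopkeeper: a nominal profit
# can be a real loss once both amounts are deflated by the price level.

buy_price = 100.0        # marks paid for the shirt
sell_price = 150.0       # marks received later
price_level_buy = 1.0    # price level when the shirt was bought
price_level_sell = 2.0   # price level at the sale (100% inflation)

nominal_profit = sell_price - buy_price            # +50 marks: looks like a gain
real_buy = buy_price / price_level_buy             # 100 in base-period marks
real_sell = sell_price / price_level_sell          # 75 in base-period marks
real_profit = real_sell - real_buy                 # -25: actually a loss

print(f"nominal profit: {nominal_profit:+.0f} marks")
print(f"real profit:    {real_profit:+.0f} base-period marks")
```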

 

Photo/Credit: Nastco/iStock.com


 

Since the ‘rational expectations’ revolution in economics in the 1970s, money illusion has typically been regarded as a contradiction of the idea that people are rational, profit-maximising agents. Nobel laureate James Tobin, for example, said that ‘an economic theorist can, of course, commit no greater crime than to assume money illusion.’

But the small deviations from rationality revealed by recent research in behavioural economics suggest that it is no longer necessary to deny the existence of money illusion. People might suffer from it depending on whether they are looking at their economic environment in nominal or real terms. Evidence for the ‘framing effect’ shows that alternative representations of the same situation may lead to systematically different responses, since some options loom larger in one situation than in another.

Experimental research indicates that money illusion can have substantial effects at the level of the aggregate economy. For example, it might be profitable for rational firms to imitate the behaviour of naive firms suffering from money illusion. In that case, even a negligible amount of individual money illusion becomes highly significant, since it can be multiplied across the economy by the behaviour of otherwise rational agents.

Economists tend to argue that the threat of money illusion can easily be avoided by educating the public. But is there a certain level of economic literacy that will remove any aggregate effects of money illusion?

Certainly, central banks have been keen in recent years to promote economic literacy via various educational programmes. When Ben Bernanke was chairman of the US Federal Reserve, he made a strong case for economic literacy, claiming that it could deliver enhanced effectiveness of monetary policy, higher probability of achieving optimum outcomes, improved anchoring of inflation expectations and smoother functioning of financial markets.

The experimental method is a valuable tool for investigating whether a particular level of economic literacy acquired through economic education can lead to improved decision-making and alleviation of the aggregate effects of money illusion. That is what I have done in a laboratory experiment with nominal and real framings, and two different groups of participants – a well-educated group and a less well-educated group.

The experimental subjects were asked to take the role of firms and to select the correct profit-maximising price in an environment of nominal framings. In the middle of the experiment, the central bank announced a reduction in the money supply. In these circumstances, the logical step for the rational firm is to adjust prices downwards as long as others do the same.
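The expected contrast between the two groups can be sketched as a toy adjustment process (an illustration, not the actual experimental design): the nominal equilibrium price tracks the money supply, and each period a firm closes only a fraction of the gap to equilibrium, with money illusion corresponding to a smaller fraction:

```python
# Toy sketch of sluggish price adjustment after a money-supply cut.
# All parameters are assumptions made purely for illustration.

def simulate(adjustment_speed, periods=20):
    """Each period, the firm moves a fraction of the way to equilibrium."""
    money = 2.0
    price = money           # start at the old nominal equilibrium
    path = []
    for t in range(periods):
        if t == periods // 2:
            money = 1.0     # the central bank halves the money supply
        equilibrium = money # nominal equilibrium price tracks the money supply
        price += adjustment_speed * (equilibrium - price)
        path.append(price)
    return path

rational = simulate(adjustment_speed=0.9)   # near-immediate adjustment
illusion = simulate(adjustment_speed=0.2)   # money-illusion drag

print(f"rational firm, final price:   {rational[-1]:.3f}")
print(f"illusioned firm, final price: {illusion[-1]:.3f}")
```

In this caricature, the fast adjuster has essentially reached the new equilibrium by the end, while the firm with money-illusion drag still overprices; the aggregate convergence the experiment measures is the analogue of this gap.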

I expected the well-educated group of participants to adjust their prices downwards faster than the less well-educated group. Given their ability to avoid the misperceptions of money illusion, they should not be attracted by misleading profits at high nominal values but instead base their decision-making on real values.

What’s more, the well-educated participants should have no reason to doubt that others will adjust their prices downwards as well. Not only should these participants not expect others to suffer from money illusion, but they should also have no reason to assume that other participants will expect others to suffer from money illusion.

In the context of the experiment, wider dissemination of knowledge about the economy might ensure that participants can develop a better understanding of signals from the central bank. At the aggregate level, this should contribute to a faster process of convergence to the economy’s equilibrium, in which firms immediately adjust their prices in response to changing economic circumstances.

But my results indicate that price convergence even on the part of well-educated participants is still slow. So while economic literacy has the potential to enhance the effectiveness of monetary policy, money illusion remains a pervasive phenomenon. Further investigation of the effects of education with respect to money illusion is highly desirable.

Faster Progress for Everyone

Martin Chalfie is promoting preprint archives for biological research papers that will make new results and findings accessible to a significantly bigger audience much faster.

 

Credit: exdez/iStock.com


 

Important questions that kept cropping up during the 67th Lindau Nobel Laureate Meeting include what the future of research can and will look like and how the status quo can be improved. Besides the oft-mentioned political events and their influence on science, another major issue concerns an intrinsic problem: the publication machinery and the importance of the impact factor. Shortly before the meeting, a number of Nobel Laureates publicly criticised the current journal-ranking method. During the meeting, Martin Chalfie also expressed his view that publications should be assessed more on the basis of their factual quality and less on which journal they appear in. I asked him what he had in mind as an alternative and what steps, if any, he has taken. His solution is: ASAPbio.org – Accelerating Science and Publication in Biology.

ASAPbio is an advocacy group founded by Ron Vale. An initiative instigated by scientists for scientists, it aims to make new discoveries within the life sciences available to a broad audience much faster than previously possible. Chalfie helped launch the initiative in early 2016 together with Harold Varmus, Daniel Colón-Ramos and Jessica Polka, now the director of ASAPbio. “We wanted to develop a preprint archive for biological research. There has been something similar in physics for at least a quarter of a century.” As soon as researchers are ready to share their work and findings with the world, Chalfie continues, they can upload their articles to a preprint archive, where they can then be read and commented on by other scientists as well as by the general public. The largest preprint server for life science-related articles is bioRxiv.

ASAPbio promotes the use of open-access, centralised and comprehensive repositories for all the life sciences. “This changes the overall dynamics of the publication process,” Chalfie says. The conventional publication pathway looks quite different: a scientific paper is submitted to a suitable journal. In an initial step, one or more editors decide whether the paper is appropriate material for the journal in question. If the editors give the go-ahead, the paper is passed on to several experts in the field. They then form a picture of the work and can, if they deem it necessary, reject the paper as deficient or request further experiments. In such cases, the authors have several months to make the requested changes before a final decision is made, which can still be negative even after the suggested changes have been made. All in all, the decision-making process can take from several months to a year, and if the paper is ultimately rejected, the authors have to submit it afresh to another journal. As a result, not only do the authors lose valuable time, but so do the research community and the public at large, who have no access to the new findings during the decision-making process. “By contrast, preprint archives make new discoveries and research advances immediately available to everyone – whether scientists or students – and they do so free of charge,” Chalfie says, summarising the advantages.

Moreover, each paper is automatically assigned a definite submission date which the authors can refer to should a similar work be published soon afterwards.

However, Chalfie points out, “it’s not about publishing raw data at an early stage.” Instead, a manuscript should be uploaded to an archive platform at the same time as it is submitted to a journal. It is then revised in stages in response to feedback from the journal and comments submitted via the platform.

 

 

Martin Chalfie talking to young scientists during the 67th Lindau Nobel Laureate Meeting, Photo/Credit: Julia Nimke/Lindau Nobel Laureate Meetings

 

“During one of the first organisational meetings, we talked about how the established journals would be likely to react to such an initiative and these platforms. Fortunately, the major journals such as Science, Nature, the journals of professional societies and many others all support the idea of preprint archives and the general repository,” Chalfie explains. The journals have no problem with authors submitting their papers to them and uploading them to a platform simultaneously. Many journals even allow “joint submissions”, meaning that they ask authors whether they want to make their papers available on an archive server at the same time.

Another sign that this new pre-release system will catch on in the long term is the acceptance of such pre-archived work as a criterion for grants, the allocation of project funds and similar selection procedures. “The Howard Hughes Medical Institute, the NIH, the Wellcome Trust and many universities now consider papers in the preprint archive in their evaluation of applicants,” as Chalfie proudly relates.

Although the new preprint archives and the general repository for biological research are still in their infancy compared with their counterparts in physics, and have yet to be discovered by many scientists, they have already been acknowledged and accepted by major research institutes and renowned journals. Advocacy groups such as ASAPbio therefore offer an excellent opportunity to take the cumbersome publication process in the life sciences in a new direction and to focus once again on the actual quality of research work instead of mere impact factors.

From Copper Photocatalysts to Chemical Topology

When Jean-Pierre Sauvage started his own research lab, he focused on developing copper catalysts that could absorb light and use that energy to split water into hydrogen and oxygen gases. After characterising the shape of one of these catalysts, the focus of his research changed to that recognised by the 2016 Nobel Prize in chemistry: synthesising molecules with interlocking rings and knots.

The game-changing catalyst was a copper ion binding to the concave portions of two crescent-shaped phenanthroline molecules. Because of its binding geometry, the copper ion held the arcs in perpendicular planes. Sauvage realised that closing each crescent to form a loop would create a molecule with two interlocking rings, called a [2]catenane. “At that stage, we had to decide whether we would continue in the field of inorganic photochemistry, or be more adventurous and jump into a field we didn’t know so well,” Sauvage said. “We decided to jump.”

 

Jean-Pierre Sauvage during his lecture at the 67th Lindau Meeting, Picture/Credit: Christian Flemming/Lindau Nobel Laureate Meetings


The field less familiar to him at that time is called chemical topology, and it has foundations in mathematics and biological molecules. Topology is the study of infinitely deformable objects. Mathematicians classify topological knots as identical if they have the same number of loops and crossings, even if their shapes appear drastically different. Topological knots can also be found in biological structures. Some bacteria have a loop of DNA, and two interlocking rings of nucleic acid can appear as an intermediate during cell replication. In viruses that infect bacteria, intertwined cyclic proteins can provide rigidity to their outer shells.

In 1961, H. L. Frisch and E. Wasserman, at Bell Labs, connected topology to the chemical world, publishing ideas to synthesise molecules with interlocking rings and knots. Three years later, Profs. Schill and Lüttringhaus synthesised the first molecule with two interlocking rings, in an elegant, but lengthy, process that built each ring of the [2]catenane sequentially.

About twenty years later, Sauvage recognised that his copper catalyst pre-assembled the interlocking portion of the catenane, providing a fast and efficient route to the simplest molecular chain. In 1983, he, along with Christiane Dietrich-Buchecker and J.P. Kintzinger, synthesised a [2]catenane in two steps, compared to the 15 steps needed in the previous synthesis. Sauvage says the researchers knew their work was novel, but they partly hid it in the literature, publishing in a lesser-read journal and writing the article in French. Although the paper remains one of the few French papers of his career, the concept of templating catenane synthesis has become a standard method in the field.

 

A molecule with interlocking rings synthesised by Jean-Pierre Sauvage and Christiane Dietrich-Buchecker in 1983. Credit: Wikimedia Commons

Over the next decade, Sauvage and his group synthesised and characterised molecules with more complex topologies, including a doubly-interlocking catenane and a molecular trefoil knot with three loops and three crossings.

As the researchers continued to follow their interest in the challenge of making molecules with novel structures, they also developed an interest in molecular motion. In interlocking rings, for example, one ring can rotate around the other. With a reliable way to make a variety of interlocking molecules, researchers could then build new structures, experiment with ways to control the motion, and then convert that motion to work in molecular machines – advances achieved by Sauvage’s colleagues, co-laureates, and friends J. Fraser Stoddart and Bernard L. Feringa.

From the story of his research, Sauvage had four pieces of advice for the young scientists:

Novelty is the most important thing when choosing research, and he stressed the importance of working in a team, interacting with other scientists inside or outside your group. Moving to an unfamiliar field can be very beneficial, Sauvage said. And although that jump can be intimidating, he encouraged the young scientists to be self-confident: “Do not ask yourself whether you are good enough to tackle a new problem: Just do it!”

Chemists Respond to Climate Change with Sustainable Fuel and Chemical Production

Climate change is a common lecture topic at the Lindau Nobel Laureate Meetings. At the opening of the 67th Lindau Meeting, William E. Moerner presented the keynote speech prepared by Steven Chu, 1997 Nobel Laureate in physics and former U.S. Secretary of Energy. In his speech, Chu described how clean energy technologies provide an insurance policy against the societal risks of climate change.

At previous meetings, Nobel Laureates Mario Molina, Paul J. Crutzen, and F. Sherwood Rowland have detailed how greenhouse gases produced by burning fossil fuels alter atmospheric chemistry and warm the planet. Reducing greenhouse gases, particularly carbon dioxide emissions, is key to halting the planet’s warming. But instead of viewing carbon dioxide only as a problem, what happens if it is also part of a solution to climate change?

 

Science Breakfast Austria during the 67th Lindau Nobel Laureate Meeting, Photo/Credit: Julia Nimke/Lindau Nobel Laureate Meeting


 

Research discussed by Nobel Laureates and young scientists at the 67th Lindau Meeting included ways to use carbon dioxide as a renewable source of synthetic fuel and useful chemicals. Currently, fuels and chemicals come from refined and processed oil and natural gas. Producing these compounds from carbon dioxide captured from the atmosphere or factory emissions could be environmentally sustainable because carbon dioxide released during production or consumption is recycled to make new fuel or material. Sustainable and renewable feedstocks are one aspect of green chemistry, a key topic at this year’s meeting.

During a science breakfast hosted by the Austrian Federal Ministry of Science, Research, and Economy on Tuesday morning, Bernard L. Feringa, 2016 Nobel Laureate in Chemistry, outlined three challenges for carbon capture and utilisation: separating carbon dioxide from other gases, efficiently concentrating it, and catalytically converting the inert molecule to useful fuel and chemicals.

In addition to his Nobel-winning work on molecular machines, Feringa also studies catalysis. While working at Shell in the early 1980s, he developed lithium catalysts to reduce carbon dioxide. The project ended after a couple of years, however, when the researchers realised they would need all the lithium in the world just to make a reasonable amount of fuel.

 

Biswajit Mondal and Melissae Fellet during the Poster Session at the 67th Lindau Meeting, Credit: Christian Flemming/Lindau Nobel Laureate Meetings

Since then, researchers around the world have developed various electrochemical and photothermal catalysts that reduce carbon dioxide into compounds such as carbon monoxide, formic acid, ethylene and methane. Several young scientists attending the meeting are studying these catalysts, and two presented their work during the poster session.

Biswajit Mondal, at the Indian Association for the Cultivation of Science, studies the mechanism of iron-porphyrin electrocatalysts for carbon dioxide reduction. With an understanding of the precise molecular changes during every step of the reduction reaction, researchers can then tailor the catalyst structure to enhance the reaction efficiency.

Dayne F. Swearer, at Rice University, combines two reactive functions in one aluminium nanoparticle to unlock new catalytic mechanisms for known reactions. In his nanoparticles, the aluminium core absorbs light and generates an energy carrier called a plasmon, which can alter and enhance the activity of a metal catalyst on the outside of the nanoparticle. For example, a particle with a shell of copper oxide around its aluminium core reduces carbon dioxide to carbon monoxide faster and more efficiently than particles made of either material alone.

Back at the science breakfast, Feringa encouraged young scientists to investigate photoredox catalysts that reduce carbon dioxide using absorbed light energy. These catalysts can create a variety of reactive intermediates, including radical anions and cations, which could be used to add carbon dioxide to hydrocarbons. Such reactions provide renewable ways to make building blocks for plastics and other common polymers.

 

Young scientist Anna Eibel during the Science Breakfast, Credit: Julia Nimke/Lindau Nobel Laureate Meetings


Renewable routes to acrylic acid, the building block of acrylate polymers common in dental work, are interesting to Anna Eibel, a young scientist at the Graz University of Technology in Austria and a speaker at the science breakfast. She develops new molecules to induce acrylate polymerisation with light at longer wavelengths than the ultraviolet used now.

To really address carbon dioxide emissions, however, renewable routes to synthetic fuels such as methane and methanol are needed. In 1998, George Olah, the 1994 Nobel Laureate in Chemistry, talked about synthetic methanol production from carbon dioxide at the 48th Lindau Meeting, and the topic reappeared at the science breakfast this year.

Chemists are in a unique position to advance renewable fuels and chemicals, Feringa said. The main research questions in this area involve problems of catalysis, electrochemistry, photochemistry, material synthesis and chemical conversions. Feringa encouraged the young scientists to take opportunities to tackle these questions. “Of course you may contribute only a small step, but of course we have to do it. It is our duty to society […] to open opportunities for the future.”