Regulating Synthetic Biology When Its Risks Are Unknown

Synthetic biology uses tools from genetic engineering to design, create, and assemble living organisms with a particular function. Picture/Credit: artoleshko


In the fast-moving world of synthetic biology, discoveries are closely tied to their social implications. Synthetic biologists use tools from genetic engineering to design, create and assemble — sometimes from scratch — living organisms with a particular function. Commercial kits let high school students create bacteria that smell like bananas, a company uses engineered yeast to produce an anti-malaria drug on a large scale, and researchers have created a microbe that alters bee behaviour.

Although these applications range from the playful to the practical, the tools of synthetic biology might eventually be used in other ways. The potential environmental, public health and national security risks of an uncontrolled release of a synthetic organism are unknown, making it difficult to imagine ways to regulate the products of synthetic biology.

But that doesn’t stop researchers and social scientists from trying: About 120 people attended a session about the relationship between technology development and risk governance in synthetic biology on the morning of the third day of the American Association for the Advancement of Science annual meeting in mid-February. Political debates surrounding genetically modified organisms provide clues to potential sticking points in synthetic biology regulation, and scientists can contribute to policy discussions.


Risks unknown

During the session, Andrew Ellington, a biochemist at the University of Texas, Austin, described one of his projects that could have future national security implications. Ellington and his colleagues wanted to engineer a microbe to influence bee behaviour. If successful, the researchers imagined feeding Africanized bees customised probiotics to make them less aggressive or giving probiotics to pollinating bees that encourage foraging at pesticide-free plants to prevent colony collapse disorder.

Ellington and his team engineered microbes to produce L-dopa, a precursor of dopamine, a chemical signal in the brain that affects learning. Then they fed the engineered microbe to bees and gave the bees an electric shock while releasing a specific smell. Bees with the engineered microbe in their guts learned to anticipate the shock, extending their stingers at the smell slightly faster than normal bees. They also retained that learned association longer than normal bees.

Ellington said his team wants to test these dopamine-producing microbes in mice — and eventually humans — as a potential treatment for Parkinson’s disease. But considering testing behaviour-altering microbes in humans raises large social questions: Could these engineered organisms be a potential security threat? What are the health and environmental risks if those organisms become uncontrolled?

Existing regulatory pathways analyse risks from nanomaterials, genetically engineered crops and chemical weapons by considering the health and environmental impacts should these materials spread through the air, water or soil. These systems are effective when the hazards are well understood, the risks of exposure known and the materials can be controlled.

But traditional risk assessments don’t apply to synthetic biology, says Igor Linkov, who leads the Risk and Decision Science Team at the US Army Corps of Engineers. Synthetically engineered organisms are often designed to pass their genetic changes to their offspring. This means their uncontrolled release could spread genetic information that impacts other species.


Lessons from GMOs

Clues to important components of effective regulatory systems for synthetic biology can be found in the history of genetically modified organisms. In 2014, Jennifer Kuzma, an expert in the governance of emerging technologies at North Carolina State University, tracked the evolution of policies on GMO insects and plants and found that the pacing of the policy process left no time for public participation. Environmental and consumer groups responded by pressuring policymakers, forcing them to take actions that advanced the regulatory process into new phases.

But sometimes, hurdles in a regulatory process keep new products from reaching consumers. Thomas Bostick, Chief Operating Officer at Intrexon, shared the company’s experience trying to bring a genetically modified salmon to market in the US. The AquAdvantage salmon carries a growth hormone gene that allows it to reach market weight in half the time of conventional salmon, while eating 75% less feed and staying healthy without vaccines or antibiotics. This genetically engineered salmon can also be raised in inland facilities, eliminating the disease and parasites that spread into surrounding marine ecosystems from the open cages of typical salmon farms.

The larger of these genetically engineered salmon expresses a growth hormone that allows it to reach market size in half the time of its sister. Picture/credit: AquaBounty Technologies


But much of this salmon’s 20-year development has been tied up in regulatory battles, Bostick says. The US Food and Drug Administration (FDA) has approved the AquAdvantage salmon as safe to eat, but it cannot yet be sold in stores. In 2016 and 2017, a bill specifically requiring the AquAdvantage salmon to be labelled as a genetically engineered organism led the FDA to block imports of any genetically engineered salmon. The FDA is now waiting for the US Department of Agriculture to decide on labelling requirements before it can issue regulations allowing the salmon to be sold in stores. The regulatory system needs the flexibility to accommodate innovation, Bostick says.


Participatory and anticipatory governance

Kuzma offered several ways to improve risk assessments for synthetic biology applications. First, the policy process needs to operate from a middle ground that respects the knowledge and process of science while also acknowledging the value-based concerns of citizens. The system also needs a combination of participation and anticipation.

There are several models for rulemaking processes that could be useful for synthetic biology. Kuzma described one that she developed with Christopher Cummings of Nanyang Technological University in Singapore, which could help policymakers evaluate synthetic biology risks. The researchers interviewed 48 synthetic biology experts about case studies of four research projects: biomining using highly engineered microbes in situ, a “cyberplasm” for environmental detection, de-extinction of the passenger pigeon, and engineered microbes to fix nitrogen on non-legume plants.

For each case study, the interviewees scored how much information was available in each of eight categories: (1) human health risks, (2) environmental health risks, (3) unmanageability, (4) irreversibility, (5) the likelihood that a technology will enter the marketplace, (6) lack of human health benefits, (7) lack of environmental benefits and (8) anticipated level of public concern.

Then the researchers plotted each category’s average score for a given case study on an octagonal chart to show the relationships between the categories. An information deficit on environmental or human health risks might suggest more research in that area, while low health risks combined with high public concern might warrant an outreach campaign. The same tool could also be used early in the development of regulation to gauge the perspective of concerned citizens.
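The scoring-and-averaging step behind such a chart can be sketched in a few lines. The category names follow the eight listed above, but the scores, the threshold, and the function names here are purely illustrative assumptions, not data or code from the actual study:

```python
# Hypothetical sketch of the Kuzma/Cummings scoring step: each expert scores
# how much information is available per category; the averages form the eight
# axes of the octagonal (radar) chart, and low-information categories are
# flagged as candidates for further research. All numbers are illustrative.

CATEGORIES = [
    "human health risks", "environmental health risks", "unmanageability",
    "irreversibility", "marketplace likelihood", "lack of health benefits",
    "lack of environmental benefits", "public concern",
]

def average_scores(expert_scores):
    """expert_scores: one dict per expert, mapping category -> score."""
    return {
        cat: sum(s[cat] for s in expert_scores) / len(expert_scores)
        for cat in CATEGORIES
    }

def information_deficits(avg, threshold=2.0):
    """Categories whose average score falls below an (assumed) threshold."""
    return [cat for cat, score in avg.items() if score < threshold]
```

The averaged scores for one case study would then be drawn on the eight axes of a radar chart, for instance with matplotlib’s polar projection.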

Questions about how to regulate synthetic biology are not just for social scientists to wrestle with. Communicating the policy implications of research, along with the associated uncertainties, is part of responsible conduct for researchers worldwide, says the IAP, a global network of science academies, in a 2012 report. “There’s a role for the scientific community to be involved with discussions about implications, concerns, and governance from the conception of research to funding to execution,” says Katherine Bowman of the National Academies of Sciences, Engineering, and Medicine.

#LindauForLife: Leverage Lindau for a Long and Beloved Career

Young scientists at the Lindau Meeting. Photo/Credit: Christian Flemming/Lindau Nobel Laureate Meetings


Hi Lindau Nerds! If you are reading this, you are either heading to the Lindau Nobel Laureate Meeting (#LiNo18) or are a Lindau alum. Or perhaps you are merely a fan. I get it! I’m a Lindau Lover, too! There’s something so special about this annual conference of established and emerging Nerds, held every summer in the quaint town of Lindau, Germany. For 68 years, this charming hamlet has hosted Nobel Laureates (and a few Nobles too!) along with upwards of 500 young scientists from 80+ countries. It is a knowledge and networking jubilee, where nerds from the four corners of the Earth and at least three parallel universes get to engage with laureates, hear their wisdom, and exchange ideas to improve the human condition. It is truly #NerdHeaven!

But there is a problem here. Once you get a taste of Lindau, you want more. #NerdHeaven sticks with you – it burrows into the deepest part of your heart, and it inspires you to be brighter and better. It’s the culture of Lindau that has made the biggest and longest lasting impression on me, and it’s why, after having the privilege of attending three times, I still look forward to experiencing, participating in and partaking of this nexus of creativity.

So how to solve this lust for Lindau? As a Lindau alum, you should continue the leitmotif of Lindau – Educate, Inspire, Connect – throughout your career. When you do, you will help other scientists and engineers, you will be able to solve humanity’s grand challenges, and you can and will change the world. And in doing so, you will find intense professional satisfaction.

But to change the world and achieve career bliss, you have to start by recognising the power you have as a STEM-educated professional, understanding your value to employers and collaborators, and realising that your career options are limitless.

Indeed, scientists and engineers have a wealth of career potential – there are myriad and diverse jobs you can pursue, professional paths you can take, and organisations that covet, hire and pay people like you (handsomely!). But to access these career opportunities and land these jobs you have to think strategically about the value that you offer.

And as we move forward, endeavouring to advance your career, I want to offer you a few fundamentals to ponder. These are concepts that will help you design your own professional path and land the career and job of your own making.


  1. You, and only you, define and decide what your career will be. This is an extremely important mantra to always remember. It can be a challenge to realise that you are the one in the driver’s seat of your career and that you alone get to make the decisions about how your professional experience will play out. I know that in science, particularly early in your career, everyone seems to be doing the same thing – grad school, postdoc, research, publishing, presenting at conferences. You are all on the same road together, seemingly going towards the same goal. And yet, this is merely an illusion. Look to your left and right – your peers will take different roads to find professional bliss, and you will too. No one’s career looks the same. Furthermore, your career is up to you – it is not your PI’s career or your parent’s career, it is yours. So you get to make the rules and define success for yourself.


  2. You are a rock star. You are of enormous value to many, many ecosystems. Your career and job search are not limited by the discipline of your degree or the title of your department. You have extensive technical skills, but you also have talents in project management, business, communications, teambuilding, marketing and even negotiation. These hard and soft skills contribute to the extensive value you provide to any organisation lucky enough to get you as an employee. And being a rock star gives you even more choice of careers.


  3. You are a problem solver. This is the central component of your value. Know what kinds of problems you can solve and where those problems exist. Since the core of every job is to solve problems, you give yourself a competitive advantage in the job marketplace when you can articulate this critical information.


  4. You must understand and be able to articulate your brand. A brand is simply a promise of value, and yours is a promise to deliver excellence, dependability and expertise in whatever you do. Your brand consists of your STEM education and training and all of the skills and expertise you have acquired in pursuit of your degree. Communicating your brand to others via appropriate self-promotion channels is what will open doors to professional advancement.


  5. You must network. Networking is the most honourable enterprise you can undertake, because it is about crafting win-win alliances in which both parties provide value to each other in various ways over time. Networking is NOT me trying to take something from you; rather, it is about exploring what I can offer you and what I can do to inject value into your team. With this positive definition in mind, you can begin to appreciate that networking must be done all the time, to ensure you find collaborators with whom you can partner and for whom you can solve problems. The Lindau Alumni Network is just one of the many networks that can help you access hidden career opportunities.


The extent of the value that you offer and how you can leverage this to craft your dream career is one of the many topics we discussed in the first webinar for Lindau Alumni and #LiNo18 participants, as part of the Lindau Nobel Laureate Meetings Alumni Network Initiative. The webinar, ‘What Should I Do With My Career? Recognising Your Passion and Catalysing Your Potential’, took place on 22 March at 1700 CET. You can watch a recording of the full webinar below. 



We discussed how you, as a science-educated professional, have a lot more career opportunities than you realise. The key is being able to identify and articulate your unique value and problem-solving abilities to diverse decision-makers and build and nurture strong networks to create career opportunities for you and those around you. When you do this, everyone wins. Even science wins!

With tens of thousands of Lindau Alumni worldwide, the Lindau Team is taking on an ambitious scheme to Educate, Inspire and Connect alumni, to enable your success and to assist you with your career explorations. Indeed, you should think of Lindau as your strategic career partner, and over the next year, you’ll see some very exciting projects and events launched, all designed to help you be a better you. So stay tuned! More webinars are on the way, as are career advice articles and blogs, and a special career development presentation during the European Science Open Forum (ESOF) this July in Toulouse. I am very excited and honoured to be a part of this initiative!

You’ll be hearing more from me and my colleagues in the Lindau Alumni Network. And we want to hear from you! How can we help you achieve professional victory? In the meantime, remember that you move mountains – your energy and dedication to improving the world, which allowed you to experience Lindau in the first place, is what will enable us to address our Grand Challenges, improve the human condition, and peer through the mist of knowledge to know our place in time, space and history. I can’t wait to see what your future has in store for you, and the Lindau Alumni Network will be right there with you! #LindauForLife


Author’s Note: Some of these concepts have appeared in other works by the author, including her book, Networking for Nerds, career columns in Physics Today and Nature Astronomy, and other publications.



Read More

Harnessing the Power of Technology for Health Care


Smartphone-based apps are driving a revolution in health care. However, we are likely at the beginning of a long road: many approaches lack validation, and excessive use of technology itself has detrimental impacts on mental well-being.

The ubiquity of smartphones and social media is a compelling reason for their use to monitor and even improve mental health. If everybody is already carrying around a highly advanced piece of technology in their pockets, then why not harness that potential? The sophisticated sensors with which such devices are equipped mean that they can be used to continuously and unobtrusively gather a wealth of information without any input from the users themselves, so-called ‘passive sensing’. This approach predates the invention of smartphones and is already widely used to track sleep and physical activity, for instance. Its very unobtrusiveness is what makes it so promising as a tool to track mental health, an area where sensitivity and inconspicuousness are often paramount.

In currently used mental health applications, such passive sensing often involves capturing data on location, physical activity, and call and text activity. These data are then interpreted by the software to determine whether the user is showing signs of depression, loneliness or stress. Initial studies have shown that this approach can be feasible and suitable for assessing mental health, and that it compares favourably with traditional approaches.

Yet a significant issue for passive sensing using smartphones is data security. Not only must all data be securely transmitted or encrypted, but, of equal importance, the use of personal data by third parties is a concern that must be addressed. Further, it remains unclear how best to combine passive sensing with care and treatment by mental health professionals.
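The general shape of such a passive-sensing pipeline can be sketched as follows. Everything here is a hypothetical illustration: the feature names, the trivially rule-based flag, and the thresholds are assumptions, not how any real clinical app works, and validated clinical models would be far more sophisticated:

```python
from dataclasses import dataclass
from math import hypot

# Hypothetical sketch of a passive-sensing pipeline: raw smartphone signals
# (location samples, call/text counts) are reduced to daily summary features,
# and a deliberately simplistic rule flags days of low mobility AND low social
# contact. Thresholds and field names are purely illustrative assumptions.

@dataclass
class DaySample:
    locations: list        # (x, y) positions in km, sampled through the day
    outgoing_calls: int
    outgoing_texts: int

def daily_features(day):
    # Total distance moved between consecutive samples approximates mobility.
    dist = sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(day.locations, day.locations[1:]))
    social = day.outgoing_calls + day.outgoing_texts
    return {"mobility_km": dist, "social_contacts": social}

def flag_low_activity(features, mobility_min=1.0, social_min=3):
    """Flag days combining low mobility with low social contact."""
    return (features["mobility_km"] < mobility_min
            and features["social_contacts"] < social_min)
```

The point of the sketch is the division of labour: unobtrusive collection, feature extraction, then interpretation, with the interpretation step being exactly where clinical validation (and data-security safeguards) would have to enter.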

Another passive approach to monitoring mental health involves the use of machine-learning algorithms that scour a person’s social media posts for language and patterns that may indicate depression or that a person is contemplating harming themselves. However, there are significant concerns connected with this approach. For one, it remains questionable how companies like Facebook use the data that they glean. Indeed, the company’s plans appear to have already fallen foul of strict EU laws on online privacy, and last year Facebook, while confirming that it was evaluating the emotional state of users, was forced to deny that it was passing such information on to third parties for advertising purposes. Moreover, Facebook remains reticent about the exact methods it uses to flag worrying online behaviour and about how its algorithms have been validated. The issue remains fraught, to say the least.

So-called ‘digital therapies’, applications that monitor a user’s mood on a daily basis and suggest activities that developers claim promote mental well-being, represent a more active approach to using technology to improve mental health. Recent years have seen a striking proliferation of such resources, and thousands are now available. Indeed, this huge choice coupled with the fact that many seem to lack any rigorous scientific validation has led the chair of the American Psychiatric Association’s Smartphone App Evaluation Task Force to describe the situation as “…like the Wild West of health care”. A recent meta-analysis sought to bring some clarity to this issue and to sort the wheat from the chaff. The authors of this study analysed data from 18 randomised controlled trials and concluded that there were indeed significant positive effects associated with these tools.


Digital therapies monitor the user’s mood and suggest actions to promote well-being. Photo/Credit: Martin Dimitrov

In another randomised controlled trial, currently ongoing in Spain, the app iFightDepression is being tested. It was developed in an initiative of the European Alliance Against Depression with the aim of helping “individuals to self-manage their symptoms of depression and to promote recovery.” The tool, which is based on the principles of cognitive-behavioural therapy, is guided: while built around self-management, it is also intended that users are supported by doctors and trained mental health professionals.

Aside from considerations of data security and validation, another major concern related to the use of technology for mental health relates to the potentially corrosive effects of indiscriminate and immoderate use of technology and social media on a person’s well-being. Even social media giant Facebook has now admitted that users who spend time “passively consuming information” are likely to feel the worse for it. Salesforce CEO Marc Benioff has called for technology and social media to be regulated like the tobacco industry as he believes they are similarly addictive and also pose risks to mental health, while the influential philanthropist George Soros has described social media companies as a “menace” whose “days are numbered”. Some researchers have even stridently claimed that the massive spike in depression in U.S. teenagers seen from 2012 onwards can be attributed primarily to the explosion in smartphone use.

There are some obvious challenges on the road ahead. For instance, if symptoms are at least partially caused by technology, is a technology-based solution really the right one? Also, how do we ensure that sensitive personal data does not fall into the hands of bad actors or is used in ways that compromise our right to privacy? Last but not least, the rampant growth in this sector means that efforts to evaluate the large number of different apps and approaches have not kept up, and potential users are faced with a huge number of products of dubious effectiveness. Like the Wild West, the technology-based approach to monitoring and improving mental health may be full of opportunity, but the lack of regulation and of a basis in hard scientific evidence also represents a danger.

Rethinking Authorship

How to forget about author positions and better acknowledge contributions


The true winners of current scientific publishing practices are those in the key author positions.


Being a scientist is about more than the pipette in your hand or the script on your computer. As an early career scientist, I want to share my opinion on a central policy in our community, which I think needs major revision. I will start with an overheard conversation (names changed) at a high-profile conference:

“If Alice [the PI] gets the last authorship on the current story we should at least get shared-last for the other one. For her it doesn’t matter anyway, and we did most of the supervision. It is more important for us. Bob [the PhD student] will then easily get his two first authorships and for Dave [a postdoc], well there is always room in the middle…”

Authorships and positions are important for scientists, since they serve as the key currency in an academic career, yet something is odd about this kind of bargaining in an academic environment. The convention in biology is to annotate the importance of individual contributions through the order of the author list. Typically, the first author is responsible for most of the experiments. The senior/last authors are usually supposed to supervise and coordinate the project and to ensure the quality of the reported work; they therefore carry an overarching responsibility for scientific accuracy and for the validity of the methods, analysis and conclusions. Corresponding authors, who are typically at the first or last positions of an author list, are responsible for communication between editors, authors and readers. Contributing authors in the “middle” of the list are supposed to be ordered according to their relative contributions. However, talking to fellow scientists suggests that authorship policy depends very much on lab culture and does not follow strict criteria. You may hear of labs where early postdocs can receive last and corresponding authorships, labs where even senior postdocs will never get this prominent position, groups in which people who “need to get a paper” are granted positions (guest authorships), or groups in which contributions – for instance those further in the past – are neglected (ghost authorships).

In 2011, 89% of all publications indexed in Web of Science had more than one author. In fact, ambitious long-term projects that tackle complex scientific questions may rely on dozens or hundreds of scientists investing years of research. Impressive examples of such highly collaborative long-term projects are the detections of gravitational waves and of the Higgs boson, each building on decades of research, with over 1,000 and 2,500 authors, respectively. These papers provide alphabetical author lists. In biology, alphabetical author lists were also used, at least partially (in supplemental author lists), in huge collaborations such as the Human Genome Project. In biology, however, this is the exception; more commonly the convention is to annotate the importance of individual contributions through the order of the author list, with the last and first positions indicating the main contributors. Thus, only a few researchers receive authorships at the key positions. Alternatively, a ‘big story’ may be broken down into smaller pieces just for the sake of multiplying the key positions rather than serving scientific progress and dissemination.


Shifting focus to contribution sections

An important question is how robust, reliable and fair our authorship policy is when it comes to acknowledging input to complex projects that are often the work of many contributors. Previous studies have assessed this aspect in detail, but despite major criticism, little has changed in the publication practices of biology. In a recent article, Munafò and Davey Smith discussed how triangulation, the validation of a result by several entirely independent approaches, should become good scientific practice to increase the accuracy of published data and the validity of claimed conclusions. The authors concluded that adding more validations will require rethinking how contributions are assigned, and suggested that, in contrast to the common linear ranking, a “contributorship” model should be the core of evaluation.

My suggestion is to consider ordering authors alphabetically or to remove author names completely from the header of the paper. Instead, author contributions should be described in detail in the contribution section. The aim is to shift the focus from “What is your position on the paper?” or “How many first/last authorships do you have?” to the question “What have you done?”.
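The proposed shift can be illustrated with a minimal data sketch, reusing the (fictional) scientists from the overheard conversation above. The role names are illustrative, loosely modelled on CRediT-style contribution taxonomies, and nothing here reflects an existing system:

```python
# Minimal sketch of the proposal: authors are listed alphabetically, implying
# no ranking, and the contribution section, not the author order, records who
# did what. Names are the fictional scientists from the conversation above;
# role names are illustrative, CRediT-style assumptions.

authors = sorted(["Dave", "Alice", "Bob"])  # alphabetical, no ranking implied

contributions = {
    "Alice": ["supervision", "funding acquisition"],
    "Bob": ["experiments", "data analysis", "writing"],
    "Dave": ["methodology", "writing"],
}

def contributors_for(role):
    """Answer 'what have you done?' directly from the contribution record."""
    return sorted(name for name, roles in contributions.items() if role in roles)
```

Under such a record, an evaluator asks which roles a person held across papers rather than counting first and last positions.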

Numerous publications have shown that a simple quantitative metric, such as a linear contribution ranking, is open to exploitation and corruption; over time, such a metric ceases to be a good measure. The introduction of detailed contribution sections in publications was a first step towards improving the transparency of scientific publications. This section should be the core of any paper and, in assigning individual contributions, it ought to provide a detailed link between the presented data and ideas and the authors. Removing the authorship order as an ‘easy’ proxy for contribution will shift the focus entirely to the contribution section, making evaluation more transparent.

The issue of unethical practices in authorship assignment has been recognised, analysed and critically debated for decades. Emphasising the contribution section may help to prevent scientific misconduct such as honorary, ghost or guest authorships. If everyone had to take detailed responsibility for a specific aspect of the paper, and this section were the cornerstone of the evaluation, vague contributions would be more obvious and of less value, and questionable acknowledgements such as the “curious case of Dr. Schatten” might be prevented.

For individual contributions to be properly reflected in the author order of a publication, their value would have to be assigned objectively. However, the significance of a contribution can be a matter of perspective. In fact, the impact of scientific discoveries is evaluated by the scientific community over a timeframe of years; the same should be true of the evaluation of author contributions. Omitting the author order as a proxy for contributions will shift the focus to actual contributions instead of a subjective ranking of importance; in other words, it will shift the focus to the ‘raw data’ instead of the interpretations.


The bargaining for contributions. Credit: The Upturned Microscope by Nik Papageorgiou


Prioritising scientific progress

A detailed contribution section that references only the data presented in the manuscript could lead to the exclusion of scientists who provided negative results. Because science by definition progresses into the realm of uncertain outcomes, positive results are an incomplete measure of performance, and negative results are very important contributions. Adding a mandatory section naming the researchers who performed the inconclusive experiments may prevent others from wasting resources stumbling over the same pitfalls.

The strong dependency of individual scientific careers on first and last authorships currently leads to situations in which scientific progress is not the main concern of scientists. Researchers may save their ‘own’ ideas for their ‘first-author project’. If only the contribution counts, that will provide a strong incentive to contribute as soon and for as long as it benefits the research, promoting both scientific collaboration and progress.


Calling for action

It is certainly not possible to implement a perfect system, but it should always be our aim to improve on critical policies in the scientific community and to re-evaluate concepts and strategies in the light of the changing scientific zeitgeist and new evidence. It would be interesting to perform an empirical analysis of the positive and negative long- and short-term effects comparing journals and disciplines that handle authorships differently. What is the perception of the community? Has it affected the frequency of scientific misconduct? Does it lead to a more objective evaluation of individual scientists? Does it promote collaborative efforts and how does it affect the motivation of young scientists to stay in academia?

In rethinking how to assign authorships, we must be aware of the biases and caveats that may accompany new policies, for instance the bias of self-reported contributions. A study published in CMAJ found a strong discrepancy between contributions as reported by co-authors and by corresponding authors; strikingly, authors’ self-assessments even differed when they were surveyed twice. The design of the contribution disclosure section may therefore affect this bias and needs careful consideration. Another aspect is that dedicated first and last authorships currently mark scientists as significant contributors independently of their previous work. Removing obvious author positioning may bias credit towards the already well-known scientists on a paper, a phenomenon known as the “Matthew effect”.

Let us, in the true scientific spirit, be guided by an open discussion and thorough evaluation. If changes such as ending the race for the two important positions on a paper result in a net improvement, we, as scientists, should break with convention. Removing the authorship order and promoting a detailed contribution section as the only metric for evaluating contributions may be one step in the right direction. In my view, if no one can simply rely on an author position any more, this only puts people with questionable contributions at a disadvantage. If evidence suggests a better strategy, change is an imperative for the scientific community.



van Dijk, D., Manor, O. & Carey, L. B. Publication metrics and success on the academic job market. Current Biology 24, R516–R517 (2014).
Baerlocher, M. O., Newton, M., Gautam, T., Tomlinson, G. & Detsky, A. S. The meaning of author order in medical research. Journal of Investigative Medicine 55, 174–180 (2007).
Gaeta, T. J. Authorship: “law” and order. Academic Emergency Medicine 6, 297–301 (1999).
Shapiro, D. W., Wenger, N. S. & Shapiro, M. F. The contributions of authors to multiauthored biomedical research papers. JAMA 271, 438–442 (1994).
McKneally, M. Put my name on that paper: reflections on the ethics of authorship. Journal of Thoracic and Cardiovascular Surgery 131, 517–519 (2006).
Waltman, L. An empirical analysis of the use of alphabetical authorship in scientific publishing. Journal of Informetrics 6, 700–711 (2012).
Strange, K. Authorship: why not just toss a coin? American Journal of Physiology – Cell Physiology 295, C567–C575 (2008).
Lawrence, P. A. Rank injustice. Nature 415, 835–836 (2002).
Venkatraman, V. Conventions of scientific authorship. Science (2010).
Munafò, M. R. & Davey Smith, G. Robust research needs many lines of evidence. Nature 553, 399–401 (2018).
Smaldino, P. E. & McElreath, R. The natural selection of bad science. Royal Society Open Science 3, 160384 (2016).
Campbell, D. Assessing the impact of planned social change. Journal of MultiDisciplinary Evaluation 7, 3–43 (2010).
Goodhart, C. A. E. In Monetary Theory and Practice: The UK Experience, pp. 91–121 (Macmillan Education UK, London, 1984).
Lucas, R. E. Econometric policy evaluation: a critique. Carnegie-Rochester Conference Series on Public Policy 1, 19–46 (1976).
Rennie, D. et al. When authorship fails. JAMA 278, 579 (1997).
Bennett, D. M. & Taylor, D. M. Unethical practices in authorship of scientific papers. Emergency Medicine Australasia 15, 263–270 (2003).
Logan, J. M., Bean, S. B., Myers, A. E. & Lozano, S. Author contributions to ecological publications: what does it mean to be an author in modern ecological research? PLoS ONE 12, e0179956 (2017).
Ilakovac, V., Fister, K., Marusic, M. & Marusic, A. Reliability of disclosure forms of authors’ contributions. CMAJ 176, 41–46 (2007).
Marusić, A., Bates, T., Anić, A. & Marusić, M. How the structure of contribution disclosure statements affects validity of authorship: a randomized study in a general medical journal. Current Medical Research and Opinion 22, 1035–1044 (2006).
Merton, R. K. The Matthew effect in science. Science 159, 56–63 (1968).


Read More

“The quality of students has improved enormously.”

Edmond Fischer during the 61st Lindau Meeting. Picture/Credit: Lindau Nobel Laureate Meetings


On the occasion of the 65th Lindau Nobel Laureate Meeting in 2015, science historian Ralph Burmester spoke to Nobel Laureate Edmond Fischer about his first Lindau experience and the development of the Lindau Meeting since the early 1990s. This interview is part of Burmester’s book ‘Science at First Hand – 65 years Lindau Nobel Laureate Meetings’.


Ralph Burmester: What did you expect when you first came to Lindau in 1993?

Edmond Fischer: I remember well the first time I ever heard of Lindau. It must have been forty or fifty years ago; I was flying to Europe on TWA, and seated behind me was George Wald. While we were chatting together, he told me he was going to a place called Lindau, on Lake Constance, where Nobel Laureates would be giving lectures to many students. And I thought: what an incredible experience it must be for young researchers to hear some of the foremost scientists discussing their work in such an informal setting. What a rewarding experience it must be for the laureates to have this opportunity to communicate with very bright students from all over the world. So it’s no wonder that, when I was in Stockholm in 1992 for the Nobel Award Ceremony and was invited by Count Lennart and Countess Sonja to attend the Lindau Meeting, I accepted with enthusiasm.


How did you perceive the Nobel Laureate Meetings personally?

I went to Lindau for the first time with my wife Bev in 1993, and the meeting was all that we had expected, and more. We were overwhelmed by the gracious and friendly way we were received. We were all lodged at the stylish and charming old Bad Schachen Hotel with its lovely lakeside garden, and we often walked together along the lake to the Inselhalle where the meetings were held. Several of our friends were there, and we met many other laureates whom we knew only by name. The opening ceremony was both solemn and whimsical, with the display of extravagant hats by Countess Sonja, and the talks and other events, including the Friday trip to Mainau, were outstanding.


Which elements of these meetings do you hold so dear that they make you return every once in a while?

Meeting many friends, both from Lindau and fellow laureates. Having an opportunity of encountering recent laureates whom I didn’t know and listening to their superb presentations. And, of course, the prospect of meeting and speaking with students from all over the world. The meetings have been planned for them, for the students, not for the laureates.


In your opinion, which dimension of these meetings is more beneficial, the scientific or the social one?

Undoubtedly, their scientific contribution. Social occasions are obviously very pleasant, because they allow one to interact with people and provide some needed relaxation amid very intense activities, but they are secondary to the mission of the Lindau Meetings which is to inspire, motivate and connect.


Edmond Fischer’s Sketch of Science. Picture/Credit: Volker Steger/Lindau Nobel Laureate Meetings

What kind of topics are you discussing with young researchers?

Obviously, topics related to one’s field of expertise. The students come to you after having heard your talk and realise that some of the material you covered is relevant to their own research project. And those already involved in scientific research are eager to tell you what they are doing.


Since the interdisciplinary Jubilee Meeting, the scientific standard is reported to have improved considerably, thanks to the organising committee, and the meeting has also become much more international. How have you perceived this development? Has it been to your liking?

Yes, indeed. The quality of students has improved enormously. I remember well, 20 years ago, meeting a bunch of students who were not even in science. They came to Lindau to have fun, to camp with friends. Some didn’t attend any lectures or group discussions. Lindau was barely known at that time. Universities and most of their faculty had never heard of it and there was no incentive for them to suggest students or write letters of recommendation on their behalf. Today, in contrast, the Lindau Meetings are known throughout the world and there is almost a competition among institutions to have their students admitted, and they feel highly honoured when this occurs.


This interview is part of Ralph Burmester’s book ‘Science at First Hand – 65 years Lindau Nobel Laureate Meetings’

How do you like the interdisciplinary meetings, which have been held every five years since then?

Very much. It is an occasion to learn what is going on and what is new in different fields of science, and to meet the friends we have in those other disciplines. In fact, those are my preferred meetings.


What – in your eyes – are the factors that contribute to making the Lindau Nobel Laureate Meetings a true success?

Superb lectures, the beauty of Lindau and Bad Schachen, and Mainau, and the warmth, kindness and friendliness with which we are received.


What is the ‘Spirit of Lindau’ to you?

I now feel as if I were part of the Lindau family.


What are your hopes and expectations for the future of the Lindau Nobel Laureate Meetings?

They can only increase. It’s like an opera: it takes years before everything runs to perfection. In my opinion, under the guidance of Sonja and Bettina, the meetings are now running flawlessly and with enormous efficiency. They run like a very well-oiled machine.

Furthermore, the quality and dedication of the students is unprecedented.



Visualising the Genome’s 3D Code

Zur deutschen Version

The genetic code is a sequence of letters spelling instructions for a cell’s normal growth, repair and daily housekeeping. Now, evidence is growing for a second code contained in DNA’s tangled structure. The location and packing density of nucleic acids may control which genetic instructions are accessible and active at any given time. Disrupted genome structure could contribute to diseases such as cancer and physical deformities.

To fit inside a cell, DNA performs an incredible contortionist feat, squeezing two metres of material into a nucleus only a few micrometres wide. DNA compacts itself by first wrapping around histone proteins, forming a chain of nucleosomes that looks like beads on a string. Nucleosomes then coil into chromatin fibres that loop and tangle like a bowl of noodles.

To reveal the structural genetic code, researchers examine chromatin from its sequence of nucleotides to the organisation of an entire genome. As they develop microscopy techniques to better visualise the details of chromatin structure, even in living cells, they’re better able to explore how structural changes relate to gene expression and cell function. These developing pictures of chromatin structure are providing clues to some of the largest questions in genome biology.


Chromatin compartments

A prevailing theory about chromatin structure is that nucleosomes coil into 30 nm fibres, which aggregate to form structures of increasing width, eventually forming chromosomes. The evidence for this comes from observing 30 nm and 120 nm wide fibres formed by DNA and nucleosomes purified from cells.

A team led by Clodagh O’Shea at the Salk Institute for Biological Studies wondered what chromatin looked like in intact cells. In 2017, the researchers developed a method to visualise chromatin in intact human cells that were resting or dividing. They coated the cells’ DNA with a material that absorbs osmium ions, enabling the nucleic acid to scatter an electron beam more strongly and thus appear in an electron micrograph. Next, they used an advanced electron microscopy technique that tilts samples in an electron beam and provides structural information in 3D. The researchers noticed that chromatin formed a semi-flexible chain, 5 to 24 nm wide, that was densely packed in some parts of the nucleus and loosely packed in others.


New method to visualise chromatin organisation in 3D within a cell nucleus (purple): chromatin is coated with a metal cast and imaged using electron microscopy (EM). Front block: illustration of chromatin organisation; middle block: EM image; rear block: contour lines of chromatin density from sparse (cyan and green) to dense (orange and red). Credit: Salk Institute

“We show that chromatin does not need to form discrete higher-order structures to fit in the nucleus,” said O’Shea. “It’s the packing density that could change and limit the accessibility of chromatin, providing a local and global structural basis through which different combinations of DNA sequences, nucleosome variations and modifications could be integrated in the nucleus to exquisitely fine-tune the functional activity and accessibility of our genomes.”

Along with packing density, location is another component of chromatin structural organisation. Researchers have known for three decades that chromatin forms loops, drawing genes closer to the sequences that regulate their expression. Biologist Job Dekker, at the University of Massachusetts Medical School in Worcester, and his colleagues have developed several molecular biology-based techniques to identify neighbouring sections of chromatin 200,000 to one million bases long. One of these techniques, called Hi-C, maps chromatin structure using its sequence.

In Hi-C, researchers first chemically crosslink the nucleic acid to join portions of chromatin that are near each other. Then they use enzymes to cut the crosslinked chromatin, label the dangling ends with a modified nucleotide, and reconnect only crosslinked fragments. Finally, the researchers isolate the chromatin fragments, sequence them, and match the sequences to their position in a cell’s whole genome.
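The bookkeeping at the end of a Hi-C experiment boils down to counting how often pairs of genomic positions were found crosslinked together. The sketch below shows that final counting step only; the function name, bin size and toy coordinates are illustrative, and a real Hi-C pipeline would also involve read alignment, filtering and normalisation.

```python
from collections import Counter

BIN_SIZE = 1_000_000  # group genomic positions into 1 Mb bins (a common Hi-C resolution)

def contact_matrix(read_pairs, genome_length):
    """Count Hi-C contacts between genomic bins.

    read_pairs: iterable of (pos_a, pos_b) tuples, the mapped genomic
    positions of the two ends of each crosslinked, re-ligated fragment.
    Returns a symmetric dict-of-Counters keyed by bin index.
    """
    n_bins = genome_length // BIN_SIZE + 1
    matrix = {i: Counter() for i in range(n_bins)}
    for pos_a, pos_b in read_pairs:
        i, j = pos_a // BIN_SIZE, pos_b // BIN_SIZE
        matrix[i][j] += 1
        matrix[j][i] += 1  # a contact between i and j is also one between j and i
    return matrix

# Toy example: three fragment pairs on a 5 Mb genome
pairs = [(100_000, 1_200_000), (150_000, 1_100_000), (3_000_000, 3_100_000)]
m = contact_matrix(pairs, 5_000_000)
print(m[0][1])  # contacts between bin 0 and bin 1 -> 2
```

Dense blocks along the diagonal of such a matrix are what show up as TADs in real Hi-C maps.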

In 2012, Bing Ren, at the University of California, San Diego School of Medicine, and colleagues used Hi-C to identify regions of chromatin they called topologically associating domains (TADs). Genes within the same TAD interact with each other more than with genes in other TADs, and domains undergoing active transcription occupy different locations in a nucleus than quiet domains. Altered sequences within a TAD can lead to cancer and malformed limbs in mice.

The basic unit of a TAD is thought to be loops of chromatin pulled through a protein anchor. Advanced computer models of chromatin folding recreate chromatin interactions observed using Hi-C when they incorporate loop formation. But genome scientists still aren’t sure which proteins help form the loops. Answering that question addresses a basic property of DNA folding and could point to a cellular mechanism for disease through mutations in a loop anchor protein.


Super resolution microscopy

Advanced optical microscopy techniques, based on a method recognised by the 2014 Nobel Prize in Chemistry, are also providing information about how regions of chromatin tens of bases long could influence cell function. Super-resolution fluorescence microscopy enhances the resolution of light microscopes beyond the 300-nm diffraction limit. This technique uses a pulse of light to excite fluorescent molecules, and then applies various tricks to suppress light shining from those molecules not centred in the path of the excitation beam. The result is the ability to image a single fluorescent molecule.

Biological molecules, however, can carry many fluorescent labels, making it difficult to localise a single molecule. Using fluorescent labels that switch on and off, researchers activate and deactivate fluorescent molecules in specific regions at specific times. Then they stitch the images together to capture the locations of all the fluorescent tags.
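The statistical trick behind single-molecule localisation can be illustrated numerically: each detected photon from one isolated, switched-on fluorophore lands at a diffraction-blurred position, but averaging many photons pins down the emitter far more precisely than the diffraction limit. This is a toy model; the numbers (150 nm spot width, 1,000 photons) are illustrative, not taken from a specific experiment.

```python
import random
import statistics

random.seed(1)  # deterministic toy example

TRUE_POS_NM = 1000.0  # actual emitter position along one axis, in nanometres
SIGMA_NM = 150.0      # width of the diffraction-limited spot

# Each photon is detected at a position blurred by the microscope's point
# spread function, modelled here as a Gaussian around the true position.
photons = [random.gauss(TRUE_POS_NM, SIGMA_NM) for _ in range(1000)]

# Averaging N photon positions localises the emitter with precision
# ~ sigma / sqrt(N), well below the ~300 nm diffraction limit.
estimate = statistics.fmean(photons)
precision = SIGMA_NM / len(photons) ** 0.5

print(round(precision, 1))  # ~4.7 nm
```

This is why the on/off switching matters: the averaging only works if each blurred spot contains a single active emitter at a time.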

Xiaowei Zhuang, at Harvard University, and colleagues used super-resolution microscopy to follow how chromatin packing changes with its epigenetic modifications. Their method provided images on a scale of kilobases to megabases, a resolution between that of pure sequence information and the large-scale interactions accessible through Hi-C; gene regulation and transcription happen on this scale. The technique also offers the potential of imaging nanometre-scale structures in live cells.


Structural dictionary

Using a variety of methods to capture static and dynamic cellular changes, researchers around the world are working to write a dictionary of the structural genetic code throughout space and time. The 4D Nucleome Network, funded by the National Institutes of Health, and the 4D Genome Project, funded by the European Research Council, are identifying a vocabulary of DNA structural elements and relating how that structure impacts gene expression. They’re also curious about how chromatin structure changes over the course of normal development as well as in diseases such as cancer and premature aging. With many basic questions outstanding, much remains to be discovered along the way.

Cryptocurrencies and the Blockchain Technology


During the late 1990s, investors were eager to invest in any company with an Internet-related name or a “.com” suffix. Today, the word “blockchain” has a similar effect. Like the Internet, blockchains are an open-source technology that becomes increasingly valuable as more people use it, due to what economists call “the network effect”. Blockchains allow digital information to be transferred from one individual to another without an intermediary. Bitcoin was the first use of blockchain technology. However, volatility, transaction fees and an uncertain legal framework have stalled Bitcoin’s widespread adoption.

Bitcoin’s creator, Satoshi Nakamoto, combined several ideas from game theory and information science to build the system. The basic idea for blockchain technology originated with two cryptographers, Stuart Haber and Scott Stornetta, whose research focused on how to chronologically link a list of transactions. Today, when people refer to a blockchain, they mean a distributed database that keeps track of data. The data that the Bitcoin blockchain tracks is financial: users can send accounting units that store value from one account to another without intermediaries. Since the Bitcoin blockchain transfers financial data and relies on cryptography, its accounting units are referred to as cryptocurrencies. They are stored in digital wallets, which are like bank accounts.

As a cryptocurrency, Bitcoin was designed to be a store of value and a payment system combined in one. Bitcoin has a fixed supply capped at 21 million coins, and the rate at which new coins are issued is programmed to halve about every four years. Since Bitcoin was launched in 2009, the transactions on the network have doubled every year and the value of Bitcoin has increased by 100,000 percent. The current market price of approximately $11,500 is the result of the cryptocurrency’s limited supply and increasing demand.
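The 21 million cap follows directly from the issuance schedule: the block reward starts at 50 bitcoin and is cut in half, with integer rounding in the protocol’s smallest unit (the satoshi), every 210,000 blocks, roughly every four years. Summing the rewards over all halvings gives a finite total, as this short sketch shows:

```python
SATOSHI_PER_BTC = 100_000_000  # 1 bitcoin = 100 million satoshi
HALVING_INTERVAL = 210_000     # blocks between reward halvings (~4 years)

subsidy = 50 * SATOSHI_PER_BTC  # initial block reward, in satoshi
total = 0
while subsidy > 0:
    total += subsidy * HALVING_INTERVAL  # all blocks in this halving era
    subsidy //= 2  # integer halving; eventually rounds down to zero

print(total / SATOSHI_PER_BTC)  # prints a value just under 21,000,000
```

Because the halved reward is rounded down in whole satoshi, the geometric sum terminates and the total lands slightly below the 21 million figure usually quoted.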

The blockchain is a distributed database that stores a continuously growing list of all the transactions between users. Imagine a Google Drive document with thousands of collaborators around the world who are constantly updating its contents: each editor sees the same information, and when updates are made, every copy of the document shows the new changes. In the same way, the Bitcoin blockchain stores the same duplicate database in thousands of locations throughout the world, which ensures that the database and the network cannot be easily destroyed.

When your hard drive crashes right before your doctoral dissertation is due, you are in big trouble; if you had used Google Docs or Overleaf instead, your data would be easily recoverable. Likewise, to destroy widely replicated open-source data, every single computer holding a copy would have to be destroyed. This feature makes blockchain technology an exceptionally robust method for preserving important information.

In addition to being hard to destroy, Bitcoin is a major technological breakthrough because it solves the double-spend problem. Double-spending is the digital version of counterfeiting fiat currency or debasing a physical commodity money, such as gold. To solve the double-spend problem, Bitcoin relies on the “proof-of-work” consensus mechanism that I explained in my last article for the Lindau Nobel Laureate Meetings Blog. Proof-of-work is an incentive structure in the Bitcoin software that rewards users who make successful changes to the database. The users responsible for these changes are called “miners”. These individuals or groups listen for new incoming Bitcoin transactions using special hardware and create blocks containing a list of the newest transactions that users have broadcast to the network. After approximately ten minutes, a new block is found and the transactions it contains are confirmed by the other computers in the network. Blocks are added one after another in chronological order, creating a chain; hence the name, blockchain. Each miner stores a copy of the entire Bitcoin blockchain and can see all changes that are being made as new transactions are settled on the network. This transparent accounting ensures that users cannot double-spend the same bitcoin or create new bitcoin out of thin air.
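The chaining and mining described above can be condensed into a few lines of code. This is a deliberately minimal sketch, not the real Bitcoin data structures: actual blocks carry Merkle roots, timestamps and a vastly higher difficulty, and the transaction strings here are invented placeholders.

```python
import hashlib
import json

def mine_block(transactions, prev_hash, difficulty=4):
    """Brute-force a nonce until the block's SHA-256 hash starts with
    `difficulty` leading zeros. This search is the 'work' in proof-of-work:
    costly to produce, but trivial for the rest of the network to verify."""
    nonce = 0
    while True:
        block = {"transactions": transactions, "prev_hash": prev_hash, "nonce": nonce}
        digest = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return block, digest
        nonce += 1

# Each block stores the hash of its predecessor, chaining them chronologically.
genesis, genesis_hash = mine_block(["coinbase -> alice: 50"], prev_hash="0" * 64)
block1, block1_hash = mine_block(["alice -> bob: 10"], prev_hash=genesis_hash)

# Tampering with any past transaction changes that block's hash and breaks
# every later block's prev_hash link, which is why double-spends can't go
# unnoticed by nodes holding a copy of the chain.
print(block1_hash.startswith("0000"))  # True
```

With difficulty 4 the search takes tens of thousands of hashes; the real network tunes the difficulty so that, on average, a valid block is found every ten minutes.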

Advancements in technology are a constant feature of the world around us. Artificial intelligence (AI), the Internet of Things (IoT) and geolocation are just some of the buzzwords we must add to our vocabulary, and Bitcoin and blockchain are two more terms on the list of potentially life-changing technologies. Whether the cryptocurrency market’s value will follow the same trajectory as the dot-com stocks remains to be seen; however, blockchain technology, like the Internet, is a revolutionary technology that is most likely here to stay.


Further reading:

Demelza Hays publishes a free quarterly report on cryptocurrencies in collaboration with Incrementum AG and Bank Vontobel. The report is available in English and in German.

The Ageing Brain

Zur deutschen Version

Ageing seems to be an inevitable part of life; in fact, every organism appears to have a pre-set, limited life span, sometimes covering several decades and sometimes merely weeks. Over the course of this life span, one cornerstone of the ageing process is the so-called “Hayflick limit”, named after Leonard Hayflick, who discovered in the 1960s that cultured normal human cells have a limited capacity to divide. Once that limit is reached, cell division stops and the cell enters a state of replicative senescence, a clear cellular marker of ageing. On a molecular level, this limit is due to shortening telomeres. Telomeres are specific regions at the ends of chromosomes; with each cell division, and thus each round of genomic replication, these regions get shorter until replication can no longer be completed.
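The arithmetic behind the Hayflick limit can be sketched with a toy model: a fixed amount of telomere is lost at every division until a critical length is reached and division stops. The telomere length, per-division loss and critical threshold below are illustrative order-of-magnitude values, not measured constants.

```python
# Toy model of the Hayflick limit: each division trims the telomere;
# once it would fall below a critical length, the cell enters senescence.

def divisions_until_senescence(telomere_bp=10_000,
                               loss_per_division_bp=75,
                               critical_bp=5_000):
    divisions = 0
    while telomere_bp - loss_per_division_bp >= critical_bp:
        telomere_bp -= loss_per_division_bp  # telomere shortens each replication
        divisions += 1
    return divisions

print(divisions_until_senescence())  # 66 divisions with these toy numbers
```

With these assumed parameters the cell stops after 66 divisions, the same order of magnitude as the limit Hayflick observed in cultured human cells.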

This explains the basic molecular ageing process for most cell types, but not for neurons: most brain cells do not divide at all, so the Hayflick limit cannot be the reason for their demise. By this logic, the brain and its function should remain stable until the end of our lives. And yet a major hallmark of ageing is the loss of brain matter volume and of cognitive abilities, even in the absence of clear-cut neurodegenerative diseases such as Alzheimer’s disease. Moreover, not everyone seems to be affected by age-related cognitive decline: there are several examples of cognitively sharp and highly functioning individuals well into their 80s, 90s and beyond, while other, seemingly healthy, seniors show severe cognitive impairments at the same age. What causes this difference? What causes our brains to stop functioning, and can we prevent it?

Let’s start with the volume loss: under “healthy” ageing conditions, i.e., without the occurrence of neurodegenerative diseases, brain volume loss is due to a loss of connections rather than a loss of cells. In other words: imagine flying in a helicopter over a thick, green, leafy forest; you can barely see the ground underneath the treetops. This is your young, healthy brain. Years later, you fly over the same forest again. The number of trees has remained roughly the same, but many of them have lost branches and leaves, and now you can see the ground below.

A loss of synapses and dendrites would account for the structural (and functional) changes that occur in the ageing brain. But why are they lost? Recently, several molecular changes that have long been used as senescence markers in dividing cell lines have also been found in ageing neurons. For instance, an increase in senescence-associated beta-galactosidase activity and an age-dependent increase in DNA damage have been observed in aged mouse brains. Under healthy cellular conditions, beta-galactosidase is an enzyme that catalyses sugar metabolism and thus plays a pivotal role in the cellular energy supply. Although the mechanisms behind its accumulation in ageing cells are still unclear, the enzyme is a widely used molecular marker for senescent cells. When it comes to its accumulation in neurons, however, there is some debate about whether these changes are truly age-dependent. The mechanisms behind the accruing DNA damage also remain unclear, since in non-dividing neurons it cannot arise during cell division.


During ageing, neuronal connections are lost. Picture/Credit: ktsimage/


Leaving the cause of such changes aside, what are their consequences? Could they be the reason for age-related cognitive impairments? At least for the changes in galactosidase activity there could be a connection: increased activity of this enzyme results in a lower pH within the cell. This in turn affects the functionality of lysosomes, small vesicles with a very acidic pH that act as a “clean-up crew” for used or malfunctioning proteins in the cell and were first discovered by Nobel Laureate Christian de Duve. If the pH of the entire cell drops, the function of the lysosomes could be disturbed, leaving unwanted proteins to aggregate within the cell. If the cell is “preoccupied” with internal protein aggregates, outward functions such as signal transmission suffer, eventually leading to phenotypic changes such as cognitive decline. In a similar manner, accumulating DNA damage could also lead to functional changes.

Another reason why many synaptic connections are lost with age could be that, as we get older, we learn and experience fewer new things. Neuronal connections, however, must be used to stay intact; otherwise, they degrade.

As with many age-related ailments, life experiences and exposure to toxins also seem to affect our cognitive abilities in old age. For instance, according to a recent study, even moderate long-term alcohol abuse can negatively affect cognition in later years. However, an ongoing study at the Albert Einstein College of Medicine in New York also highlights the importance of ‘good’ genes. The Longevity Genes Project follows more than 600 so-called super-agers aged over 90 and aims to identify genes that promote healthy ageing. According to the lead investigator, Nir Barzilai, the goal is to develop specific drugs based on these genes and thereby halt, or at least slow down, the ageing process.

Aside from certain genes that seem to positively affect the way we age, there is something else that has been shown to even reverse the ageing process and its unpopular companions, such as hair loss, decreasing muscle tone and cognitive decline: the blood of the young. In a much-hyped paper from 2014, researchers from Stanford University showed that infusing old, physically and cognitively impaired mice with the blood of younger mice at least partially reverses the effects of ageing. After treatment, the old mice solved cognitive tasks faster and more accurately, their muscle tone improved, and even their coats looked better again. Ever since then, Tony Wyss-Coray, the senior researcher on the paper, and his colleagues have been trying to identify the specific component that drives this improvement. With his startup company Alkahest, he even ran a first, very small human trial in 2017, which, if nothing else, proved that treatment with young blood was safe. For this trial, the researchers infused plasma (blood without the red blood cells) from young donors into patients with mild to moderate Alzheimer’s disease over four weeks. Although there were no apparent adverse effects of the treatment, the patients also did not improve on cognitive testing. However, the mechanisms underlying Alzheimer’s dementia are distinct from those underlying cognitive decline in healthy ageing individuals. Hence, cognitively impaired but otherwise healthy elderly individuals might in fact benefit more from such infusions.

While we now know a lot more about age-related structural, cellular and molecular mechanisms that could lead to cognitive decline, a specific and unifying culprit has not yet been identified. Nevertheless, Wyss-Coray, Barzilai and others are currently working on a cure for age-related cognitive and physical decline, hoping to turn ageing from an inevitability of life into a minor error that can be cured.


Read More

Topic Cluster: Why Do We Get Old?

Breaking the Shyness Barrier


Sir Christopher Pissarides in discussion with young economists during the 6th Lindau Meeting on Economic Sciences. Picture/Credit: Lisa Vincenz-Donnelly/Lindau Nobel Laureate Meetings


When I was growing up as an economist, first at Essex University and then at the London School of Economics, I kept hearing about the Nobel Prize and all the gossip around it, and I thought those winning it must be some kind of superhumans, that every word that came out of them was a word of wisdom. I guess in economics in my formative years there were indeed some superhumans around: Samuelson, Hicks, Arrow, Friedman, to name a few who made the subject what it is. But it is still puzzling to me why, as human beings, we attach so much importance to the few who have the medal in their hand. And it’s not new: in Classical Greece, a city would destroy part of its walls when one of its young men won the Olympic wreath, because with men like him it did not need walls to protect it. What would I not have given in those days to be in the company of the Nobel Laureates (or the Olympic athletes, for that matter) for a few days? Lindau does just that for a few hundred lucky young people.



Sir Christopher Pissarides during a Press Talk at the 6th Lindau Meeting on Economic Sciences. Picture/Credit: Julia Nimke/Lindau Nobel Laureate Meetings


Of course, today, being on the other side of the fence, I also count myself lucky to be in the company of so many bright young people and so many of my fellow laureates. In Lindau, I enjoy most the quiet discussions around the dinner table or talking with a cup of coffee in hand until the coffee gets cold and undrinkable (please, next time hire an Italian barista!). Lindau succeeds in breaking the shyness barrier between young people still struggling with degree studies and silver-coloured gentlemen who have forgotten what it is like to study for a degree (regrettably, there are no living women laureates in economics), to the extent that the organisers feel they should set aside certain times where the laureates can be on their own. Credit should go to the organisers, Countess Bettina Bernadotte and the staff of the executive secretariat.

I decided to lecture about my more recent interests rather than the work that won me the prize: the future of work in the age of automation and robots. It is a fascinating topic that has attracted a lot of attention on both sides of the argument: the doom-and-gloom camp holds that there will be no meaningful work left for humans and that all the profits from the robots will go to a few wealthy individuals, while the optimists claim that society as a whole will be better off, and the sooner the robots take over the work, the better off we will all be. I belong to the second category, but not unconditionally. A lot of jobs will no doubt be taken over by robots, but many more will be created, ranging from software engineers who will develop and feed the robots with data and instructions to carers who will look after the children and ageing parents of men and women engaged in the new economy. But inequality, and the question of who will get the rewards from the robots’ work, is a big unresolved issue; governments need to work hard to come up with credible policies to reduce poverty and achieve more equality if the optimistic scenario is to materialise. These last topics were hotly debated both at the side gatherings and in the final panel session of the meeting, of which I was fortunate enough to be a member, on a beautiful day in the lush gardens of Mainau Island.


Pissarides talking to young economists during the 6th Lindau Meeting on Economic Sciences. Photos/Credit: Julia Nimke/Lindau Nobel Laureate Meetings


Lindau has been going on for a long time, but it is an evolving organisation. This year, we had several 5-minute presentations by graduate students, which are much better than poster sessions, where you wander around a room with posters hanging on its walls and students standing by them in the hope that someone will pay attention. The 5-minute presentations put laureates and student participants into the picture, enabled the students to say what their research objectives were and generated lively discussions afterwards in the gardens and coffee rooms of the island. If I have a grievance, it is that despite the length of the meeting (we arrived Tuesday and left Sunday), there was still no time to visit the other attractions of Lindau Island, including, from what I am told, a wonderful old library. A free afternoon would have been welcome! This year, there were also more journalists making requests on one’s time for interviews, which interfered with attending other laureates’ presentations — a shame, given how much you learn from them. Journalists can reach many more people than can be present in Lindau, so their presence should be welcome, but where to strike the balance between time spent in interviews and attendance at the scheduled events is not easy to resolve.

Overall, this was an excellent meeting; regrettably, we have to wait three whole years for the next one.



More reviews and highlights of the 6th Lindau Meeting on Economic Sciences can be found in the Annual Report 2017.

A Symphony of Science, Peace and Education

In his speech at the presentation of Peter Badge’s ‘Nobel Heroes’ on 22 September 2017 at the Nobel Peace Center in Oslo, Bishop emeritus Gunnar Stålsett stressed the importance of science in times of global tensions. The former Vice Chair of the Nobel Peace Prize Committee was appointed a member of the Honorary Senate of the Foundation Lindau Nobel Laureate Meetings in 2013.


Gunnar Stålsett at the Lindau Meeting in 2016. Photo/Credit: Julia Nimke/Lindau Nobel Laureate Meetings


“The Nobel Peace Center is like the eye of the storm. Irma, Maria, Kim Jong-un and Donald Trump: in diverse ways they all wreak havoc for millions of people, and threaten disaster for our entire human habitat. To stem these destructive tides of extreme weather and human folly, we need the wisdom of science and the calm of common sense. Against hatred and intolerance we need education and civil courage. This is what Nobel science and Nobel peace is about. This is what we celebrate today: a confluence of academic knowledge and moral conviction. This is wisdom. This makes peace great again.

Every day, we are reminded of great threats to the human family and to our entire habitat. We are on the brink of a nuclear war. Hundreds of millions of lives are threatened by starvation and climatic catastrophes causing mass migration. National, ethnic and cultural extremism affect every region of the world. Violent religious extremism is seen in every religion. Hatred defeats the love of neighbours.
The will of Alfred Nobel emphasised fraternity, not enmity, between nations, the reduction of standing armies, not an escalation in the development of weapons of mass destruction, peace congresses, not unilateralism. His are practical steps even in the 21st century. The concerted efforts of people of good will across social, ethnic, cultural and religious divides, from one generation to the next, are what will bring about a better tomorrow. In a vulnerable world, there are victims and there are heroes. Sometimes heroes sadly fail. Sometimes victims win the day.

In the eye of the storm, it is still but not silent. Peace is dissent, expressed in loud protest. I believe we are all grieved by the tragic onslaught on the Rohingya Muslim population of Myanmar, not forgetting the tragedies of Syria and Yemen – to name but a few of the places where death and destruction reign.

Alfred Nobel wanted to strengthen those who conferred the greatest benefit to mankind. That is his legacy. That is our privilege. Here, in this centre, in the spirit of Nobel, we humbly affirm a foundation of shared human values on which to build the future. Peace is personal. The great Swedish humanist Dag Hammarskjöld, the Secretary-General of the United Nations who died in the pursuit of peace, speaks in words of prayer of the inner challenge we all face: “If only I may grow: firmer, simpler, quieter, warmer.”

Thank you, Countess Bettina Bernadotte, for inviting me to offer a few remarks on this special occasion. I have been greatly inspired by your leadership of the Council for the Lindau Nobel Laureate Meetings. You have continued the wise direction of your predecessors, your father Lennart and your mother Sonja. With eminent supporters and co-workers, such as Professor Wolfgang Schürer and Nikolaus Turner, the Lindau Nobel Laureate Meetings and its institutions have become the most significant academic encounter worldwide between Nobel Laureates and the new generations of scientists.

The occasion here today, the launching of Peter Badge’s ‘Nobel Heroes’, connects Lindau, Stockholm and Oslo as different members of one Nobel family, all dedicated to promoting the will of Alfred Nobel through a symphony of science, peace and education. Peter has used his personal and professional skills to promote the Nobel legacy. No one has met more laureates, literally face to face, than he has. Through his photographic genius, we are brought closer to personalities who have contributed to fulfilling the vision of Alfred Nobel. Life itself makes it impossible to isolate academic, scientific dedication from the challenges of responsible citizenship. I share the wish of Nobel Laureate in Physics Steven Chu when he says: “I hope you, the young Lindau scientists, will be moved to use your considerable talents to help enrich and save the world.” In a nutshell, this is what science is about. This is what peace is about. This is the highest aspiration of the human intellect and the shared yearning of humanity. Whether Lindau or Stockholm or Oslo, we are united at the crossroads of human endeavour for peace and justice.

Let one example suffice: in the history of the Nobel Peace Prize, the abolition of weapons of mass destruction has most frequently been highlighted by the Prize Committee. The Lindau Nobel Laureate Meetings in 1955 issued the Mainau Declaration against the use of nuclear weapons. In 2015, Nobel Laureates initiated the Mainau Declaration on Climate Change. Both were signed by many laureates from all sciences. And both issues are shaping the agenda of heads of state this week at the United Nations.

Tawakkol Karman, Nobel Peace Laureate 2011, at the presentation of Peter Badge’s ‘Nobel Heroes’ at the Nobel Peace Center in Oslo. Photo/Credit: Nobel Peace Center

The presence here today of one of the Nobel Laureates of 2011, Tawakkol Karman, reminds us of the importance of women for peace and in the struggle for freedom of thought, freedom of expression, freedom of faith and freedom from fear caused by oppression and war. Again, by bringing all laureates together, Peter Badge’s work helps us to transcend the categories of sciences, literature and peace and to see ourselves as one mankind in one global community with one mission.

Through the images of Nobel Laureates of all prizes, Peter Badge conveys a message without words. Through his lens we sense the greatness of the human mind and the depth of the human heart. I see his work, in the words of St. Francis of Assisi, as an instrument for peace.

Congratulations on your message of hope, your testimony of perseverance and not least, your trust in the human genius for good. Your interpretation of the past offers healing for the future.”


Cover of ‘Nobel Heroes’, published by Steidl.

In summer 2017, renowned German publisher Gerhard Steidl released the coffee table book ‘Nobel Heroes’ (ISBN 978-3-95829-192-8). It compiles 400 portraits of Nobel Laureates by German photographer Peter Badge. The project, commissioned by the Lindau Nobel Laureate Meetings, is supported by the German foundation Klaus Tschira Stiftung.


This speech and other highlights of 2017 can be found in the Annual Report.