Organs-On-A-Chip: The Future of Drug Testing?

In 2006, six healthy men who participated in a seemingly harmless clinical trial for a new drug were instead left fighting for their lives with multiple organ failure. At Northwick Park Hospital in London, each had received an intravenous infusion of an antibody called TGN1412. German pharmaceutical company TeGenero initiated the first-in-human trial of TGN1412, developed to direct the immune system to fight cancer cells or reduce arthritis pain, after successful tests in animals.

Within 90 minutes of the infusion, all six developed a potentially fatal immune reaction with early symptoms like diarrhea, severe headache and vomiting. The volunteers became critically ill over the next 24 hours and were transferred to the intensive care unit. Episodes of respiratory and renal failure occurred in the following days, and they required mechanical ventilation, dialysis and plasma infusion.

Even though the men fortunately survived, the TGN1412 tragedy shocked the scientific world and revealed the true risks of human clinical trials. The infusion that brought them to the brink of death contained a dose 500 times smaller than the amount that had been safe in animal studies. In an investigative report, the United Kingdom’s Medicines and Healthcare Products Regulatory Agency failed to find any flaw in the trial procedure or in the manufacture of the drug and concluded that the severe reactions were the result of an “unpredicted biological action of the drug in humans.”

The disastrous TGN1412 trial emphasizes the lack of physiologically relevant preclinical models that can predict human responses to new drugs. But what if an entire human lung, heart or kidney — or even the whole body — could be mimicked by a microchip for this sole purpose?


Researchers at the Massachusetts Institute of Technology have developed a microfluidic platform that connects engineered tissues from up to ten organs. Credit: Felice Frankel

Recent research has led to the development of what are called “organs-on-a-chip” (OOCs) to test the effects of drugs as an alternative to animal experiments, which may produce results inconsistent with human trials. These 3D microfluidic cell culture chips — about the size of a USB stick — are made of a translucent, flexible polymer lined with living human cells that can simulate the mechanics and microenvironment of an organ.

Though originally developed for manufacturing computer chips, microscale engineering technologies like replica molding, soft lithography and microcontact printing now allow researchers to manufacture these tiny model organs. The base material is polydimethylsiloxane, a biocompatible silicone polymer, which serves as a support for tissue attachment and organisation. From there, a biomaterial such as collagen acts as the extracellular matrix and scaffolding for the organ cells.

Since 2010, OOC models have been developed for the brain, lung, skeletal muscle, heart, skin, kidney, liver, gut and bone. But the goal for many laboratories is a full “human-on-a-chip” that would bring together several organs of the body for drug and therapeutic studies. Last year, a team of researchers announced the production of an integrated three-tissue OOC system comprising liver, heart and lung. The resulting drug responses depended on interactions among the three tissue types, which reinforces the need to combine multiple tissue or organ types within a single microfluidic device. And earlier this year, engineers from the Massachusetts Institute of Technology created a microfluidic platform that connects engineered tissues from up to ten organ types.

Although the OOC remains an in vitro platform, it aims to replicate the in vivo environment of an organ with fabricated microwells, fluidic channels and porous membranes. It goes far beyond the conventional 2D cell culturing platforms, which feature a monolayer of cells to which a drug is then added. Such oversimplified models fail to capture the 3D cellular microenvironments in the human body, making it difficult to know how the drug will act once inside a person.

While some 3D cell culturing platforms have now been developed, such as cell aggregates and spheroids, they cannot capture any mechanical or electrical changes in cells. For instance, lung cells experience significant mechanical stress during inhalation, and the changes in surface-area-to-volume ratio that result play a critical role in gas exchange.

The first lung-on-a-chip, reported in 2010, had three embedded channels: two side channels and one main channel divided in half lengthwise by a membrane. The suction pressure within the two side channels could be controlled to stretch the centre membrane and mimic the mechanical stress during inhalation. Meanwhile, the porous membrane was seeded with cells to replicate the alveolar-capillary barrier in the lungs. In a toxicology experiment, the researchers added various nanoparticles to the alveolar microchannel and found that cyclical mechanical strain can accentuate the toxic and inflammatory response of the lung.


The placenta-on-a-chip, developed by a team at the University of Pennsylvania, contains two microfluidic channels that represent the maternal and fetal circulatory systems, and a membrane with living cells from the placental barrier that separates the two. Picture/Credit: University of Pennsylvania


Several laboratories have created hearts-on-a-chip to study different aspects of the cardiovascular system, such as the effects on drug delivery when stimulated with electrical and mechanical forces. A study that focused on liver-on-a-chip combined it with two other organ modules — uterus and breast — to test the toxicity of a chemotherapy drug after being metabolised by the liver. A gut-on-a-chip has been developed to recreate the human intestinal environment in order to test the absorption of oral drugs. And a lab at the University of Pennsylvania recently made a placenta-on-a-chip to mimic the placental barrier between the maternal and fetal circulatory systems. The applications for OOCs seem endless as the field continues to expand and evolve.

Then, of course, there are the practical advantages of OOCs, such as savings in cost, time and labour. Spending on drug development in the U.S. has increased over the last two decades, while the number of new drugs approved annually by the Food and Drug Administration has declined. Today, developing a new clinically applicable drug costs nearly $2.5 billion, and the drawn-out process itself takes 10 to 12 years on average. Some experts believe that OOCs could curb these costs and timelines by eliminating ineffective drug candidates as early in the process as possible.

While the research on OOCs continues to progress, much work is still required before a human-on-a-chip system becomes standard for drug development. Current models, even those that incorporate multiple tissue types, remain too simplified for researchers to rely on heavily. In the future, some experts foresee a modularised approach with libraries covering all levels and types of organs, from which laboratories could easily combine prebuilt modules for a specific application.

CRISPR-Cas: The Holy Grail Within Pandora’s Box



“With great power comes great responsibility” – although most often attributed to the Marvel comic Spider-Man, this phrase has become synonymous with new discoveries and techniques that harbour great potential but could also go terribly awry if not handled with enough care. What might happen if the novel gene-editing tool CRISPR-Cas were employed without the greater good in mind can be seen in the new Hollywood movie ‘Rampage’. This movie adaptation of a video game portrays the disastrous, albeit not scientifically accurate, consequences of reckless gene editing: the hero must fight formerly friendly pet animals that have been turned into monsters with the help of CRISPR. Undoubtedly, CRISPR-Cas is one of the most ground-breaking scientific developments of recent years, but it is also still heavily debated among scientists and the public. So much so that even Hollywood took note.

But what exactly is CRISPR-Cas and how does it work? CRISPR (pronounced “crisper”) stands for Clustered Regularly Interspaced Short Palindromic Repeats, which are part of the bacterial “immune” system. Jennifer Doudna and Emmanuelle Charpentier, together with their postdocs Martin Jinek and Krzysztof Chylinski, developed the gene-editing tool, which uses RNA guide sequences specific to the target gene as well as the bacterial enzyme Cas9 to cut the target sequence out of the genome. If different Cas enzymes accompany the RNA guide sequences, the system can also be programmed to insert or replace specific gene sequences. With these systems, researchers are able to permanently modify genes in most living cells and organisms. The breakthrough discovery was published in a Science paper in August 2012. The beauty of the technique: it is fast, cheap and relatively simple. As a consequence, the scientific community has seen a massive increase in CRISPR-related projects, papers and patents in the years since 2012.
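The targeting logic can be sketched in a few lines of code. This is a toy string-matching model, not molecular biology: real Cas9 additionally requires a PAM motif (NGG) adjacent to the target, tolerates some mismatches and makes a double-strand break rather than neatly deleting the sequence; the sequences below are invented.

```python
# Toy model of CRISPR-Cas9 targeting: the guide sequence directs where
# the genome is cut. Real Cas9 also needs an adjacent "NGG" PAM motif;
# this sketch only illustrates the sequence-matching idea.

def find_cut_site(genome: str, guide: str) -> int:
    """Return the index where the guide matches the genome, or -1."""
    return genome.find(guide)

def excise(genome: str, guide: str) -> str:
    """Simulate cutting the guide-matched stretch out of the genome."""
    i = find_cut_site(genome, guide)
    if i == -1:
        return genome  # no match: genome left untouched
    return genome[:i] + genome[i + len(guide):]

genome = "ATGCCGTACGGATTCAGCTA"
guide = "TACGGATT"
edited = excise(genome, guide)
print(edited)  # ATGCCGCAGCTA
```

Programming the system to insert or replace a sequence, as described above, would amount to splicing new letters in at the cut site instead of simply joining the flanking ends.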

In particular, the medical research community adopted this new tool early on in the hope that precise DNA editing could eliminate genetic diseases and allow for targeted gene therapies to treat genetic causes of disease. In fact, researchers recently managed to reduce the disease burden of such debilitating diseases as Huntington’s disease and muscular atrophy via targeted gene editing in mice. What is especially impressive about these first successful trials is that the researchers found a way to introduce the CRISPR-Cas system directly into the body. Previous approaches to improve immune therapies for certain cancers worked differently: blood or immune cells were isolated, their genetic code altered to better fight the tumour cells, and then reintroduced into the body. For most tissues, however, this approach was not feasible. Thus, both guide RNA and enzyme had to be introduced directly into the body and aimed at their target tissue. However, both RNA and enzyme are large molecules that don’t diffuse into cells easily and don’t survive well in blood. Therefore, some pharmaceutical companies have successfully started pairing the two components with fatty particles in order to facilitate their uptake into the cells. If these trials prove successful, virtually any disease – from hepatitis B to high cholesterol – could potentially be eradicated by the CRISPR-Cas system.


Illustration of genetically modified lymphocytes attacking a cancer cell. Credit: man_at_mouse/


Moreover, the CRISPR system is not only tremendously useful for treating diseases; it is also being developed into a promising new diagnostic tool: SHERLOCK – another pop-culture icon, although here the acronym stands for Specific High-sensitivity Enzymatic Reporter unLOCKing.

In 2017, a team of researchers in Boston first described an adapted CRISPR-system in which the guide sequence targets RNA (rather than DNA) as a rapid, inexpensive and highly sensitive diagnostic tool. The end product is a miniature paper test that visualizes the test result via a colourful band – similar to a pregnancy test. According to the scientists, SHERLOCK can detect viral and bacterial infections, find cancer mutations even at low frequencies, and could even detect subtle DNA sequence variations known as single nucleotide polymorphisms that are linked to a plethora of diseases.

In a new study, published earlier this year, the researchers used SHERLOCK to detect cell-free tumour DNA in blood samples from lung cancer patients. Moreover, their improved diagnostic tool is supposed to be able to detect and even distinguish between the Zika and the dengue virus.

The detection, rather than editing, of genetic information is based on the Cas13a enzyme, a CRISPR-associated protein that can likewise be programmed to bind a specific piece of RNA. Cas13 can target viral genomes, genetic information underlying antibiotic resistance in bacteria or mutations that cause cancer. After Cas13 has cut its target, it continues cutting additional strands of synthetic RNA that are added to the test solution. Once these additional strands are cut by Cas13, they release a signalling molecule, which finally leads to the visible band on the paper strip. The researchers have developed their diagnostic test to analyse and indicate up to four different targets per test.
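This amplification step – one recognition event triggering many reporter cleavages – can be illustrated with a minimal sketch. The numbers are purely illustrative, not kinetic parameters from the SHERLOCK papers:

```python
# Toy model of SHERLOCK's readout: Cas13, once activated by its target RNA,
# cleaves bystander reporter strands, each releasing a signal molecule.

def sherlock_signal(target_present: bool, reporters: int,
                    cuts_per_enzyme: int = 1000) -> int:
    """Signal = number of reporter strands cleaved by activated Cas13."""
    if not target_present:
        return 0  # Cas13 stays inactive without its target
    return min(reporters, cuts_per_enzyme)

# A single recognition event is amplified into many cleaved reporters,
# which is what makes the band visible even at low target amounts.
print(sherlock_signal(True, 10_000))   # 1000
print(sherlock_signal(False, 10_000))  # 0
```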

However, it is also this seemingly tireless activity of the cutting enzymes in the CRISPR toolbox that has led many researchers to question just how targeted and controlled this enzymatic reaction can be. In May 2017, a paper in Nature Methods reported a huge number of unexpected off-target effects, essentially labelling the gene-editing tool as unsafe. The paper brought the heated debate surrounding the predictability and safety of this new tool to the forefront of the field; however, in March 2018, the paper was retracted after several researchers called its methods, and in particular its controls, into question. The off-target effects observed earlier might have been due to different genetic backgrounds of the mice rather than to the CRISPR-Cas method itself.

Nevertheless, many researchers caution their colleagues that the CRISPR-system and its possible side-effects are not yet fully understood. Another piece of the puzzle could be a natural off-switch for Cas9 that has been found in another bacterium and that could help control the enzymatic gene-editing system in the future.

However, possible side- or off-target effects are by no means the only fodder for heated debates surrounding the CRISPR toolbox: in 2015, Chinese scientists reported that they had edited the genome of a human embryo. Although the embryo was not viable, the work sparked a heated ethical discussion and conjured up many negative connotations regarding genetically engineered humans.

Leaving possible medical applications aside, CRISPR-Cas also holds great potential for agriculture. The gene-editing tool could help generate plants that are resistant, or at least more tolerant, to fungi, insects or extreme weather phenomena such as heat, drought or massive downpours, which have occurred more often in recent years due to climate change and can wreck entire harvests. The plants could also be made to produce higher yields or provide certain vitamins (e.g. golden rice), thus providing a huge relief in the fight against world hunger. However, the debate is still ongoing as to whether CRISPR-induced genetic modifications result in a genetically modified organism (GMO) that should be labelled as such. GMO products are viewed critically by many consumers and will thus be difficult to market.

The fact remains that the CRISPR-Cas system is a significant milestone in modern science, with seemingly endless potential ranging from diagnostic tests to curing nearly any disease there is, easing the shortage of transplant organs and alleviating world hunger. And yet, a system as complex and complicated as CRISPR-Cas needs to be met with due diligence and care in order to minimise risks and side effects. Moreover, the decision to alter the genetic code of (human) embryos requires in-depth ethical and moral deliberations.




Visualising the Genome’s 3D Code


The genetic code is a sequence of letters spelling instructions for a cell’s normal growth, repair and daily housekeeping. Now, evidence is growing for a second code contained in DNA’s tangled structure. The location and packing density of nucleic acids may control which genetic instructions are accessible and active at any given time. Disrupted genome structure could contribute to diseases such as cancer and physical deformities.

To fit inside a cell, DNA performs an incredible contortionist feat, squeezing two metres of material into a nucleus only a few micrometres wide. DNA compacts itself by first wrapping around histone proteins, forming a chain of nucleosomes that looks like beads on a string. Nucleosomes then coil into chromatin fibres that loop and tangle like a bowl of noodles.
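A back-of-envelope calculation conveys the scale of that contortionist feat. Assuming a nucleus about 5 µm across (a typical textbook figure, not a value from this article):

```python
# Back-of-envelope: the linear compaction needed to fit ~2 m of DNA
# into a nucleus ~5 micrometres wide (nucleus size is an assumed value).

dna_length_m = 2.0
nucleus_diameter_m = 5e-6

linear_ratio = dna_length_m / nucleus_diameter_m
print(f"{linear_ratio:.0e}")  # 4e+05 — DNA is ~400,000x longer than the nucleus is wide
```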

To reveal the structural genetic code, researchers examine chromatin from its sequence of nucleotides to the organisation of an entire genome. As they develop microscopy techniques to better visualise the details of chromatin structure, even in living cells, they’re better able to explore how structural changes relate to gene expression and cell function. These developing pictures of chromatin structure are providing clues to some of the largest questions in genome biology.


Chromatin compartments

A prevailing theory about chromatin structure is that nucleosomes coil into 30 nm fibres, which aggregate to form structures of increasing width, eventually forming chromosomes. The evidence for this comes from observing 30 nm and 120 nm wide fibres formed by DNA and nucleosomes purified from cells.

A team led by Clodagh O’Shea at the Salk Institute for Biological Studies wondered what chromatin looked like in intact cells. In 2017, the researchers developed a method to visualise chromatin in intact human cells that were resting or dividing. They coated the cells’ DNA with a material that absorbs osmium ions, enabling the nucleic acid to scatter an electron beam more strongly and thereby appear in an electron micrograph. Next, they used an advanced electron microscopy technique that tilts samples in an electron beam and provides structural information in 3D. The researchers noticed that chromatin formed a semi-flexible chain, 5 to 24 nm wide, that was densely packed in some parts of the nucleus and loosely packed in others.


New method to visualise chromatin organisation in 3D within a cell nucleus (purple): chromatin is coated with a metal cast and imaged using electron microscopy (EM). Front block: illustration of chromatin organisation; middle block: EM image; rear block: contour lines of chromatin density from sparse (cyan and green) to dense (orange and red). Credit: Salk Institute

“We show that chromatin does not need to form discrete higher-order structures to fit in the nucleus,” said O’Shea. “It’s the packing density that could change and limit the accessibility of chromatin, providing a local and global structural basis through which different combinations of DNA sequences, nucleosome variations and modifications could be integrated in the nucleus to exquisitely fine-tune the functional activity and accessibility of our genomes.”

Along with packing density, location is another component of chromatin structural organisation. Researchers have known for three decades that chromatin forms loops, drawing genes closer to sequences that regulate their expression. Biologist Job Dekker, at the University of Massachusetts Medical School in Worcester, and his colleagues have developed several molecular biology-based techniques to identify neighbouring sections of chromatin 200,000 to one million bases long. One of these techniques, called Hi-C, maps chromatin structure using its sequence.

In Hi-C, researchers first chemically crosslink the nucleic acid to join portions of chromatin that are near each other. Then they use enzymes to cut the crosslinked chromatin, label the dangling ends with a modified nucleotide, and reconnect only crosslinked fragments. Finally, the researchers isolate the chromatin fragments, sequence them, and match the sequences to their position in a cell’s whole genome.
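The bookkeeping at the end of that pipeline can be sketched conceptually: each sequenced ligation product maps to two genomic positions, and binning and counting those pairs yields a symmetric contact matrix. The bin size and coordinates below are invented for illustration:

```python
# Conceptual Hi-C bookkeeping: each crosslinked fragment pair maps to two
# genomic positions; binning and counting the pairs yields a contact matrix.

def contact_matrix(pairs, genome_length, bin_size):
    n_bins = -(-genome_length // bin_size)  # ceiling division
    matrix = [[0] * n_bins for _ in range(n_bins)]
    for pos_a, pos_b in pairs:
        i, j = pos_a // bin_size, pos_b // bin_size
        matrix[i][j] += 1
        if i != j:
            matrix[j][i] += 1  # contacts are symmetric
    return matrix

# Three fragment pairs on a hypothetical 1 Mb genome, 250 kb bins:
pairs = [(100_000, 120_000), (100_000, 900_000), (600_000, 650_000)]
m = contact_matrix(pairs, 1_000_000, 250_000)
print(m[0][0], m[0][3], m[2][2])  # 1 1 1
```

Regions that contact each other unusually often in such a matrix are exactly what analyses like TAD-calling look for.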

In 2012, Bing Ren, at the University of California, San Diego School of Medicine, and colleagues used Hi-C to identify regions of chromatin they called topologically associating domains (TADs). Genes within the same TAD interact with each other more than with genes in other TADs, and domains undergoing active transcription occupy different locations in a nucleus than quiet domains. Altered sequences within a TAD can lead to cancer and malformed limbs in mice.

The basic unit of a TAD is thought to be loops of chromatin pulled through a protein anchor. Advanced computer models of chromatin folding recreate chromatin interactions observed using Hi-C when they incorporate loop formation. But genome scientists still aren’t sure which proteins help form the loops. Answering that question addresses a basic property of DNA folding and could point to a cellular mechanism for disease through mutations in a loop anchor protein.


Super resolution microscopy

Advanced optical microscopy techniques, based on a method recognised by the 2014 Nobel Prize in Chemistry, are also providing information about how regions of chromatin tens of bases long could influence cell function. Super-resolution fluorescence microscopy enhances the resolution of light microscopes beyond the 300-nm diffraction limit. This technique uses a pulse of light to excite fluorescent molecules, and then applies various tricks to suppress light shining from those molecules not centred in the path of the excitation beam. The result is the ability to image a single fluorescent molecule.

Biological molecules, however, can carry many fluorescent labels, making it difficult to localise a single molecule. Using fluorescent labels that switch on and off, researchers activate and deactivate fluorescent molecules in specific regions at specific times. Then they stitch the images together to capture the locations of all the fluorescent tags.
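The localisation step at the heart of these methods can be illustrated with a toy calculation: a single emitter's blurred image can be pinned down to sub-pixel precision by taking its intensity-weighted centroid. The pixel values below are invented, and real analysis pipelines fit a point-spread-function model rather than a plain centroid:

```python
# Sketch of single-molecule localisation: a fluorophore's image is a
# diffraction-limited blur, but its centre can be located far more
# precisely than one pixel by computing the intensity-weighted centroid.

def centroid(image):
    """Locate a single emitter as the intensity-weighted centre of its blur."""
    total = x_sum = y_sum = 0.0
    for y, row in enumerate(image):
        for x, value in enumerate(row):
            total += value
            x_sum += x * value
            y_sum += y * value
    return x_sum / total, y_sum / total

# A blurred spot spread over ~3 pixels, slightly brighter to the right:
spot = [[1, 2, 1],
        [2, 8, 4],
        [1, 2, 1]]
cx, cy = centroid(spot)
print(round(cx, 2), round(cy, 2))  # 1.09 1.0 — sub-pixel precision
```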

Xiaowei Zhuang, at Harvard University, and colleagues used super resolution microscopy to follow how chromatin packing changed based on its epigenetic modifications. Their method provided images on a scale of kilobases to megabases, a resolution between that of pure sequence information and the large-scale interactions available through Hi-C. Gene regulation and transcription operate on this scale. This technique also offers the potential of imaging nanometre-scale structures in live cells.


Structural dictionary

Using a variety of methods to capture static and dynamic cellular changes, researchers around the world are working to write a dictionary of the structural genetic code throughout space and time. The 4D Nucleome Network, funded by the National Institutes of Health, and the 4D Genome Project, funded by the European Research Council, are identifying a vocabulary of DNA structural elements and relating how that structure impacts gene expression. They’re also curious about how chromatin structure changes over the course of normal development as well as in diseases such as cancer and premature aging. With many basic questions outstanding, much remains to be discovered along the way.

Cryptocurrencies and the Blockchain Technology


During the late 1990s, investors were eager to back any company with an Internet-related name or a “.com” suffix. Today, the word “blockchain” has a similar effect. Like the Internet, blockchain is an open-source technology that becomes increasingly valuable as more people use it, due to what economists call “the network effect”. Blockchains allow digital information to be transferred from one individual to another without an intermediary. Bitcoin was the first use of blockchain technology. However, volatility, transaction fees and an uncertain legal framework have stalled Bitcoin’s widespread adoption.

Bitcoin’s creator, Satoshi Nakamoto, combined several ideas from game theory and information science to build the system. The basic idea behind blockchain technology originated with two cryptographers, Stuart Haber and Scott Stornetta, whose research focused on how to chronologically link a list of transactions. Today, when people refer to a blockchain, they mean a distributed database that keeps track of data. The type of data that the Bitcoin blockchain tracks is financial: Bitcoin users can send accounting units that store value from one user’s account to another without intermediaries. Since the Bitcoin blockchain transmits financial data and relies on cryptography, its accounting units are referred to as cryptocurrencies. The accounting units are stored in digital wallets, which function like bank accounts.
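Haber and Stornetta's idea of chronological linking can be sketched in a few lines: each block stores the hash of its predecessor, so tampering with any historical entry breaks every later link. This is a toy illustration, not the real Bitcoin block format:

```python
import hashlib

# Minimal hash chain: each block commits to its predecessor by storing
# that block's hash, so rewriting history invalidates all later blocks.

def block_hash(prev_hash: str, data: str) -> str:
    return hashlib.sha256((prev_hash + data).encode()).hexdigest()

chain = [{"prev": "0" * 64, "data": "genesis"}]
for tx in ["alice->bob:5", "bob->carol:2"]:
    prev = block_hash(chain[-1]["prev"], chain[-1]["data"])
    chain.append({"prev": prev, "data": tx})

def verify(chain) -> bool:
    """Each block's stored 'prev' must equal the hash of the block before it."""
    return all(
        chain[i]["prev"] == block_hash(chain[i - 1]["prev"], chain[i - 1]["data"])
        for i in range(1, len(chain))
    )

print(verify(chain))                 # True
chain[1]["data"] = "alice->bob:500"  # tamper with history...
print(verify(chain))                 # False: the chain of hashes breaks
```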

As a cryptocurrency, Bitcoin was designed to be a store of value and a payment system combined in one. Bitcoin has a fixed supply capped at 21 million and the currency’s inflation rate is programmed to decrease by half about every four years. Since Bitcoin was launched in 2009, the transactions on the network have doubled every year and the value of Bitcoin has increased by 100,000 percent. The current market price of approximately $11,500 is the result of the cryptocurrency’s limited supply and increasing demand.
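The 21 million cap follows from the issuance schedule: the reward for adding a block started at 50 bitcoins and halves every 210,000 blocks (roughly four years), a geometric series that converges to just under 21 million. A quick sketch (the real protocol rounds subsidies down to whole satoshis, which this simplification ignores):

```python
# Summing Bitcoin's halving schedule: 50 BTC per block, halved every
# 210,000 blocks, down to the smallest unit (1 satoshi = 1e-8 BTC).

subsidy = 50.0
blocks_per_era = 210_000
total = 0.0
while subsidy >= 1e-8:
    total += subsidy * blocks_per_era
    subsidy /= 2

print(round(total / 1e6, 2))  # 21.0 — just under 21 million coins
```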

The blockchain is a distributed database that stores a continuously growing list of all the transactions between users. Imagine a Google Docs document with thousands of collaborators around the world constantly updating the information it contains. As with Google Docs, each editor sees the same information, and when updates are made, every editor’s copy shows the new changes. In the same way, the Bitcoin blockchain stores the same duplicate database in thousands of locations throughout the world, which ensures that the database and the network cannot be easily destroyed.

When your hard drive crashes right before your doctoral dissertation is due, you are in big trouble. If you had used Google Docs or Overleaf instead, your data would be easily recoverable. Similarly, to destroy a piece of open-source software, every single computer that has downloaded it would have to be destroyed. This feature makes blockchain technology an exceptionally robust method for preserving important information.

In addition to being hard to destroy, Bitcoin is a major technological breakthrough because it solves the double-spend problem. Double-spending is the digital version of counterfeiting fiat currency or debasing a physical commodity money, such as gold. To solve the double-spend problem, Bitcoin relies on the “proof-of-work” consensus mechanism that I explained in my last article for the Lindau Nobel Laureate Meetings Blog. Proof-of-work is an incentive structure in the Bitcoin software that rewards Bitcoin users who make successful changes to the database. The users responsible for these changes are called “miners”. These individuals or groups of individuals listen for new incoming Bitcoin transactions using special hardware. Miners create blocks containing a list of the newest transactions that have been broadcast to the network by users. After approximately ten minutes, these transactions are confirmed by all of the computers in the network. Blocks are then added one after the other in chronological order, creating a chain – hence the name, blockchain. Each miner stores a copy of the entire Bitcoin blockchain and can see all changes that are being made as new transactions are settled on the network. This transparent accounting ensures that users cannot double-spend the same bitcoin or create new bitcoins out of thin air.
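The proof-of-work search itself can be sketched as a brute-force hunt for a nonce whose hash meets a difficulty target. The difficulty here is kept tiny so the toy finishes instantly; Bitcoin's real target makes the same search astronomically expensive:

```python
import hashlib

# Toy proof-of-work: find a nonce so that sha256(block_data + nonce)
# starts with a required number of zero hex digits. Finding the nonce is
# expensive; checking it takes a single hash.

def mine(block_data: str, difficulty: int = 4) -> int:
    """Return the first nonce whose hash has `difficulty` leading zeros."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce
        nonce += 1

nonce = mine("block with new transactions", difficulty=4)
digest = hashlib.sha256(f"block with new transactions{nonce}".encode()).hexdigest()
print(digest[:4])  # 0000 — cheap to verify, expensive to find
```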

Advancements in technology are a constant feature of the world around us. Artificial intelligence (AI), the Internet of Things (IoT) and geolocation are just some of the buzzwords we have had to add to our vocabulary. Bitcoin and blockchain are two more terms to add to the list of potentially life-changing technologies. Whether the cryptocurrency market’s value will follow the same trajectory as the dot-com stocks remains to be seen; however, blockchain technology, like the Internet, is revolutionary and most likely here to stay.


Further reading:

Demelza Hays publishes a free quarterly report on cryptocurrencies in collaboration with Incrementum AG and Bank Vontobel. The report is available in English and in German.

The Ageing Brain


Ageing seems to be an inevitable part of life; in fact, every organism appears to have a pre-set, limited life span – sometimes several decades, sometimes merely weeks. Over the course of this life span, one cornerstone of the ageing process is the so-called “Hayflick limit”, named after Leonard Hayflick, who discovered in the 1960s that cultured normal human cells have a limited capacity to divide. Once that limit is reached, cell division stops and the cell enters a state of replicative senescence – a clear cellular marker of ageing. On a molecular level, this limit is due to shortening telomeres. Telomeres are specific regions at the ends of chromosomes, and with each cell division, and thus each genomic replication, these regions get shorter until replication can no longer be completed.
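The telomere mechanism lends itself to a toy model: shorten the telomere by a fixed amount per division and stop once a critical length is reached. All numbers below are illustrative, not measured values:

```python
# Toy model of the Hayflick limit: each division shortens telomeres by a
# fixed amount; below a critical length, the cell enters senescence.

def divisions_until_senescence(telomere_bp: int = 10_000,
                               loss_per_division_bp: int = 150,
                               critical_bp: int = 3_000) -> int:
    divisions = 0
    while telomere_bp - loss_per_division_bp >= critical_bp:
        telomere_bp -= loss_per_division_bp
        divisions += 1
    return divisions

print(divisions_until_senescence())  # 46 — in the 40-60 range Hayflick observed
```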

This explains the basic molecular ageing process for most cell types – not, however, for neurons. Brain cells do not divide at all; therefore, the Hayflick limit cannot be the reason for their demise. By this logic, the brain and its function should remain stable until the end of our lives. And yet, a major hallmark of ageing is the loss of brain matter volume and of cognitive abilities – even in the absence of clear-cut neurodegenerative diseases such as Alzheimer’s disease. Moreover, not everyone seems to be affected by age-related cognitive decline: there are several examples of cognitively sharp and highly functioning individuals well into their 80s, 90s and beyond, while other, seemingly healthy, seniors show severe cognitive impairments at the same age. What causes this difference? What causes our brains to stop functioning, and can we prevent it?

Let’s start with the volume loss: under “healthy” ageing conditions, i.e. without the occurrence of neurodegenerative diseases, brain volume loss is due to a loss of connections rather than a loss of cells. In other words: imagine flying in a helicopter over a thick, green, leafy forest – you can barely see the ground underneath the treetops; this is your young, healthy brain. Years later, you fly over the same forest again. The number of trees has remained roughly the same, but many of them have lost branches and leaves, and now you can see the ground below.

A loss of synapses and dendrites would account for the structural (and functional) changes that occur in the ageing brain. But why are they lost? Recently, several molecular changes that have long been used as senescence markers in dividing cell lines have also been found in ageing neurons. For instance, an increase in senescence-associated beta-galactosidase activity and an age-dependent increase in DNA damage have been observed in aged mouse brains. Under healthy cellular conditions, beta-galactosidase is an enzyme that catalyses sugar metabolism and thus plays a pivotal role in the cellular energy supply. Although the mechanisms behind its accumulation in ageing cells are still unclear, the enzyme is a widely used molecular marker for senescent cells. When it comes to its accumulation in neurons, however, there is some debate as to whether these changes are truly age-dependent. The mechanisms behind the accruing DNA damage also remain unclear, since in non-dividing neurons it cannot arise from errors during cell division.


During ageing, neuronal connections are lost. Picture/Credit: ktsimage/


Leaving the cause of such changes aside – what are their consequences? Could they be the reason for age-related cognitive impairments? At least for the changes in galactosidase activity there could be a connection: increased activity of this enzyme results in a lower pH within the cells. This in turn affects the functionality of lysosomes – small vesicles with a very acidic pH that function as a “clean-up crew” for used or malfunctioning proteins in the cell, and which were first discovered by Nobel Laureate Christian de Duve. If the pH of the entire cell drops, the function of the lysosomes could be disturbed, leaving unwanted proteins to aggregate within the cell. If the cell is “preoccupied” with internal protein aggregates, outward functions such as signal transmission suffer, eventually leading to phenotypic changes such as cognitive decline. In a similar manner, accumulating DNA damage could also lead to functional changes.

Another reason why many synaptic connections are lost with age could be that as we get older, we learn and experience fewer new things. Neuronal connections, however, must be used to stay intact, otherwise they degrade.

As with many age-related ailments, life experiences and exposure to toxins also seem to affect our cognitive abilities in old age. For instance, according to a recent study, even moderate long-term alcohol abuse can negatively affect cognition in later years. However, an ongoing study at the Albert Einstein College of Medicine in New York also highlights the importance of ‘good’ genes. The Longevity Genes Project follows more than 600 so-called super-agers aged over 90 and is aiming to identify certain genes that promote healthy ageing. According to the lead investigator, Nir Barzilai, the goal is to develop specific drugs based on these genes and thereby halt or at least slow down the aging process.

Aside from certain genes that seem to positively affect the way we age, there is something else that has been shown to even reverse the aging process and its unpopular companions such as hair loss, decreasing muscle tone and cognitive decline: the blood of the young. In a much-hyped paper from 2014, researchers from Stanford University showed that infusing old, physically and cognitively impaired mice with the blood of younger mice at least partially reverses the effects of ageing. After treatment, the old mice solved cognitive tasks faster and more accurately, their muscle tone improved, and even their coats looked better again. Ever since, Tony Wyss-Coray, the senior researcher of the paper, and his colleagues have been trying to identify the specific component that drives this improvement. With his startup company Alkahest, he even ran a first, very small human trial in 2017, which – if nothing else – proved that the treatment with young blood was safe. For this trial, the researchers infused plasma (the cell-free component of blood) from young donors into patients with mild to moderate Alzheimer’s disease for four weeks. Although there were no apparent adverse effects of the treatment, the patients also did not improve in cognitive testing. However, the mechanisms underlying Alzheimer’s dementia are distinct from those underlying cognitive decline in healthy ageing individuals. Hence, cognitively impaired but otherwise healthy elderly individuals might in fact benefit more from such infusions.

While we now know a lot more about age-related structural, cellular and molecular mechanisms that could lead to cognitive decline, a specific and unifying culprit has not yet been identified. Nevertheless, Wyss-Coray, Barzilai and others are currently working on finding a cure for age-related cognitive and physical decline, thereby hoping to turn aging from an inevitability of life into a minor error that could be cured.


Read More

Topic Cluster: Why Do We Get Old?

Molecules at Near-Atomic Resolution

The Nobel Prize Award Ceremony is traditionally held on 10 December, the day Alfred Nobel died in 1896, and the Nobel Week in Stockholm is arranged around this date. The three new Nobel Laureates in Chemistry are all expected to attend: Jacques Dubochet, Richard Henderson and Joachim Frank. They are honoured for their contributions to the development of cryo-electron microscopy, or cryo-EM. Besides their scientific achievements, it is also worth looking at their respective personalities – they are all real characters. And although all three are very different indeed, they have one thing in common: They were all asked countless times by their colleagues why they were pursuing a seemingly ‘hopeless topic’. As we can see this week, sometimes it pays off to work on subjects not many people are interested in.

The Francophone Swiss Jacques Dubochet describes himself on his website as the “first official dyslexic in the canton of Vaud – this permitted being bad at everything.” Apparently, his reading problems caused his school grades to slide so much that his parents sent him to a boarding school to pass his federal maturity exam. “I was a terrible student and now I’m a Nobel Laureate: any questions?” he says and smiles – Dubochet is always good for a joke.

Another funny incident: A few weeks after the Nobel Prize announcement in early October, Dubochet attended a conference at EMBL in Heidelberg, the very institute where he developed his famous method of producing vitrified water for cryo-electron microscopy. As he entered one of the labs and saw a cryo-electron microscope standing there, he commented: “Now, this is a wonderful machine, but, unfortunately, I forgot what it can do.” Everybody laughed, because he is one of the main pioneers of the field.



The Swiss biophysicist Jacques Dubochet (centre) with Gábor Lamm (left) and Gareth Griffiths at the 2015 Lennart Philipson Award Ceremony at EMBL in Heidelberg. Photo: EMBL Alumni Association


A long-time challenge for cryo-EM had been the fact that the natural environment of most molecules is water, but water evaporates in the microscope’s vacuum. Freezing is one solution, but then the water crystallises, distorting both the sample and the picture. Dubochet came up with an innovative approach: He cooled the samples so rapidly that the water molecules had no time to crystallise. This left the molecules embedded in a ‘glass pane’ of vitrified water, frozen in time, orientation and natural shape. These cells weren’t alive anymore, but they were in a close-to-living state.

Though he’s an excellent scientist, Jacques Dubochet is also a man of many talents. “He has a unique power to pull his audience in,” says Marek Cyrklaff, Dubochet’s former EMBL colleague and long-term friend. “And he has a special structure of thinking, like being a dual person in one body,” Cyrklaff continues. “On the one hand, he’s a hardcore physicist, on the other, a top philosopher. The latter helps him to see far ahead, have visions, the former allows him to approach these goals in a structured way.”

He is also spontaneous, unconventional and “against all dogmas, in science and politics alike.” During his twenty years at the University of Lausanne, he has “devoted a lot of effort to the curriculum ‘biology and society’, and Lausanne at that time was unique in developing this curriculum for all our students,” as Dubochet himself explains on the telephone with Adam Smith from the Nobel Foundation. “It was not the kind of additional piece of education, it was a core programme in the study of biology. (…) The idea of this course is to make sure that our students are as good citizens as they are good biologists.” He still teaches in this programme, and he says that it’s very close to his heart. He’s also a member of the local council of Morges, where he lives with his wife. The day he learned that he was now a Nobel Laureate, he went to a council meeting in the evening.



Richard Henderson is originally from Scotland and has worked at the MRC Laboratory of Molecular Biology now for over 50 years. Photo: MRC-LMB

Richard Henderson attended the same EMBL conference in November 2017 as Dubochet. In a pre-dinner speech, Werner Kühlbrandt, Max Planck Director in Frankfurt, described his time in Cambridge where he received his PhD at the MRC Laboratory of Molecular Biology – and where Henderson had his own research group and later became director. “Richard never missed any of these occasions to meet and ‘have a chat’, as he would put it,” Kühlbrandt says, and “the discussions would continue at the lunch table, usually with Richard scribbling diagrams and quick calculations on the paper napkin.” Marek Cyrklaff also remembers Henderson never sitting in his office, but usually standing in the hallway and discussing projects with his colleagues.

If people now start to wonder when the LMB researchers ever finished their ground-breaking work, “the answer is very simple,” Kühlbrandt explains: “They talked to each other most of the day, but then worked doubly hard for long hours into the night.” He describes Henderson as “kind and helpful,” if strict and straight, as well as “a great optimist” – and to develop high-resolution cryo-EM, he needed to be an optimist.

Electron microscopes were believed to be suitable mostly for dead matter, because the powerful electron beam destroys biological material, and the specimens inevitably dry out in the microscope’s vacuum. Henderson started to tackle these problems in the early 1970s: He used a weaker electron beam and glucose solution to prevent the samples from drying out. In 1975, he was able to publish a low-resolution model of bacteriorhodopsin, a membrane protein. But that wasn’t good enough for him. Fifteen years later, in 1990, he succeeded in generating a three-dimensional atomic model of bacteriorhodopsin.

Richard Henderson describes himself as a ‘Scottish country lad’. When asked about “the biggest misconception” about his field of study, he replies: “That it is a boring technique rather than a minor art form.”



Joachim Frank is a German-American biophysicist. He was born in Germany during World War II, received his education in Germany, and moved to the US in the 1970s. Photo: Columbia University

Talking about art forms: Joachim Frank is not only a world-class scientist, he is also a published author. He has written numerous poems, short stories and three (to date unpublished) novels. On his website ‘Franx Fiction’, there is a selection of his published works. Under ‘Nobel Prize’, he writes how a stranger recognised him in the New York City subway and asked: “How come you still take the subway?” According to this blogpost, the most important perk for Frank is the fact that he doesn’t need to write any more review articles: “Assignments like this make sense if you want to add an epsilon increment to the chance of winning the Nobel Prize. But, as I said, I’m already here.”

In an interview with the Austrian newspaper Der Standard, published three days before the Nobel Prize in Chemistry was announced, Frank explains his motivation to write fiction: “It’s all about balance. Without my writing, I would feel very isolated. The world is such a beautiful and complex place, and science only has limited access to its wonders. Science is dominated by strict rules which preclude emotions, and I would never allow my emotions to influence my research. So, to balance out my life, I write fiction and take photographs.”

In his research, Joachim Frank developed an innovative image processing method between the mid-1970s and mid-1980s, in which an electron microscope’s fuzzy two-dimensional images of many molecules are analysed and merged to reveal a sharp three-dimensional structure. For this purpose, his team at the Wadsworth Center in Albany, New York, developed the image processing programme SPIDER. With this tool, the researchers were able to generate very detailed images of ribosomes, which Frank studied, among other macromolecular complexes, for three decades.
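Frank’s actual software had to align, classify and then combine real micrographs, but the statistical core of the idea – that averaging many noisy two-dimensional views makes the underlying signal emerge – can be illustrated with a deliberately simple sketch (the test pattern, noise level and all numbers below are synthetic inventions, not real cryo-EM data):

```python
import numpy as np

rng = np.random.default_rng(0)

# A synthetic "true" 2D projection standing in for a real
# electron-microscope image of a molecule.
size = 32
y, x = np.mgrid[0:size, 0:size]
truth = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 40.0)

def noisy_copy(image, noise_level=1.0):
    """Simulate one low-dose exposure: the signal buried in random noise."""
    return image + rng.normal(0.0, noise_level, image.shape)

# Averaging many fuzzy copies makes the structure emerge, because the
# signal adds up coherently while the random noise averages out.
n_images = 1000
average = sum(noisy_copy(truth) for _ in range(n_images)) / n_images

def rms_error(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print(rms_error(noisy_copy(truth), truth))  # a single image: mostly noise
print(rms_error(average, truth))            # much smaller after averaging
```

With noise comparable in size to the signal itself, a single simulated exposure is almost pure noise, yet the average of a thousand such exposures reproduces the underlying pattern quite faithfully – which is why collecting and merging many images of identical molecules pays off.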

In 2014, Frank was awarded the Benjamin Franklin Medal in Life Science, and a video from the Franklin Institute explains how his scientific knowledge also adds to his artistic appreciation of the world. In the video, Frank recalls how once, driving through the woods, “this idea occurred to me, that in every cell of every leaf of every tree, there are ribosomes doing this thing,” and he shows with his hands the ratchet-like motion of ribosomes that he discovered. “And it made me realise that I’m the only one who has this epiphany right now, because nobody drives around with this kind of appreciation.”

Frank, Henderson and Dubochet will meet in Stockholm on 8 December for their Nobel Lectures and two days later for the Nobel Prize Award Ceremony. They have known each other for years, and now they will share this memorable week in Stockholm.



The resolution of electron microscopes has radically improved in the last few years, from mostly showing shapeless ‘blobs’ to now being able to visualise proteins at near-atomic resolution. This is why the magazine Nature chose cryo-EM as the Method of the Year 2015. Illustration/Credit: Martin Högbom/The Royal Swedish Academy of Sciences

In Sync: Gut Bacteria and Our Inner Clock



Already in the 18th century, the astronomer Jean-Jacques d’Ortous de Mairan found that plants continue to follow a circadian rhythm even when placed in a dark room overnight, suggesting the existence of an inner clock that is independent of the perception of the environmental cues that differentiate day from night. Later, researchers found that not only plants but also other organisms, including humans, have a circadian rhythm. The 2017 Nobel Laureates in Physiology or Medicine, Jeffrey C. Hall, Michael Rosbash and Michael W. Young, deciphered the cellular mechanisms that make the inner clock tick, using the fruit fly as a model organism.

Although these findings were underappreciated at the time, we now know that our circadian rhythms exert a profound influence on many aspects of our physiology. Our inner clock regulates our sleep pattern, eating habits, hormone levels, blood pressure and body temperature at different times of the day, adapting to concurrent changes in the environment as the Earth rotates about its axis. Perturbations of the circadian clock have been linked to higher risks of cancer and cardiovascular disease. Intriguingly, recent research has shown that the adaptation to a 24-hour cycle is not restricted to species that are exposed to the drastic light and temperature changes of the day, but extends to the microscopic organisms that live deep within us.

We live in close symbiosis with trillions of microorganisms: Our microbiota plays an important role in many bodily functions including digestion, immune responses and even cognitive functions – processes that follow a circadian rhythm. The majority of our microbiota is found in the gastrointestinal tract. It turns out that these bacteria living in the depths of our gut themselves follow a circadian rhythm, and, further, that disruption of this rhythm also has negative consequences for our health. What’s more, perturbations of our inner clock affect the function of these bacteria and vice versa: the gut microbiome influences our circadian rhythm.

In 2013, French scientists demonstrated that gut microorganisms produce substances that stimulate the proper circadian expression of corticosterone by cells in the gut. Loss of bacteria from the intestine resulted in mice with several profound defects, including insulin resistance. In a particularly eye-catching study from 2014, meanwhile, researchers based at the Weizmann Institute of Science in Rehovot, Israel, including Lindau Alumnus 2015 Christoph Thaiss, observed a diurnal oscillation of microbiota composition and function in mice as well as in humans, and found that this oscillation was disturbed by changes in feeding time as well as sleep patterns, i.e., perturbations of the host circadian rhythm. As a highly relevant example, they found that jet lag, such as in people travelling from the USA to Israel, disturbed the rhythm of the microbiota and led to microbial imbalance, referred to as dysbiosis.


The bacteria in our gut also follow a circadian rhythm. Picture/Credit: iLexx/

It is not only the timing of meals that affects the circadian clocks of our resident bacteria, but also what we eat. Thus, while a high-fat, Western diet naturally has direct effects on our bodies, a proportion of these effects is also mediated by the impact that such a diet has on our microbiota, which in turn acts to alter the expression of circadian genes in our bodies and disturb our metabolism. Further, a recent study showed that bacteria in the gut, through affecting our circadian rhythms, also influence the uptake and storage of fats from the food that we eat.

The circadian clock plays a critical role in immune and inflammatory responses, and it is thought that perturbations in the circadian rhythm make the gastrointestinal tract more vulnerable to infection. It has been shown in mice that a perturbed circadian rhythm indeed affects immune responses, suggesting that the time of the day as well as circadian disruption, such as jet lag or shift work, may play a role in the susceptibility to infections. In fact, the immune response of mice to bacterial infection with Salmonella is determined by the time of day, and disruption of the host circadian rhythm may be one approach that bacteria employ to increase colonisation.

These observations highlight once more the intimate relationship that we enjoy with our gut microbiota, and the importance of circadian rhythms for us, for our bacteria and for the relationship that binds us together. The seminal findings of Hall, Rosbash and Young will likely continue to form the basis for further important insights into human health and into our relationships with other organisms. What we know already has important and intriguing implications: a better understanding of the bidirectional relationship between the circadian clock and the gut microbiota may help to prevent intestinal infections, and it may allow us to determine optimal times of day for taking probiotics or for vaccination against gut pathogens. It is also reasonable to assume that antibiotics have a markedly negative impact on the circadian clock of the gastrointestinal tract. Taken together, these findings offer a compelling scientific basis for the importance of regular sleep patterns and meal times in keeping us healthy.



Continue reading

Amazing Gravitational Wave Astronomy

The very first gravitational waves measured directly came from two merging massive black holes – of all things! Massive black holes were thought to be few and far between – and now we can ‘see’ them merge, in ‘real time’, just as the LIGO observatory becomes sensitive enough? The characteristic signal from 14 September 2015 was detected during a test run, even before Advanced LIGO started its formal observations four days later. Understandably, the researchers had to check and double-check to make sure that the signal wasn’t a secret test signal. The 2017 Nobel Prize in Physics honours this earth-shaking detection, which is the result of decades of intense research.

In a nutshell: The observatory had just been switched on, with only about one-third of its planned sensitivity – and it found an event that was expected to be extremely rare. And it gets even better: A detailed analysis of the signal revealed that the two merging black holes each had masses of more than thirty times the sun’s mass, putting them squarely in the category of massive black holes, not to be confused with supermassive black holes.



The very first detection of gravitational waves on 14 September 2015: Signals received by the LIGO instruments at Hanford, Washington (left) and Livingston, Louisiana (right) and comparisons of these signals to the signals expected due to a black hole merger event. Credit: B.P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration) CC BY-SA 3.0
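The comparison with expected signals mentioned in the caption is, at heart, matched filtering: a template of the predicted waveform is slid across the noisy data, and a strong correlation peak marks a detection. A toy version of this idea (the ‘chirp’ template, the noise level and all numbers are invented for illustration – this is not real LIGO data or analysis code) might look like this:

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy "chirp" template: a sine wave whose frequency and amplitude
# rise, loosely mimicking the waveform of an inspiralling binary.
t = np.linspace(0.0, 1.0, 2000)
template = (t ** 2) * np.sin(2 * np.pi * (20 + 60 * t) * t)

# Bury the template in random noise at a known position.
data = rng.normal(0.0, 1.0, 10000)
inject_at = 4000
data[inject_at:inject_at + template.size] += 3.0 * template

# Matched filtering: slide the template over the data and correlate.
# The correlation peaks where the hidden signal sits.
correlation = np.correlate(data, template, mode="valid")
recovered = int(np.argmax(correlation))
print(recovered)
```

Even though the injected signal is invisible by eye in the raw data, the correlation peak recovers its position. The same principle, applied with physically accurate relativistic waveform templates, is what allowed the LIGO researchers both to detect the signal and to infer the masses of the merging black holes.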


The black holes believed to be most common in the universe are so-called stellar black holes. They are the end products of massive stars that, at the end of their lifespan, explode in supernovae and then collapse into stellar black holes. The resulting object is thought to have a mass of less than twenty times the sun’s mass. Thus, the merging black holes of GW150914 were unusually large.

To complicate things further: They were also unusually close together, otherwise they couldn’t have merged. And in the less than two years since this measurement, which shook the scientific community, five black hole mergers have already been recorded – and half of these black holes were larger than twenty solar masses.

True, these findings are exciting, but they also pose some urgent questions: Do scientists need to change the theory of black holes to accommodate their large sizes, their large numbers and their proximity to each other? An interesting theory from early 2015, published before the first black hole merger signal had been detected, sketches a compelling scenario. Formulated by Madrid professor Juan Garcia-Bellido and postdoc Sebastien Clesse from RWTH Aachen University, it proposes that the universe may be crowded with black holes of various sizes – remnants of large density fluctuations during the so-called inflation phase of the Big Bang.



Nobel laureate Brian Schmidt explaining Einstein’s theory of general relativity at #LiNo14. In this famous theory, Einstein predicted gravitational waves – but never expected that they could be measured. One hundred years after this prediction, their signal was recorded by the two LIGO Observatories. Photo: Lindau Nobel Laureate Meeting/Rolf Schultes

As Nobel Laureate Brian Schmidt explains in his 2016 lecture at the 66th Lindau Nobel Laureate Meeting: According to the standard model of cosmology, the universe is thought to have expanded exponentially right after the Big Bang and “things at the quantum scales were magnified to the universal scales, quantum fluctuations were expanded to the scale of the universe.” He continues: “The magnification of the universe from the subatomic to the macro scales seems kind of crazy, but it keeps on predicting the things that we see in the universe.” This short period of rapid expansion is called inflation.

Already in 1971, the famous British physicist Stephen Hawking introduced the idea of ‘primordial black holes’. In the model of Garcia-Bellido and Clesse, these extremely old black holes would have formed in clusters – making it much more likely for surviving specimens to meet and merge. The authors even propose, in their recent article for Spektrum der Wissenschaft, the German edition of Scientific American, that these ubiquitous black holes might account for part of the mysterious Dark Matter.

The standard model assumes that the universe consists of roughly 69% Dark Energy, 26% Dark Matter and less than 5% atoms – the matter that ultimately makes up stars, galaxies, the Earth, humans and everything we know. Without this hypothetical Dark Matter, the galaxies we observe would be ripped apart: their stars orbit too fast to be held together by visible matter alone. That this Dark Matter cannot be observed directly has been accepted for several decades, but of course scientists are extremely unhappy with anything they can neither observe nor understand. (The many attempts to ‘see’ Dark Matter cannot be described here.)

What is so fascinating about the primordial black hole theory of Garcia-Bellido and Clesse is that it can be tested with current and future instruments. Finding many black hole mergers in the next few years would be a strong indicator that black holes are not few and far between but many and close together. Garcia-Bellido and Clesse conclude: “Already, the first observations show that black holes are binary more often than expected, and their masses are highly diverse.”

But there is still no proof that these black holes are primordial. The ‘smoking gun’ would be a black hole in a merger smaller than 1.45 solar masses: Below this so-called Chandrasekhar limit, no black hole can form from a stellar explosion – such an object would have to form in another process, making it more likely to be primordial. Unfortunately, many of the very small black holes are thought to have evaporated due to the so-called Hawking radiation.
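For orientation, the Chandrasekhar limit quoted above comes from balancing gravity against the degeneracy pressure of relativistic electrons; in its standard textbook form it reads:

```latex
M_{\mathrm{Ch}} = \frac{\omega_3^0 \sqrt{3\pi}}{2}
  \left(\frac{\hbar c}{G}\right)^{3/2} \frac{1}{(\mu_e m_{\mathrm{H}})^2}
  \approx \frac{5.83}{\mu_e^2}\, M_\odot
  \approx 1.46\, M_\odot \quad \text{for } \mu_e = 2,
```

where \(\omega_3^0 \approx 2.018\) is a constant from the Lane–Emden equation, \(\mu_e\) is the mean molecular weight per electron (about 2 for the carbon-oxygen composition of white dwarfs) and \(m_{\mathrm{H}}\) is the mass of the hydrogen atom.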



Artist’s impression of two merging and exploding neutron stars. Such a rare event is expected to produce both gravitational waves and a short gamma-ray burst, both of which were observed on 17 August 2017 by LIGO–Virgo. Subsequent observations with numerous telescopes confirmed that this object, seen in the galaxy NGC 4993 about 130 million light-years away, is indeed a so-called kilonova. Such objects are the main source of very heavy chemical elements like gold and platinum. Credit: ESO/L. Calçada/M. Kornmesser CC BY-SA 4.0


The Square Kilometre Array (SKA), the largest radio telescope ever, currently being built in South Africa and Australia, will look for the characteristic helium radiation from the very early universe that is expected to be found around primordial black holes. The European LISA space mission will search for the characteristic gravitational-wave background noise of merging black holes. And more missions are being planned to come to grips with the Dark Matter and Dark Energy problems, among others.

No matter whether the observed black holes are primordial or not: “If LIGO finds that large black holes are far more common than expected, they could help explain the elusive Dark Matter,” says Karsten Danzmann, Director of the German Albert Einstein Institute, which is part of the LIGO Scientific Collaboration. So even if the theory of Garcia-Bellido and Clesse is not confirmed in every detail, the Dark Matter mystery could be about to be solved.

Yes, gravitational wave astronomy is like opening a new window onto the universe, enabling researchers to finally witness binary black hole mergers. But these instruments also open new windows in other fields of astronomy: For instance, on 17 August 2017, LIGO found gravitational waves from a rare neutron star merger. The researchers immediately alerted the astronomical community, resulting in one of the largest observation campaigns ever, with 70 participating telescopes. With the help of these other instruments, the exact location of the merger could be determined – and, incidentally, another long-standing astronomical mystery was solved: The recorded gamma rays revealed that at least some gamma-ray bursts are caused by merging neutron stars. This was expected theoretically, but now researchers can finally test their theories.

In the next few years, more gravitational wave events will be observed, and they will reveal astounding details about massive objects. Moreover, other fields of astronomy will benefit on a scale that we cannot foresee today.

The Hungry Brain



Under normal, healthy conditions we eat whenever we are feeling hungry. In addition to the feeling of hunger, we also often have an appetite for a specific kind of food, and sometimes we simply crave the pleasure a certain food like chocolate or pizza may provide us. This pleasure is part of the hedonic aspect of food and eating. In fact, anhedonia – the absence of experiencing pleasure from previously pleasurable activities, such as eating enjoyable food – is a hallmark of depression. The hedonic feeling originates from the pleasure centre of the brain, which is the same one that lights up when addicts ‘get a fix’. Hedonic eating occurs independently of the gut-brain axis, which is why you will keep eating those crisps and chocolate even when you know you’re full. Hence, sayings like “These chips are addictive!” are much closer to the biological truth than many realise.

But how do we know that we are hungry? Being aware of your surroundings and/or your internal feelings is the definition of consciousness. And a major hub for consciousness is a very primal brain structure called the thalamus. This structure lies deep within the brain and constantly integrates sensory input from the outside world. It is connected to cognitive areas such as the cortex and the hippocampus, but also to distinct areas in the brainstem like the locus coeruleus, which is the main noradrenergic nucleus in the brain and regulates stress and panic responses. Directly below the thalamus, and as such also closely connected to this ‘awareness hub’, lies the hypothalamus.

The hypothalamus is a very complex brain area with many different functions and nuclei. Some of them are involved in the control of our circadian rhythm and internal clock – the deciphering of which was awarded the 2017 Nobel Prize in Physiology or Medicine. But the main task of the hypothalamus is to connect the brain with the endocrine system (i.e. hormones) of the rest of the body. Hormones like ghrelin, leptin or insulin constantly signal to your brain whether you are hungry or not. They do so via several direct and indirect avenues, such as blood sugar levels, the monitoring of energy storage in adipose cells, or secretion from the gastrointestinal mucosa.

There are also a number of mechanosensitive receptors that detect when your stomach walls distend and you have eaten enough. However, similarly to the hormonal signals, the downstream effects of these receptors take a little while to reach the brain and become (consciously) noticeable. Thus, the slower you eat, the less likely you are to over-eat, because the satiety signals from hunger hormones and stomach-wall receptors will reach your consciousness only after about 20 to 30 minutes.

Leaving the gut and coming back to the brain, the hypothalamus receives endocrine and neuropeptidergic inputs related to energy metabolism and whether the body requires more food. Like most brain structures, the hypothalamus is made up of several sub-nuclei that differ in cell type and downstream function. One of these nuclei, the arcuate nucleus of the hypothalamus, is considered the main hub for feeding and appetite control. Within it, a number of signalling avenues converge that – if altered or silenced – can induce, for instance, starvation. Major signalling molecules are neuropeptide Y, the main inhibitory neurotransmitter GABA, and the peptide hormone melanocortin. The neurons in the arcuate nucleus are stimulated by these and other signalling molecules in order to maintain energy homeostasis for the entire organism. There are two major subclasses of neurons in the arcuate nucleus that are essential for this homeostasis and that, once stimulated, cause very different responses: activation of the so-called POMC neurons decreases food intake, while the stimulation of AGRP neurons increases food intake. And this circuit even works the other way around: researchers found that by directly infusing nutrients into the stomach of mice, they were able to inhibit AGRP neurons and their promotion of food intake.
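The push-pull logic of these two populations can be caricatured in a few lines of code – a deliberately crude toy model (the set point, gains and linear responses are invented for illustration, not measured biological parameters), just to show how opposing POMC and AGRP drives could hold energy stores near a set point:

```python
# Toy model of the two opposing arcuate-nucleus populations described
# above: AGRP activity promotes food intake when energy stores are low,
# POMC activity suppresses it when stores are high.

def feeding_drive(energy_stores, setpoint=1.0, gain=2.0):
    """Return the net drive to eat: positive -> eat, negative -> stop."""
    agrp = max(0.0, gain * (setpoint - energy_stores))  # active when depleted
    pomc = max(0.0, gain * (energy_stores - setpoint))  # active when replete
    return agrp - pomc

# A depleted animal is driven to eat, a replete one is not:
print(feeding_drive(0.5))   # positive: AGRP dominates, promotes intake
print(feeding_drive(1.5))   # negative: POMC dominates, suppresses intake
```

In the real circuit both populations receive hormonal inputs such as ghrelin and leptin, and their interactions are far from linear – but the sign of the net drive (eat when depleted, stop when replete) is the essential homeostatic feature the paragraph above describes.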

Given this intricate interplay between different signalling routes, molecules and areas, it is not surprising that a disrupted balance between all of these players can be detrimental. Recent studies identified one key player that can either keep the balance or wreak havoc: the gut microbiome.


Bacteria colonising intestinal villi make up the gut microbiome. Picture/Credit: ChrisChrisW/


The gut microbiome is the entirety of the microorganisms living in our gastrointestinal tract, and it can modulate the gut-brain axis. Most of the microorganisms living on and within us are harmless and in fact very useful when it comes to digesting our food. However, sometimes this mutually beneficial symbiosis goes awry and the microbes start ‘acting out’. For instance, they can alter satiety signals by modulating ghrelin production and thereby induce hunger before the stomach is empty, which could foster obesity. They can also block the absorption of vital nutrients by taking them up themselves, thereby inducing malnutrition. A new study published only last month revealed that Alzheimer’s patients display a different and less diverse microbiome composition than healthy control subjects. Another study from Sweden even demonstrated that the specific microbiome composition occurring in Alzheimer’s patients induces the development of the disease-specific amyloid-beta plaques, thereby establishing a direct functional link between the gut microbiome and Alzheimer’s disease – at least in mice. Similarly, the composition and function of the microbiome might also directly affect movement impairments in Parkinson’s disease. In addition, there is mounting evidence that neuropsychiatric conditions such as anxiety or autism are functionally linked to the microbiome.

Moreover, even systemic diseases such as lung, kidney and bladder cancers have recently been linked to the gut microbiome. In this case, however, it is not disease development and progression that seem to be directly related to our gut inhabitants. Instead, the researchers found that if the microbiome of cancer patients had been disrupted by a recent dose of antibiotics, the patients were less likely to respond well to cancer treatment, and their long-term survival was significantly diminished. It seems that treatment with antibiotics disrupts specific components of the microbiome, which in turn impairs the function of the entire microbial community.

While it has not yet been resolved whether an altered microbiome is cause or consequence of these different afflictions, it seems certain that the microbiome is involved in far more than digestion. Hence, the already intricate gut-brain axis is further complicated by the gut microbiome, which not only affects when and what we eat, but may also determine our fate in health and disease.

Immunotherapy: The Next Revolution in Cancer Treatment

Over the past 150 years, doctors have learned to treat cancer with surgery, radiation, chemotherapy and vaccines. Now there is a new weapon: immunotherapy. For some patients with previously incurable cancer, redirecting their immune system to recognise and kill cancer cells has resulted in long-term remission, with the cancer disappearing for a year or two after treatment.


Lymphocytes attacking a cancer cell. Credit: selvanegra/


Cancer immunotherapy has been used successfully to treat late-stage cancers such as leukaemia and metastatic melanoma, and has recently been used to treat mid-stage lung cancer. Various forms of cancer immunotherapy have received regulatory approval in the US or are in the approval process in the EU. Some of these drugs free a patient’s immune system from cancer-induced suppression, while others engineer a patient’s own white blood cells to attack the cancer. Another approach, still early in clinical development, uses antibodies to vaccinate patients against their own tumours, pushing their immune system to attack the cancer cells.

However, immunotherapy is not successful, or even an option, for all cancer patients. Two doctors used FDA approvals and US cancer statistics to estimate that 70 percent of American cancer deaths are caused by types of cancer for which there is no approved immunotherapy. And patients who do receive immunotherapy can experience dramatic side effects: severe autoimmune reactions, cancer recurrence and, in some cases, death.

With such varied outcomes, opinions differ on the usefulness of immunotherapy. Recent editorials and conference reports describe “exciting times” for immunotherapy or caution to “beware the hype” about game-changing cancer treatment. Regardless of how immunotherapy eventually influences cancer care, its development marks a revolution in treatment, building on detailed biochemical knowledge of how cancer mutates and evades the immune response. Academic research into immunotherapy is also being quickly commercialised into personalised and targeted cancer treatments.


T-cells (red, yellow, and blue) attack a tumour in a mouse model of breast cancer following treatment with radiation and a PD-L1 immune checkpoint inhibitor, as seen by transparent tumour tomography. Credit: Steve Seung-Young Lee, National Cancer Institute\University of Chicago Comprehensive Cancer Center

Checkpoint inhibitors

Twenty years ago, James Allison, an immunologist at MD Anderson Cancer Center, was the first to develop an antibody in a class of immunotherapy called checkpoint inhibitors. These treatments release the immune system inhibition induced by a tumour. The drug he developed, Yervoy, received regulatory approval for the treatment of metastatic skin cancer in the US in 2011. By last year, Yervoy and two newer medications had reached 100,000 patients, and brought in $6 billion a year in sales.

In general, immunotherapy tweaks T-cells, white blood cells that recognise and kill invaders, to be more reactive to cancer cells. Tumours naturally suppress the immune response by secreting chemical messages that quiet T-cells. Cancer cells also bind to receptors on the surface of T-cells, generating internal messages that normally keep the immune system from attacking healthy cells.

One of those receptors is called CTLA-4. Allison and his colleagues blocked this receptor on T-cells with an antibody, and discovered that T-cells devoured cancer cells in mice. Since then, other checkpoint inhibitors have been developed and commercialised to block a T-cell receptor called PD-1 or its ligand PD-L1, present on some normal cells as well as cancer cells.

In the US, PD-1 and PD-L1 inhibitors have been approved to treat some types of lung cancer, kidney cancer, and Hodgkin’s lymphoma. And the types of potentially treatable cancers are growing: currently, more than 100 active or recruiting US clinical trials are testing checkpoint inhibitors to treat bladder cancer, liver cancer, and pancreatic cancer, among others.


CAR-T therapy

Another type of cancer immunotherapy, called CAR-T, supercharges the ability of T-cells to target cancer cells circulating in the blood. In August, the first CAR-T treatment was approved in the US for children with aggressive leukaemia, and regulatory approval for a treatment for adults came in October.

To produce CAR T-cells, doctors send a patient’s blood to a lab where technicians isolate T-cells and engineer them to produce chimeric antigen receptors, or CARs. These CARs contain two fused parts: an antibody that protrudes from the surface of a T-cell to recognise a protein on cancerous B-cells (commonly CD-19) in the blood and a receptor inside the T-cell that sends messages to cellular machinery. When the antibody binds to a tumour cell, it activates the internal receptor, triggering the CAR T-cell to attack the attached cancer cell.

In clinical trials, some patients treated with CAR T-cells for aggressive leukaemia went into remission when other treatments had failed. But several high-profile trials had to be suspended because of autoimmune and neurological side effects, some leading to patient deaths.

To improve the safety of CAR-T treatment, researchers are now engineering “suicide switches” into the cells, genetically encoded cell surface receptors that trigger the cell to die when a small molecule drug binds them. If doctors see a patient experiencing side effects, they can prescribe the small molecule drug and induce cell death within 30 minutes.

Other safety strategies include improving the specificity of CAR T-cells for tumour cells, because healthy B-cells also carry CD-19 receptors. To improve tumour recognition, some researchers are adding a second CAR, so that the engineered cell has to recognise two antigens before mounting an attack.


As seen with pseudo-coloured scanning electron microscopy, two cell-killing T-cells (red) attack a squamous mouth cancer cell (white) after a patient received a vaccine containing antigens identified on the tumour. Credit: Rita Elena Serda, National Cancer Institute\Duncan Comprehensive Cancer Center at Baylor College of Medicine


Neoantigen vaccines

A third type of immunotherapy aims to target mutated proteins that are a hallmark of cancer. Cancer cells display portions of these mutated proteins, called neoantigens, on their surface. Researchers are studying how to use tumour-specific neoantigens in vaccines to help the body mount an immune response targeted at the cancer.

Results from two recent small clinical trials for patients with advanced melanoma suggest that neoantigen vaccines can stop the cancer from growing, or in some cases, shrink the tumours with few reported side effects. But it’s too early in clinical development to know if the vaccines will extend the lives of cancer patients.

There are two steps to making a neoantigen vaccine: first, identify mutated proteins unique to most of a patient’s cancer cells and second, identify portions of those proteins that could most effectively stimulate an immune response.

To identify mutated proteins, researchers sequence the genome of cancer cells and compare it to the sequence in healthy cells. Next, they identify which mutations lead to the production of altered proteins. Finally, they use computer models or cellular tests to identify the portions of proteins that could be the most effective neoantigen.
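The first of these steps can be illustrated with a toy sketch. The function name, the example sequences, and the 9-residue window length are illustrative assumptions; real pipelines start from sequenced DNA, not protein strings, and rank candidates with MHC-binding predictors rather than simply enumerating windows.

```python
def candidate_neoantigens(normal: str, tumor: str, k: int = 9):
    """Toy illustration: list the k-mer peptides from a tumour protein
    that span a point mutation relative to the matched normal protein.
    (Hypothetical helper; real tools work from sequencing data.)"""
    assert len(normal) == len(tumor), "sequences must be aligned"
    # positions where the tumour protein differs from the normal one
    mutated = [i for i, (a, b) in enumerate(zip(normal, tumor)) if a != b]
    peptides = set()
    for pos in mutated:
        # every k-mer window that still contains the mutated residue
        for start in range(max(0, pos - k + 1), min(len(tumor) - k, pos) + 1):
            peptides.add(tumor[start:start + k])
    return sorted(peptides)

# Hypothetical 12-residue protein with one somatic mutation (A -> P)
normal = "MKTAYIAKQRQI"
tumor  = "MKTAYIPKQRQI"
print(candidate_neoantigens(normal, tumor))
```

Each returned 9-mer contains the mutated residue and is therefore absent from the healthy proteome, which is what makes such peptides candidate neoantigens worth testing further.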

This last step, predicting neoantigenicity, is the most challenging part of developing a new neoantigen vaccine. Lab experiments to confirm the activity of multiple neoantigens are time-consuming, and current computer models to predict antigenicity can be inaccurate because they have been validated against only limited data.

A few principles of cancer biology also make developing neoantigens for long-lasting treatment difficult. Some cancers may have too many mutations to test as potential neoantigens. Cancer cells also continue to mutate as tumours grow, and some cells may not display the neoantigens chosen for a vaccine. Finally, cancer cells may naturally stop displaying antigens on their surface, as part of their strategy for evading an immune response.

However, identifying neoantigens can still be useful for cancer biomarkers. And if used in a vaccine, neoantigens may be most effective in combination with other drugs: a few patients in the small trials whose cancer later relapsed responded to subsequent treatment with a checkpoint inhibitor.

Cancer has been a common topic in Nobel Laureates’ lectures at many Lindau Meetings. Learn more about these lectures, as well as Nobel Prize winning research related to cancer, in the Mediatheque.