Published 4 July 2021 by Andrei Mihai
Artificial Intelligence Can Change the World, but It’s up to Us to Use It for Good
To say that the past decade has been a good one for computing would be a big understatement. We’re undergoing a revolution in computation and algorithms, and we’re already starting to see this technology applied in a multitude of fields, from healthcare to agriculture. Given its growing importance, AI couldn’t be absent from the schedule at LINO70.
In the past decade or so, Machine Learning (ML) and Artificial Intelligence (AI) have gone from relatively obscure research fields to near-ubiquity.
Jeffrey Dean has been a key figure in this development, which is why he was awarded the 2012 ACM Prize in Computing. Dean is the head of Google AI, the artificial intelligence division of one of the major companies driving innovation in the field. He opened his lecture with an intriguing comparison to show just how much the field has taken off: machine learning research is growing faster than processing power itself (the trend described by Moore’s Law).
This explosive growth has enabled a massive range of applications. Companies like Google have been incorporating AI into their day-to-day operations for the better part of a decade, says Dean, but that’s just the tip of the iceberg.
Dean mentions an example from the Netherlands, where farmers are using machine learning to track the fitness of their cows and check whether their behavior signals a problem. In fact, machine learning can be put to great use in agriculture, especially in the developing world. With a single phone app, for instance, farmers can photograph their crops, find out which disease the plants are suffering from and get a recommended treatment.
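To get a feel for how such an app could work under the hood, here is a minimal sketch of the core step: a pretrained image classifier adapted to spot crop diseases in a photo. The model weights, disease labels and treatment advice below are hypothetical placeholders for illustration, not the actual app Dean described.

```python
# Minimal sketch: classify a crop photo with a fine-tuned image model.
# "crop_disease_resnet.pt", the labels and the treatment table are all
# hypothetical placeholders; a real app would ship its own trained model.
import torch
from torchvision import models, transforms
from PIL import Image

LABELS = ["healthy", "leaf_rust", "bacterial_blight"]  # hypothetical classes
TREATMENTS = {
    "leaf_rust": "apply a fungicide",
    "bacterial_blight": "remove and destroy infected plants",
}

# Standard ImageNet-style preprocessing for the phone photo
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50()
model.fc = torch.nn.Linear(model.fc.in_features, len(LABELS))  # new output head
model.load_state_dict(torch.load("crop_disease_resnet.pt"))    # hypothetical weights
model.eval()

image = preprocess(Image.open("leaf_photo.jpg")).unsqueeze(0)  # the farmer's photo
with torch.no_grad():
    prediction = LABELS[model(image).argmax(dim=1).item()]
print(prediction, "->", TREATMENTS.get(prediction, "no action needed"))
```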
AI can also do some pretty wild things, like taking text input and producing image output. For instance, Dean mentions, you can ask some algorithms to synthesise an image of a giraffe in a funny hat “and the system will produce a lot of images of giraffes wearing funny hats.” Simply put, algorithms are getting better and better at understanding different types of inputs. OpenAI has demonstrated a similar capability with its own text-to-image algorithm.
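For readers curious what such a text-to-image interface looks like in code, here is a rough sketch using the open-source Hugging Face diffusers library. The library, the model checkpoint and the need for a GPU are assumptions made purely for illustration; this is not the specific system Dean or OpenAI demonstrated.

```python
# Rough illustration: turn a text prompt into an image with an open-source
# text-to-image model (assumed tooling, not the system shown in the lecture).
import torch
from diffusers import StableDiffusionPipeline

# Example checkpoint name; assumes a CUDA-capable GPU is available.
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# The model synthesises an image from the prompt, much like the giraffe example.
image = pipe("a giraffe wearing a funny hat").images[0]
image.save("giraffe_in_funny_hat.png")
```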
Open Science
Part of the beauty of the field is that it is so democratic and open. Machine learning has developed a vibrant open-source community (the importance of open science was discussed in a separate session), and many researchers prefer to upload their work to open repositories like arXiv rather than submitting it to conventional (and often paywalled) journals. This approach seems to have worked wonders – everywhere you look, ML is taking strides.
“We are on the cusp of something we have been dreaming about for decades – fully autonomous vehicles,” says Dean. “Robotics is also something that will be making significant advances in the next few years because we can sense the world around us much better.” Dean says one such robot is able to learn how to pour liquids into a cup just by watching videos (although it’s only at the precision level of a four-year-old for now).
In healthcare, the potential is also tremendous. While algorithms won’t replace clinicians anytime soon, they can help them make better decisions and reduce their workload, for instance by analyzing medical imagery or large sets of patient data. No longer confined to the lab or to online companies, ML seems ready to take on the real world.
“Deep neural networks and machine learning are helping make headway on some of the world’s grand challenges,” says Dean. “The last decade has really shown remarkable progress in a number of different areas: computers can now perceive the world, see around them, and that has dramatic implications.”
That’s exactly why some people are a bit worried.
The other side of the coin
“I am in awe of what AI and machine learning have achieved, but also concerned that they don’t always work the way we expect them to,” says Vinton Cerf, who is widely regarded as one of the fathers of the internet. The panel discussion also featured Bernhard Schölkopf, a leading researcher in the machine learning community, Nobel Laureate Michael Levitt, PhD candidate Marco Eckhoff, doctoral researcher Dina ElHarouni and postdoctoral researcher Eleni Karatza.
For many researchers, ML is essentially a time machine, says ElHarouni. Tasks that used to take a researcher two or three weeks can now be automated and completed not just faster, but at a fraction of the cost.
In healthcare, the area Karatza focuses on, AI and ML are particularly good at finding connections and variability between patients – something that is especially valuable in pediatrics, Karatza notes. Michael Levitt agrees: “Medicine is a large subject and AI is great at dealing with very large subjects. I think the next big thing is gonna be AI in medicine.”
But for all its potential, there are also concerns and threats surrounding the emergence of AI. For starters, while AI is excellent at finding correlations, it cannot help us understand and interpret them – and as we all know, correlation does not imply causation. “They cannot ascribe any meaning to the results,” Levitt comments. Secondly, AI works with real-life data, which means it can inherit real-life biases, and even accentuate them (or produce biases of its own). This is why, both Karatza and Dean explained, special measures have to be taken to address these problems. Clinical trials, for instance, are designed by large and diverse teams, but it’s not always clear whether this is sufficient to rule biases out.
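A toy example makes the bias problem concrete: when one group is heavily under-represented in the training data and behaves differently, a model that looks accurate overall can be systematically wrong for that group. Everything below is synthetic and purely illustrative.

```python
# Toy illustration of data bias: a classifier trained mostly on group A
# scores well overall but fails on the under-represented group B.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Group A: 900 samples where the outcome follows the feature directly.
x_a = rng.normal(size=(900, 1))
y_a = (x_a[:, 0] > 0).astype(int)

# Group B: only 100 samples, and the relationship is reversed.
x_b = rng.normal(size=(100, 1))
y_b = (x_b[:, 0] < 0).astype(int)

X = np.vstack([x_a, x_b])
y = np.concatenate([y_a, y_b])

model = LogisticRegression().fit(X, y)  # trained on the pooled, skewed data

print(f"overall accuracy: {model.score(X, y):.2f}")
print(f"group A accuracy: {model.score(x_a, y_a):.2f}")
print(f"group B accuracy: {model.score(x_b, y_b):.2f}")  # near zero
```

The headline accuracy of roughly 90 per cent hides the fact that the model gets the minority group almost entirely wrong – exactly the kind of hidden bias the panelists warned about.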
But the societal implications and potential threats of AI extend far beyond that. Vint Cerf foresees “a lot of interesting court cases” in the coming decade, especially with autonomous cars around the corner. Schölkopf also sees problems of a different nature.
“For me, the most dramatic change has been that my best students used to go to academia. Now, all my best students want to go to the top companies.”
“I think we need to change something about that,” he adds.
All participants stressed the importance of not just using AI to aid society but using it in a way that is responsible and sustainable. How can that be done? According to Dean, it’s up to all of us to ensure that AI is used for good.
“I’m concerned about autonomous weapons – an application that is pretty scary. We need to establish societal norms for what kind of things it could be used for; we can draw a societal line and say this is not what we want,” Dean concludes. “I think it’s up to all of us to collectively make sure that this is a tool that is used in the best possible ways.”