Published 16 October 2025 by Karsten Lemm
Security in the Age of Artificial Intelligence
Paul Romer is no fan of chatbots. Who knows what ChatGPT and similar artificial intelligence models might say about him? That Romer is an American economist at Boston College, best known for his work on economic growth and innovation, which earned him the 2018 Nobel Memorial Prize in Economic Sciences, shared with William D. Nordhaus – this much they will probably get right. But beyond that? What about his opinions, his lectures, his essays? How can anybody be sure that Romer actually said or wrote what generative AI systems claim he did?
“Increasingly, all of our messages are going to get sucked up by Meta and Google and OpenAI and Microsoft”, Romer told his audience at the 8th Lindau Meeting on Economic Sciences. “And it’ll get munged around with a bunch of other stuff, and then people will get back, ‘Well, here’s what Romer thought about innovation.’”
The answer might be factually correct – or full of errors: AI systems of this type are known to fabricate information, a consequence of gaps in their training data and the statistical way they generate text. Such “hallucinations”, as they’ve become known, keep plaguing even the most advanced Large Language Models, the algorithms powering ChatGPT and its competitors.
This prompted Romer to call AI systems of this kind “a disaster”, making him wish that their development could be undone. “We should have strangled them in the crib”, he said. “And I’m worried about how you get a true message across as a scientist.”
AI’s Twin Security Threats
Romer’s anger is a sign of cracks appearing in the foundation of the digital world we’ve come to know – and trust. A vast ecosystem of technologies and service providers ensures that we typically don’t have to worry about whether online banking is secure, whether our text messages stay private, or whether our favourite shopping website really belongs to our favourite shop rather than to an imposter.
But technological advances are threatening the status quo. Artificial intelligence in particular worries computer scientists and economists alike, as automated disinformation campaigns and “deepfakes” sow doubt about what is still real, and who can be trusted. At the same time, AI adoption by companies large and small is creating widespread insecurity about potential job losses.
There is no easy fix, but there are measures that policymakers and businesses can take to address the growing anxiety over AI, as several talks at #LINOecon 2025 showed.
Digital Proof: It’s Really Me!
Signatures and seals have been used for centuries to verify the authenticity of documents and protect against fraud. The main difference between ancient times and today is technology. In early Mesopotamia, clay tablets were stamped with cylinder seals that left a unique impression identifying the owner. Today, digital signatures guarantee that documents are genuine and haven’t been tampered with.
The basic principle relies on public-key cryptography, also called “asymmetric” cryptography, which uses a pair of lengthy, seemingly random strings of letters and numbers. Both are the result of complex mathematical calculations and act as “keys”. One key is made public; the other is known only to the creator of the document, who uses it to vouch for the document’s authenticity.
“The secret key is the important part here”, Romer explained in his lecture. “If you know my secret key, you can pretend to be me, and I’m in deep trouble.” The public key, however, “is the corresponding part that I can show anybody.” It works only in combination with Romer’s secret key and verifies that it was truly him who signed a certain document. This could be an e-mail, a Word file or a blog post published on his website. “If you check the signature, and the following equation holds, then you’re most of the way towards knowing: Romer said this”, he explained. “And nobody’s changed a bit since he said it.”
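Romer kept his lecture at the conceptual level; as a hedged sketch of the round trip he describes, here is how signing and verification might look in Python with the widely used cryptography package and the Ed25519 signature scheme (the message, variable names and tampering scenario are illustrative assumptions, not Romer’s actual setup):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The signer generates a key pair: the secret key stays private,
# the public key can be posted anywhere (a website, a GitHub repo).
secret_key = Ed25519PrivateKey.generate()
public_key = secret_key.public_key()

# Illustrative message, not an actual Romer statement.
document = b"Here is what Romer said about innovation."
signature = secret_key.sign(document)

# Anyone holding the public key can now check the signature.
try:
    public_key.verify(signature, document)
    print("Verified: the document is authentic and unaltered.")
except InvalidSignature:
    print("Rejected: the document or signature was tampered with.")

# Changing even a single bit makes verification fail.
tampered = document.replace(b"said", b"never said")
try:
    public_key.verify(signature, tampered)
except InvalidSignature:
    print("Rejected: even a one-word change breaks the signature.")
```

The behaviour mirrors Romer’s point: the signature verifies only against the exact bytes that were signed, so any edit by an intermediary, however small, is detected.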
For maximum security, the public key must come from a trustworthy source – that’s why Romer posted it on the contact page of his website as well as his GitHub repository. These two independent sources should give recipients confidence that the public key is genuinely his, Romer argued. “This is a really powerful way to take back control from all the intermediaries”, he added. “Somebody can change this message, but it won’t verify if they change even a single bit – and it’s over.”
Economists in particular should get familiar with this form of data protection, Romer suggested – not just to prevent fraud, but to prepare for a future in which coding skills will be as essential to their job as knowing math. “I think it’s inevitable that code becomes as important a domain-specific language that we use to communicate with each other as math has already become”, Romer said. Consequently, documents would increasingly contain code, and that could become “a huge security risk”, he argued. “The new equilibrium should be that you don’t even open the file unless it’s signed and you know who the sender is.”
Growing Challenges
In the world of Yael Tauman Kalai, trust is hard to come by. A gifted mathematician since her childhood in Israel, the MIT researcher has found her talent to be particularly useful in the field of cryptography. In 2022, Tauman Kalai received the prestigious ACM Prize in Computing for her contributions to more efficient methods of verifying the correctness of computations. This has become increasingly important in the world of big data, cloud computing and AI, as she explained in her #LINOecon Heidelberg Lecture, the Heidelberg Laureate Forum guest lecture at the Lindau Economics Meetings.
“Verification is really hard – especially today where systems are growing in complexity”, Tauman Kalai said. “The computations we are doing are huge, the data that we’re dealing with is massive, and verification became really, really costly and slow.” This is a challenge for the digital economy, which relies on data being encrypted and protected from attackers. Every transmission, every transaction requires proof that no information has been compromised.
Yael Tauman Kalai’s talk illustrated in detail how the worldwide community of mathematicians and cryptographers found a way to simplify the process through a method called succinct proofs. This makes it possible to show beyond doubt “that the result of some massive computation is correct” without having to rerun the computation itself, the researcher explained. “What we can do”, she said, is condense cryptographic proofs “to very easy to verify, succinct certificates of correctness.” The concept has seen rapid adoption in the blockchain and cryptocurrency ecosystem, for example, which is built on peer-to-peer transactions that need to be verified with minimal effort.
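The proof systems Tauman Kalai helped develop are far too involved for a short snippet, but one common building block, the Merkle tree, conveys the flavour of “short certificate, cheap verification”: a single 32-byte root commits to an arbitrarily large dataset, and any entry can be shown to belong to it with a certificate that grows only logarithmically in the data size. The Python sketch below illustrates that idea only; it is not her actual constructions:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(leaves: list[bytes]) -> list[list[bytes]]:
    """Hash the leaves pairwise, level by level, up to a single root."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2 == 1:          # duplicate the last node if odd
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels: list[list[bytes]], index: int) -> list[tuple[int, bytes]]:
    """Collect the sibling hashes on the path from one leaf to the root."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2 == 1:
            level = level + [level[-1]]
        path.append((index % 2, level[index ^ 1]))
        index //= 2
    return path

def verify(root: bytes, leaf: bytes, path: list[tuple[int, bytes]]) -> bool:
    """Recompute the root from the leaf and its short certificate."""
    node = h(leaf)
    for is_right, sibling in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

# Commit to a million entries with one 32-byte root ...
data = [f"record {i}".encode() for i in range(1_000_000)]
levels = build_tree(data)
root = levels[-1][0]

# ... then prove membership of any entry with only ~20 hashes.
certificate = prove(levels, 123_456)
assert verify(root, data[123_456], certificate)
```

Genuine succinct proofs go further still: they certify not merely that data is unaltered, but that an entire computation over it was carried out correctly, while keeping the certificate short and cheap to check.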
A New Adversary
The MIT researcher, meanwhile, has already moved on to other challenges. The first is quantum computers, which could break the most widely used forms of encryption once they are powerful enough. “In the last several years, we’ve been all working really hard to upgrade all our cryptographic schemes to be based on what we call post-quantum assumptions”, Tauman Kalai said. “This requires moving from number theory, such as the hardness of factoring, to problems that are based on very different types of mathematics.”
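Tauman Kalai named no specific schemes in her talk; as one illustration of such “very different types of mathematics”, here is a toy Lamport one-time signature in Python – a classic construction whose security rests only on a hash function, not on the hardness of factoring. (Standardised post-quantum schemes, such as NIST’s lattice-based ML-DSA and hash-based SLH-DSA, are far more sophisticated; this sketch is purely illustrative.)

```python
import hashlib
import secrets

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def keygen():
    # Two random 32-byte secrets per bit of a 256-bit message digest.
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
    pk = [(h(a), h(b)) for a, b in sk]   # publish only the hashes
    return sk, pk

def digest_bits(message: bytes) -> list[int]:
    d = h(message)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(sk, message: bytes) -> list[bytes]:
    # Reveal exactly one of the two secrets for each digest bit.
    return [pair[bit] for pair, bit in zip(sk, digest_bits(message))]

def verify(pk, message: bytes, signature) -> bool:
    return all(h(s) == pair[bit]
               for s, pair, bit in zip(signature, pk, digest_bits(message)))

sk, pk = keygen()
sig = sign(sk, b"post-quantum greetings")
assert verify(pk, b"post-quantum greetings", sig)
assert not verify(pk, b"post-quantum greetingz", sig)
# Caveat: each key pair must sign only ONE message; reuse leaks secrets.
```

Forging a signature here requires inverting the hash function, a problem believed to remain hard even for quantum computers – which is why the idea survives in hash-based post-quantum standards, while factoring-based schemes do not.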
That effort has already yielded success: several post-quantum encryption standards are now available. The bigger issue is “the new kid on the block, AI”, Tauman Kalai told the audience. “I’m actually terrified with the release of this technology”, she admitted. “We don’t know how to verify that it’s doing what it’s supposed to – that it’s safe.”
In a backstage conversation with journalists, she described how she sees AI through the lens of a security researcher. “Cryptographers are used to thinking adversarially. We always think there’s an adversary that tries to attack our system.” Artificial intelligence, given its current lack of transparency, should be seen in the same way, she argued.
“We need to think of it as an adversarial entity for the sake of protecting [ourselves] in case it behaves maliciously”, Tauman Kalai said. One example is hallucinations, even if the AI does not deliberately try to mislead users. “I think of it as malicious because it gives me the wrong answers”, Tauman Kalai explained. “So I want to protect the users, to make sure that they understand they’re getting incorrect information.”
Her solution resembles Paul Romer’s approach. “What I would like to have”, she said, is a certificate that allows the user’s device to “verify that this AI answer is correct, and if not, put something like a red X on it, so people can know, ‘Okay, it was hallucinating.’”
The Big Divide
Simon H. Johnson addressed another concern that many observers share about AI: “What exactly will the effects of artificial intelligence be on jobs?”, the MIT economist asked in his lecture “Technology and Global Inequality in the Age of AI”. Answering this question is a focus of Johnson’s current work with his colleague Daron Acemoglu. Both received the 2024 Sveriges Riksbank Prize in Economic Sciences, along with James A. Robinson of the University of Chicago.
“We do think AI is going to have transformative effects”, Johnson said. “We do think it’s going to change who gets what kind of job, and I think there’s an enormous field to be built on understanding that and measuring that.”
The main concern the MIT researchers have is that AI might be used mostly to eliminate jobs, which could lead to “a lot more job market polarisation”, he warned. “People with less formal education, and in less advantaged parts of the world, are probably going to struggle. So, the better off do well, the less well-off feel a lot of pressure.”
In the United States, five decades of digital transformation have already had this effect. Between 1963 and 2017, men and women with a college degree saw their earnings grow, while men who dropped out of high school, in particular, lost ground, as David Autor – a collaborator of Johnson and Acemoglu – showed in his paper “Work of the Past, Work of the Future”.
Should AI increase this divide, social tensions might rise to the point where democracy itself is in danger, Johnson worries. “Will any of us have a democracy in our countries?”, he asked. “Will the world have a democratic, deliberative process? Or will we have a lot more polarisation and a lot more anger?”
Playing with Galaxies
When the economist had a chance to discuss his concerns with Sam Altman, the CEO of OpenAI, he got the impression that the developers of this transformative technology might not be willing to accept the consequences of their actions. “I said, ‘Sam, what will happen to the jobs? What happens to the workers in your maximal version of AI?’”, Johnson recounted. “And he said, ‘Simon, don’t worry about that. You and I will be playing with galaxies. We will all be gods.’”
This, to Johnson, was a sign of hubris and an “unwillingness to grapple with some of the harsh realities” that the world could face if artificial intelligence was used primarily to eliminate jobs and maximise profits. “Technology is always a choice”, Johnson emphasised. “How we develop technology, who gets to use technology, what kind of jobs you get from technology – these are choices we make at the social level.”
Any disruptive technology that can automate tasks will inevitably result in the displacement of workers, the economist noted. “The key point though is, does the technological transformation also create new tasks for humans, particularly tasks that require expertise?”
When Henry Ford introduced the moving assembly line to car production, which used to be a largely artisanal effort, he did exactly this, Johnson pointed out. The production process became much more efficient, making cars cheaper to produce and to buy. At the same time, Ford’s innovation gave millions of workers a chance to earn “good wages doing things that humans, for the most part, had not done before”, Johnson said.
“It’s the creation of the new, that’s what we need to be pursuing”, he demanded. “That’s what we need the tech industry to think about; that’s what we need public policy to focus on.”
Automation was inevitable, Johnson argued, but designing AI systems that would benefit everyone, including people with little formal education, required determination and effort – something currently lacking. “I think the very real danger is that the way in which AI is being envisaged and designed and implemented by the tech industry is exactly going to reinforce these polarising trends”, Johnson warned.
Where do humans still have an advantage over algorithms? For now at least, it’s in our ability to “think outside the box”, the economist believes. “It’s being able to do things that humans and machines have never done before”, he told journalists at the Meeting. Since current AI systems typically learn from large amounts of training data, they are by definition backward-looking rather than innovative, Johnson reasoned.
“I think AI could easily lead to stagnation”, he said. “The way that humans make progress is pluralism, is allowing more voices to speak up and listening to more perspectives, more different views.” Often these views come from outside the mainstream – before the majority embraces successful fringe ideas as the best way forward.
Comic books and science-fiction novels once lived outside the mainstream as well. Johnson is a fan of both. He himself writes science fiction because “it forces you to think about the holistic system – if technology does this and this, how will that affect social structure? How will that affect what people believe, how will that affect the economy?” And he has already turned his best-selling book Power and Progress, co-authored with Daron Acemoglu, into a mini comic book, free to read online. “Consider turning whatever work you do, your books, your papers, whatever, into graphic novels”, Johnson told his colleagues in the audience. “It’s great fun. And I can recommend some wonderful people to do it.”
“People.” Not algorithms.