AI and disinformation

As a journalist, I’ve spent years reporting on artificial intelligence. I’ve traveled to four continents to interview headline-making AI luminaries and unsung researchers doing vital work, as well as ethicists, engineers, and everyday people who have been helped or harmed by these systems. Along the way, I’ve been a journalism fellow focused on AI at Columbia, Oxford’s Reuters Institute, and the Mila-Quebec AI Institute.

And still, I find myself unsettled. Headlines about AI in journalism swing between clickbait panic and sober alarm. They can feel speculative, even sci-fi—but also urgent and intimate:

“Your phone buzzes with a news alert. But what if AI wrote it—and it’s not true?” an editor at The Guardian wrote.

“It looked like a reliable news site. It was an AI chop shop,” two reporters at the New York Times wrote.

“News sites are getting crushed by Google’s new AI tools,” two reporters at the Wall Street Journal wrote.

Misinformation is hardly a modern invention, but with AI as an amplifier, it now spreads faster, adapts smarter, and arguably hits harder than before. This surge comes as independent journalism—the traditional counterweight to falsehood—faces economic decline, shrinking newsrooms, and eroding public trust.

Every time a falsehood is shared in outrage or belief, it signals demand, and the information marketplace may respond with even more invented nonsense. On the supply side, bad actors who leverage misinformation to widen societal and political divides have emerged as “the most severe global risk” in the years ahead, according to a 2024 World Economic Forum Global Risks Report. You may or may not agree with that last statement. But most can agree: AI is disrupting the world’s information ecosystem.

So, it bears asking: Is AI-enabled fake news a problem of supply or demand? And who has the power to diminish or increase it?

On supply. Before the internet era, mounting a fake-news operation was resource-intensive, relying on either deep pockets or an army of low-paid workers. Today, generative AI folds that overhead into relatively cheap keystrokes, and the result is, as some researchers warn, a “chain reaction of harm” with potential to deepen public health crises, hinder disaster responses, and undermine democracy.

NewsGuard, a firm that rates the reliability of online sources, has a count that is sobering: By May 2025, more than 1,200 AI-generated news and information sites with seemingly legitimate names—such as iBusiness Day and Ireland Top News—were publishing in 16 languages, churning out false claims and operating with little to no human oversight. That represented more than a 20-fold jump in AI-generated “news” sites over the preceding two years.

But fringe sites of dubious quality are not the only concern. Prominent news organizations—including the Washington Post, Politico, and The Guardian—have linked to stories written by chatbot-run outlets that have little regard for accuracy, according to the New York Times.

Some legacy journalistic outlets have also used AI in ways that blur journalistic standards. Bloomberg debuted AI-generated news summaries this year—a move that required issuing dozens of corrections. Technology outlet CNET and Gannett, the largest US newspaper chain, have also experimented with using AI to write news stories, resulting in embarrassing errors. Some such efforts claim to have good intentions, as when hyperlocal news outlet Hoodline sought to leverage AI to deliver stories in news deserts. But the publisher’s bots were given human personae—via fake headshot photos and human-seeming bios—and the experiment largely served to erode public trust in journalism.

Meanwhile, the push alerts on our phones are also experiencing AI-enabled trouble. Apple’s iPhone news alerts, decorated with logos from the BBC and New York Times, sometimes summarize those outlets’ works—but with hallucinated details added. Apple’s suggested remedy? Let falsehoods stand but clarify that the summaries come from AI, which can often be wrong or simply make things up.

“It just transfers the responsibility to users who, in an already confusing information landscape, will be expected to check if information is true or not,” Vincent Berthier, head of Reporters Without Borders’ technology and journalism desk, told the BBC.

Boston University economist Marshall Van Alstyne asks me to picture the information sphere as the sky above a soot-stained mill town. Misleading headlines and posts billow like pollution from a factory; yet instead of urging the factory to cut down on its emissions, town managers hand out gas masks and hope every passerby remembers to breathe through the filter.

The metaphor frays, Van Alstyne concedes. Smoke can be reduced with taxes or fines—what economists call a “centralized” solution. In the information ecosystem, such a centralized solution would bump up against expectations and laws concerning free speech. That’s why Van Alstyne and his team are researching decentralized solutions—the only other option, according to economic theory—that incentivize accuracy in information landscapes. Picture a marketplace of ideas where honesty earns interest and exaggeration or lies exact a cost.

“We should be able to reduce the flow of misinformation with no censorship at all and no central authority judging truth at all,” Van Alstyne said. “This means not government; not a powerful individual like [Elon] Musk or [Mark] Zuckerberg; not even I as a designer could bias the solution, because it’s totally decentralized. That’s the goal.”
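The article does not spell out the mechanics of Van Alstyne’s design, but the core intuition—staking credibility on claims so that honesty compounds and falsehood is costly—can be sketched in a few lines. Everything in the toy model below (the stake sizes, the payout rate, how claims get resolved, the accuracy rates of the two posters) is an illustrative assumption, not his actual system.

```python
# Toy sketch of a decentralized accuracy incentive: "honesty earns interest,
# lies exact a cost." All mechanics here are illustrative assumptions.
from dataclasses import dataclass
import random

@dataclass
class Poster:
    name: str
    credibility: float = 100.0   # reputation points that can be staked

@dataclass
class Claim:
    poster: Poster
    stake: float                 # credibility the poster puts at risk

def resolve(claim: Claim, turned_out_true: bool, reward_rate: float = 0.1) -> None:
    """Settle a claim: accurate posts earn 'interest'; false ones forfeit the stake."""
    if turned_out_true:
        claim.poster.credibility += claim.stake * reward_rate
    else:
        claim.poster.credibility -= claim.stake

honest = Poster("careful_outlet")
chop_shop = Poster("ai_chop_shop")

for _ in range(50):
    # Assumed accuracy rates: the careful outlet is right ~95% of the time,
    # the AI content farm ~30% of the time.
    resolve(Claim(honest, stake=10), turned_out_true=random.random() < 0.95)
    resolve(Claim(chop_shop, stake=10), turned_out_true=random.random() < 0.30)

print(f"{honest.name}: {honest.credibility:.1f}")       # tends to drift upward
print(f"{chop_shop.name}: {chop_shop.credibility:.1f}")  # tends to collapse
```

The point of the sketch is only that no central arbiter is needed: each claim settles against its own stake, so reliable sources accumulate standing while habitual fabricators price themselves out.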

But progress on solutions to the misinformation tsunami has stalled. In January 2025, a Trump administration Presidential Action, allegedly aimed at protecting free speech, prompted the National Science Foundation to halt grant funding for research projects that have “the goal of combatting misinformation, disinformation, and malinformation.” Van Alstyne’s team lost its federal lifeline.

“We were at the level of empirically testing whether this works, and so far our preliminary tests suggest that it does,” Van Alstyne said. “But we need funding to continue the research. We’re still working on trying to do as much as we can on zero funds.”

Supply-side reformers reach for sturdier guardrails around production. Some push to strengthen adherence to journalistic standards, including truthfulness and accuracy. Others urge platforms to hold misinformation super-spreaders accountable with “clear, transparent, and consistently applied policies that enable quicker, more decisive actions and penalties, commensurate with their impacts,” as an Aspen Institute report recommends. The Defense Advanced Research Projects Agency (DARPA) funds forensic systems that spot inconsistencies in deepfakes, hoping to stay a step ahead of the forgers. Yet the game feels rigged in favor of the bad actors, as every new falsehood detector becomes another pattern to dodge.

“If you’re trying to detect fake news or misinformation,” Van Alstyne sighs, “then there’re always ways to detect the detector and avoid detection.”

Also, an exclusive focus on the technology that enables fake news suggests that the problem is tractable, Matthew Leake at the Reuters Institute observes. That distracts from larger questions about why people are willing to accept information at odds with the truth.

On demand. According to researchers writing in the journal Nature last year, three common misperceptions cloud the debate over AI-enabled fakery: that average citizens drown in it; that platform algorithms are largely responsible for this exposure; and that social media alone is the cause of broader social problems such as polarization. Instead, exposure to false and inflammatory content tends to be limited to a narrow fringe whose members have strong motivations to seek out such content, these researchers argue.

Similarly, those who bypass high-quality news that adheres to journalistic standards—truthfulness, accuracy, objectivity, and impartiality—in favor of fake news often have low trust in institutions or hold strong partisan views, researchers found in a study of American social media users who shared more than 500,000 news story headlines.

And during the 2024 elections—when more than 2 billion people voted worldwide, in many contests—AI-generated false content “did not fundamentally change the landscape of political misinformation,” according to Princeton University researchers who conducted an analysis of cases in the WIRED AI Elections Project.

Collectively, these studies suggest that AI-enabled mis- or disinformation largely succeeds with those who already agree with the broad intent of the false message—and often leaves others unconvinced.

“Increasing the supply of misinformation does not meaningfully change the dynamics of the demand for misinformation, since the increased supply is competing for the same eyeballs,” the Princeton researchers wrote after considering empirical evidence. Sacha Altay, who co-authored an article in the Harvard Kennedy School’s Misinformation Review about “overblown fears” concerning the impact of generative AI on misinformation, agrees.

“People are not very gullible,” Altay said. “They turn to mainstream news that they trust to learn about the world.”

When I asked whether botched AI experiments at traditional news organizations shoulder some blame for the rise of misinformation, Altay’s co-author Felix Simon offered the press some grace—and a valuable reminder about some democratically elected powerbrokers.

“News organizations do make errors. They cite someone who’s maybe less trustworthy than another source… In some cases, an individual journalist didn’t do due diligence. It’s also the case, unfortunately, that news organizations give voice to public figures who are less than trustworthy… That’s the biggest problem when it comes to misinformation writ large. It is not necessarily AI. It’s not necessarily some fake news website. It’s very powerful people or politicians who willingly make false and misleading statements who are cited or given space in traditional news media … in some cases for good reasons. These voices have the right to be heard and ultimately people should decide. But that also enables someone who’s very powerful, someone like US President Donald Trump, to voice things that then get reported on as news which are just blatantly false.”

From this perspective, teaching people to navigate a sea of misinformation may prove more effective than playing whack-a-mole with every new AI-generated headline. In other words, shrink demand. The Knight Commission on the Information Needs of Communities in a Democracy urges exactly that, recommending that schools “embed digital and media literacy” in their curriculums and that libraries and community centers become hubs for adult learning.

“There are particular reasons around identity, around partisanship, around socialization that make people demand misinformation, and that’s usually concentrated in a very particular subset of people,” Simon said. “While generative AI makes it possible to create misinformation at scale, you don’t need all that to serve the existing need some people have for false information.”

Understanding why people seek out or fall for misinformation can illuminate which populations are most vulnerable and how falsehoods spread, and that can inform interventions. Educating news consumers to engage more critically with content is one approach. They can learn to ask: Is the source credible? Why did the source share it? Am I willing to learn something that challenges what I want to believe?

Platform design also plays a role. YouTube’s information panels, Facebook’s third-party fact-checker labels, and WhatsApp’s forwarding limits all aim to add friction and offer users a cognitive check before believing or sharing content. Still, research studies that focus on exposure often measure just that—exposure, Van Alstyne cautions.

“That’s a different question from, ‘Did you vote differently?’” he said. “Small margins tip elections in close cases.” Van Alstyne emphasized that the harms of online misinformation often play out in the real world, often with grave consequences. Misleading narratives can fuel political violence, disrupt public health responses to pandemics, enable exploitation, and derail climate policy. The stakes, in other words, are not just informational; they can be existential.

How to disrupt the misinformation feedback loop. Even if the root of the misinformation problem lies in human psychology or cultural division, the tech-enabled system that responds to—and reinforces—impulses to engage is accelerating. Every share becomes fuel for algorithms that learn not what is true, but what is sticky. Misinformation thrives because people unknowingly train systems to serve them more of it. In that sense, AI does not just respond to demand for misinformation. It sharpens it. It shapes it. Increasingly, it manufactures it.

And so news consumers are left in a positive feedback loop: the more people engage with misinformation, the more persuasive and personalized it becomes. The loop is indifferent to whether it began with human bias or machine suggestion. What matters is how fully the loop sustains itself.
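As a rough illustration of that loop, consider a toy recommender that optimizes only for clicks. The click probabilities below are invented for the sketch, and no real platform’s algorithm is being modeled; the point is only that a system rewarded for engagement will, over time, serve more of whatever is sticky.

```python
import random

# Assumed click probabilities, invented purely for illustration.
CLICK_RATE = {"sober_report": 0.05, "outrage_falsehood": 0.15}

shows = {item: 1 for item in CLICK_RATE}    # times each item was served
clicks = {item: 1 for item in CLICK_RATE}   # clicks received (optimistic start)

def recommend() -> str:
    # Greedy choice: serve whichever item has the higher observed click-through rate.
    return max(CLICK_RATE, key=lambda item: clicks[item] / shows[item])

for _ in range(10_000):
    item = recommend()
    shows[item] += 1
    if random.random() < CLICK_RATE[item]:
        clicks[item] += 1

total = sum(shows.values())
for item, n in shows.items():
    print(f"{item}: served {n / total:.0%} of the time")
```

Nothing in the loop checks whether the stickier item is true; engagement alone closes the circuit, which is the sense in which every share trains the system to supply more of the same.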

We’ve been here before. In the climate crisis, positive feedback loops have played a starring role: Arctic ice melts, exposing darker ocean beneath, which absorbs more heat, which melts more ice. Warming releases methane from permafrost, which accelerates warming. But alongside these physical processes, cultural and psychological loops took hold. As evidence of crisis mounted, so too did resistance fueled by identity, ideology, and the comfort of denial. The more people dismissed the science, the easier it became to keep dismissing it—not because the facts changed, but because belief had hardened into social bond.

In response, climate scientists have built models to simulate emissions and sea-level rise—technical tools for technical problems. But they also learned—are still learning—to navigate emotional terrain: the fear, denial, and political polarization that stalls action. On the supply side, interventions such as carbon taxes, clean energy transition, and global accords target the emissions themselves. But on the demand side, many are working to erode the appeal of climate denialism. Scientists have turned outward to speak directly to the public. Journalists have come to see that facts alone are not enough; stories must reach not just minds, but emotions. Activists work to meet people not just with evidence, but with empathy. Climate literacy campaigns, youth movements, and citizen assemblies are all attempts to shift the narrative, to make truth feel not just urgent, but livable. The lesson is this: No feedback loop breaks without pressure on both the supply and demand ends.

What would it mean to approach our polluted information ecosystems in the same way?

AI-enabled misinformation, like climate denialism, is not merely a failure of facts. Regulation can help slow the supply. But unless we address the demand side—the human hunger for stories that soothe or confirm—we will be fighting a shadow. Combating misinformation means designing friction not only into share buttons but also into human psyches. A human pause, a question, or a trace of doubt can keep the loop from closing too quickly—or from accelerating.

As with the climate, we are not merely victims of runaway algorithmic processes. We are their engineers, their accelerants, and their possible correctives. People do not abandon misinformation because they are corrected. They abandon it when they are invited into something better—a more coherent narrative, a more trustworthy source, or a place where identity and truth are no longer at odds.

That’s the work ahead. Not only slowing the supply, but softening the demand. Not just telling the truth, but making it a place where people can live.

Source: https://thebulletin.org