Francesco Bailo. The Canberra Times. 22 December 2025.

The following is a pre-print and unedited version of the article published by the Canberra Times.


In the hours following the Bondi Beach shooting on 14 December, people in Australia and around the world turned to digital platforms (social media and search engines, and more recently AI chatbots) seeking information about what had happened. In Australia, social media is the second most popular source of news after TV and as popular as online news sources. This is a routine response to crisis events: the public reaches for the nearest source of information, expecting answers.

What they encountered instead was a familiar pattern of chaos. As news outlets have since documented, misinformation spread rapidly across platforms in the immediate aftermath. This is not a failure unique to this crisis, nor is it simply a matter of bad actors exploiting a tragedy. It reflects a structural vulnerability in how information circulates during crises, one that emerges from the interaction between public demand, digital platform infrastructure, and the time required for verification.

When a crisis strikes, the public presents a cup and demands it be filled. The size of the cup is determined by the scale and nature of the event. A terrorist attack at one of Australia's most iconic locations, during a Hanukkah celebration, is an enormous cup.

This creates an “epistemic gap”: the space between what the public urgently wants to know and what authoritative sources can responsibly confirm. Journalists verify before publishing. Officials coordinate before speaking. These processes take time. But the cup is already there, demanding to be filled.

Digital platforms are the infrastructure engineered to fill that cup as quickly as possible, drawing from an almost infinite supply of content. Their algorithms are attuned to thirst, not truth. They do not ask whether information is verified, only whether it is engaging.

This marks a fundamental difference from news media. When a major event occurs, news organisations can increase their output. They can reassign journalists, extend bulletins, and publish more frequently. But their capacity remains finite, constrained by the number of reporters they employ, the verification processes they follow, and the editorial judgments they make about what is ready to publish. Their credibility, and their jobs, depend on following those processes scrupulously. A newsroom cannot produce more content than its staff can responsibly create.

Platforms face no such constraint. Traditionally, they did not produce content; they distributed it. And the pool of content available for distribution is essentially limitless, encompassing everything from verified reporting to unsubstantiated claims to deliberate fabrications. Social media platforms have cultivated vast networks of paid creators, numbering in the millions, who are incentivised to produce content that maximises engagement rather than accuracy. When a crisis generates a surge in attention, recommender systems signal demand, and creators race to meet it. With the integration of generative AI, digital platforms have gone further still: they now directly produce content on demand, generating answers to user queries in real time. In the Bondi case, X both distributed misinformation and, through its AI chatbot Grok, generated it. When public attention surges, platforms can always meet the demand.

These dynamics are not new, but digital infrastructure has transformed their consequences. Michael Golebiewski and danah boyd coined the term "data voids" to describe search terms for which little high-quality content exists, making them vulnerable to exploitation by those who rush to fill the vacuum. The same logic applies to digital platforms during crisis events. When public attention surges around a breaking story, the information environment resembles a freshly created void: demand is immediate, authoritative supply is scarce, and the infrastructure will surface whatever content is available.

Research I conducted with colleagues examined Australian Twitter activity during the 2019–2020 bushfires and the early months of the COVID-19 pandemic. We found that these conditions produce measurable information disorder: a proliferation of competing sources and a decline in the share of content from authoritative outlets such as news organisations and government agencies. During COVID-19, accounts sharing low-credibility content that had been peripheral during the bushfire crisis moved to more central positions in the information network, achieving influence comparable to professional journalists. The epistemic gap had widened, and they filled it.

Importantly, this was not primarily the result of coordinated or automated behaviour. The accounts we studied appeared to be genuine users, acting independently but benefiting from an information environment that rewarded speed over accuracy. The problem is not just bad actors. It is an infrastructure that systematically advantages whoever can pour fastest when the cup appears.

The public appears to recognise this dynamic. According to the 2025 Digital News Report from the University of Canberra, only 36 per cent of Australians who get their news mainly from social media trust the information they receive, the lowest level among all news sources. Yet in moments of crisis, when demand for information is most acute, these remain the channels many people check most often, because every time they hit refresh, new content appears at the top of their screens.

Generative AI has added a new dimension to this problem. Creating plausible news content at scale no longer requires significant resources. Research from the Citizen Lab at the University of Toronto has documented networks of at least 123 fake local news websites generated at scale, cross-referencing each other to create an appearance of legitimacy. The barriers to filling epistemic gaps with fabricated content have fallen considerably.

More concerning still is what happens when AI-generated misinformation is then ingested by other AI systems. In the Bondi case, false claims originating from a fake news website were subsequently repeated by AI chatbots when users asked basic questions about the event. These systems, possibly not instructed to distinguish between authoritative and non-authoritative sources, simply surfaced what was available and potentially engaging for their audiences.

The problem is structural, which suggests that responses must be too. Digital platforms that benefit from surges in attention during crises bear responsibility for the information disorder that results. Rather than monetising the traffic that accompanies public tragedy, they could implement mechanisms to slow the flow of unverified information when events are still unfolding and communicate uncertainty more transparently, acknowledging when information remains unconfirmed rather than presenting answers with false confidence.

Such measures would not eliminate misinformation. The demand for information during crises is real and legitimate, and the epistemic gap cannot be closed. But platforms could do more to acknowledge that gap rather than obscure it, and take responsibility for how they fill it.

Francesco Bailo is a Senior Lecturer in the School of Social and Political Sciences at the University of Sydney and deputy director of the Centre for AI, Trust and Governance.