As AI systems become more powerful and accessible, we’re witnessing a surge in unintended, fringe uses that current regulatory frameworks struggle to keep pace with. In our new article with Milly Stilinovic and Jonathon Hutchinson in New Media & Society, and its companion piece in The Conversation, we introduce a concept we call the Undersphere.

What is the Undersphere?

We define the Undersphere as a networked space of creative experimentation that exists outside formal markets and institutions — places like GitHub, Reddit (e.g., r/StableDiffusion), and Hugging Face. Here, AI isn’t just consumed; it’s remixed, repurposed, and pushed in unexpected directions. These communities aren’t necessarily political, but their outcomes often are.

A notorious example is the rapid rise of deepfake pornography, which emerged not from corporate labs but from creative subcultures operating in these loosely regulated, experimental spaces. Today, more than 98% of all deepfake videos online are pornographic.

What’s the Risk?

The risks posed by the Undersphere aren’t just technical — they’re democratic. Generative AI used in these contexts challenges core principles like consent, privacy, and truth. Yet existing regulatory responses (like the EU AI Act) are structured around intended use, with little room to manage the messier, more creative misuse of these tools.

We argue that risk is not static or predictable. Generative AI’s diffusion across social media and developer communities creates complex, cascading, and systemic risks — exactly the kind of challenges traditional frameworks struggle to contain.

Climate Governance as a Model for AI

To better respond to these challenges, we propose that AI governance take inspiration from climate governance — which has evolved to deal with uncertain, distributed, and long-term risks. That means:

  • Shifting from rigid regulation to adaptive governance
  • Accepting uncertainty and incomplete knowledge as a feature, not a bug
  • Focusing on networks of risk, not isolated use cases
  • Engaging diverse voices to anticipate unintended harms

This approach, we argue, is more likely to protect the public and the democratic fabric of our societies than reactive bans or usage-based categorisations.

From LOAB to Legislation

One striking case we analyse is the viral AI creation LOAB — a haunting, unintended output of text-to-image models, discovered and shared through Reddit. The community around LOAB exemplifies the Undersphere: highly creative, technically proficient, yet operating entirely outside institutional oversight.

When these practices intersect with social platforms and public discourse, the results can be transformative or catastrophic — and governance must prepare for both.