This submission to the 2025 Review of the Australian Code of Practice on Disinformation and Misinformation recommends a fundamental reconceptualisation that distinguishes individual-level harm from exposure to dangerous content from societal-level harm caused by systemic degradation of information quality. The report proposes that platforms remain accountable for preventing individual harm through traditional content moderation whilst contributing to a novel persona-based ecosystem monitoring system that addresses collective harm. This system would measure information quality in real time by defining representative user personas (average Australians, female teenagers, elderly users) and tracking their content exposure across critical topics (elections, health information, financial products) in both incidental and intentional browsing modes. The submission recommends retaining coverage of both disinformation and misinformation with differentiated interventions, and shifting transparency reporting away from quantitative content-removal metrics towards platform contributions to persona-based monitoring, policy evolution, and API-enabled data access for independent researchers. The report strongly supports an ecosystem approach requiring coordinated responses from multiple actors, recommending strengthened laws governing truth in political advertising, the elimination of dark patterns and unfair trading practices, and updated legislation addressing AI-generated deepfakes. The framework aims to protect epistemic rights (individuals’ rights to access sufficient information for informed decision-making) whilst respecting freedom of expression and acknowledging that platforms are one actor within a complex information system.
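The persona-based monitoring described above amounts to measuring information quality across three dimensions: persona, topic, and browsing mode. A minimal illustrative sketch of that data model follows; the persona, topic, and mode labels and the `quality_score` field are hypothetical choices for illustration, not terms defined in the submission.

```python
from dataclasses import dataclass
from collections import defaultdict
from statistics import mean

# Hypothetical labels mirroring the dimensions named in the submission:
# representative personas, critical topics, and browsing modes.
PERSONAS = ["average_australian", "female_teenager", "elderly_user"]
TOPICS = ["elections", "health_information", "financial_products"]
MODES = ["incidental", "intentional"]

@dataclass
class ExposureRecord:
    persona: str          # which representative persona saw the content
    topic: str            # critical topic the content relates to
    mode: str             # incidental vs intentional browsing
    quality_score: float  # 0.0 (misleading) to 1.0 (reliable); assumed rater-assigned

def aggregate_quality(records):
    """Mean information-quality score per (persona, topic, mode) cell."""
    cells = defaultdict(list)
    for r in records:
        cells[(r.persona, r.topic, r.mode)].append(r.quality_score)
    return {cell: mean(scores) for cell, scores in cells.items()}

records = [
    ExposureRecord("female_teenager", "health_information", "incidental", 0.4),
    ExposureRecord("female_teenager", "health_information", "incidental", 0.6),
    ExposureRecord("elderly_user", "financial_products", "intentional", 0.9),
]
print(aggregate_quality(records))
```

Aggregating per cell rather than per platform reflects the submission's shift away from content-removal counts towards ecosystem-level quality measurement: a regulator could track each cell over time rather than tallying takedowns.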