Submission to the Productivity Commission: Harnessing Data and Digital Technology & Building a Skilled and Adaptable Workforce
The Centre for AI, Trust and Governance has challenged the Productivity Commission’s approach to AI regulation and data sharing, arguing that current proposals overlook critical issues of power, competition, and genuine trust.
Trust Is More Than Security
The Commission frames trust narrowly as “secure and responsible data handling.” Drawing on Professor Terry Flew’s research on Mediated Trust, our submission argues that public trust requires fairness, transparency, accountability, and governance structures that respect diverse interests—not just security.
A Changed Innovation Environment
Unlike the open, decentralized early internet, modern AI is dominated by closed, proprietary systems controlled by a few large companies with vast data advantages. Permissive regulation and data sharing will not, on their own, foster Australian innovation; without attention to competition and interoperability, they may simply entrench existing tech monopolies.
When Power Dynamics Matter
Our submission uses rental housing as a case study. The Commission suggests tenancy data sharing could benefit consumers, but University of Sydney research shows tenants lack meaningful choice about what data they share. In tight housing markets, renters cannot realistically negotiate the terms on which their data is collected and used. Automated assessment systems built on banking data cannot anticipate changes in employment or reliably judge a tenant's ability to pay rent. Data sharing in contexts of power asymmetry can entrench disadvantage rather than empower consumers.
Key Recommendations
Australia needs regulatory architecture that:
- Addresses market concentration and enables genuine competition
- Invests in local AI research that pursues pathways beyond the dominant closed, proprietary systems
- Supports careful, context-specific deployment
- Builds adequate regulatory capacity (Australia’s Privacy Commissioner has a fraction of comparable regulators’ budgets)
- Maintains alignment with international privacy frameworks
The responsible growth of AI depends on getting these fundamentals right—not just moving fast and hoping trust will follow.
Reference: Flew, T., Chesher, C., Hutchinson, J., Stilinovic, M., Bailo, F., Gray, J., Lumby, C., Stepnik, A., Goggin, G., & Humphry, J. (2023). Safe and responsible AI in Australia: Submission paper. https://ses.library.usyd.edu.au/handle/2123/31527