RESPONSE TO THE PRODUCTIVITY COMMISSION INTERIM REPORT ON HARNESSING DATA AND DIGITAL TECHNOLOGY
Warwick Holt, Media Empire
September 15, 2025
I welcome the opportunity to provide a response to the Productivity Commission’s Interim Report on Harnessing Data and Digital Technology.
I’ve come from a background in programming, web design and IT and have an M.Sc. from the University of Melbourne, majoring in Applied Mathematics and researching chaos theory.
For the last 20 or so years I have been a working screenwriter and have written for over 1900 episodes of TV. I was Head Writer for The Project and have written for shows including Good News Week and The Glass House, as well as writing screenplays, many online news and satirical articles and contributing to books. I currently run Media Empire, a business specialising in creative development, writing and production across film, television and advertising.
This background straddling technology and the creative arts has given me an unusual perspective on the development, usage and dangers of Artificial Intelligence. I am driven to respond to the Interim Report due to my concerns that Australia stands on the cusp of going down a road regarding AI that, rather than improving Australia’s standard of living, may instead embrace the philosophy of US AI giants, complete with their worst features: dubious ethics, neglect of safety, leeching off culture and undermining humanity.
I am hopeful that Australia can instead choose a path for AI development that embraces and enhances our own culture, that creates tools which are of benefit to both our productivity and our society.
The unique challenge of AI
I have always embraced using the tools of technology, indeed I have thrilled at the way technology has enabled an explosion of extraordinary cultural content, particularly in the world of film and media.
However, generative and agentic AI are fundamentally different forms of technology from those that have come before. Unlike every previous technological revolution, the results from these LLMs are unpredictable and chaotic, and their operation is designed to be beyond human understanding, which makes comparisons to innovations such as the printing press, the steam engine or even the computer chip superficial and misleading. AI technology is not “just a tool”, and reckless, slipshod or misguided evolution of that technology brings enormous risks. Sam Altman himself compares the work of OpenAI to developing the atomic bomb.
While AI is already contributing to many Australians’ working lives, we have already seen a long list of practical negative impacts of AI, including but not limited to:
- unreliable and inaccurate responses and “hallucinations”;
- dangerously persuasive mis- and disinformation;
- massive environmental impacts;
- redundancies and job displacement;
- systemic biases and prejudices;
- deepfakes including sexually exploitative material;
- targeted manipulation;
- de-skilling and degradation of creativity, critical thinking, and cognitive ability;
- mental health crises, even suicides.
What AI means today is vastly different from how we understood it two years ago, and further rapid paradigm shifts should be expected. In time, AI will likely become at least as dominant a part of our lives as the Internet is today. These present and future challenges should be approached deliberately, interrogated rigorously, and, where appropriate, strongly regulated.
Australians’ sense of social responsibility – the legendary “fair go” – should be the guiding light for our decision-making around AI. Like education, like health, like defence, AI is too important a part of the future of our society to be left for the market to resolve on its own.
Australia’s response to AI
Australia could position itself as a leader in the AI world. But in contrast to what many in the AI industry suggest, this doesn’t have to be through trying to beat massive US-based companies at their own game. Regardless of our Australian ingenuity, that is a game rigged in favour of whoever has the most money and resources; in the West, we already know where the winners in that world will come from.
The unfettered capitalism favoured by Silicon Valley is producing chaotic and destructive results. Ethics, safety and the public good are being neglected in this technological arms race. The companies embody the “move fast and break things” philosophy, seeking primarily to maximise profit and speed of development, and seemingly more than willing to exploit and even endanger people to those ends.
It seems a poor approach to emulate even if the goal is as narrow as improved productivity, and more so if we want AI to be an enhancement to our country rather than a toxic parasite.
There is more than one way to skin the robot. For instance, the EU has introduced the AI Act, and we know China is not letting the market decide its important decisions around AI. There is potentially a lesson in the way that the DeepSeek LLM was able to produce comparable results to ChatGPT at a fraction of the development price and with far smaller training libraries.
In a time when the culture of US politics and the tech industry has taken a distinct turn towards the immoral, protecting Australia’s unique cultural voice and perspective is more important than ever.
The Australian government has shown its willingness to stand up to the tech industry over social media access for children. AI is an even more important issue on which Australia should take a moral stand and help establish international principles that prioritise AI ethics over profiteering. Indeed, AI technologies present at least as great a threat to Australia’s children as social media, one which will only grow going forward.
I propose Australia set out to have our country build world’s best practice AI, grounded in ethics, and with the buy-in of our people and cultural institutions.
We should approach AI in a highly regulated way specifically designed to reflect Australia’s interests and Australians’ needs and wants – an approach that reflects Australia’s values and enhances our culture here and abroad, and that is designed to improve the lives of citizens rather than enrich billionaires.
We should use government and independent oversight to build purposeful AI that Australians will want to use in preference to international LLMs such as ChatGPT or Grok – technologies that countries sharing our values will see as an ideal worth buying into.
Utilising our unique ingenuity to build AI smarter would result in a true increase in Australian productivity, the Australian way.
Response to Information Request 1.1
a) Are reforms to the copyright regime (including licensing arrangements) required?
The existing Copyright Act makes it clear that the use of Australian IP to train AI models requires permission and compensation. Licensing is the fair and sustainable solution for how to provide IP for AI training. Licensing markets for AI are already being developed and are continuing to evolve; it is clearly not just possible but practical to develop AI that is trained on licensed material and respects copyright.
As noted in the Interim Report, thus far the big global AI companies have chosen to ignore copyright laws across the world, forcing copyright holders to pursue litigation. This is a problem created by AI companies and their unjustified appropriation of material to which they hold no rights. Any attempts to modify the copyright regime should target the entities who are breaching it, not those whose rights are being violated.
The Interim Report notes that work is currently underway to determine additional regulatory measures for an effective copyright enforcement regime in light of the Attorney-General’s Copyright Enforcement Review. It also notes that collecting societies are working to streamline licensing arrangements by being able to act on behalf of multiple copyright holders.
Keeping copyright policy jurisdiction with the Attorney-General’s Department is the best approach, reflecting the legal basis of copyright.
Australia should look to create a fully transparent system where the value of IP in the context of AI use is explicit and agreed to, where creatives can opt-in to providing their materials. There certainly can be improvements to the mechanisms by which licensing takes place, but they should be determined in consultation with Australia’s creative community.
I support calls from the Australian Writers’ Guild (AWG), Media Entertainment and Arts Alliance (MEAA), Australian Society of Authors (ASA), Copyright Agency and others for the Australian Government to introduce standalone AI legislation that requires AI developers, as a condition of doing business in Australia, to:
- Disclose all data sources for AI training, so that copyright holders can confirm whether their work is included.
- Disclose copyright works used for AI training (and for what purpose) to minimise copyright infringement and/or bias.
- Obtain consent from creators for use of copyright material in AI training.
- Give credit (attribution) to creators for use of their work.
- Pay reasonable compensation to creators for use of their work in AI training.
- Identify Indigenous Cultural and Intellectual Property (ICIP) and comply with cultural protocols before making use of such material for AI training.
- Remove from their systems any existing content for which they do not have a licence.
- Clearly label AI-generated content, so that it is not confused with genuinely human-created works.
b) How would an exception covering text and data mining affect the development and use of AI in Australia? What are the costs, benefits and risks of a text and data mining exception likely to be?
I am alarmed at the proposal to amend the Copyright Act to allow Australia’s culture to be taken out of the hands of human beings and assigned without permission or payment to machines in the control of any tech entity. As a member of the Australian Writers’ Guild and Screen Producers Association I stand with them and allied creative bodies in strongly opposing the introduction of a Text and Data Mining exception to copyright.
Draft Recommendation 1.2 of the Interim Report calls for AI-specific regulation to be considered only as a last resort, yet no “last resort” has been reached that would justify this piece of AI-specific regulation – one that benefits the AI industry alone.
Productivity Costs & Risks
It is unclear and unexplained what productivity benefits will be delivered by this change to legislation which operates on the supply side of LLMs.
The umbrella term “AI” covers a vast array of different types and aspects of machine learning. To consider AI as a single entity with an implied uniform productivity impact is a massive oversimplification which papers over the distinctions not only between different model types (such as between medical, research or other targeted task-specific models and LLMs like ChatGPT, Claude and similar systems), but also between inputs and outputs.
The difference between training a model on original IP and the end use of that model is equivalent to the difference between mining ore and driving a car.
The productivity impacts of model training and end usage must not be conflated, and the regulatory frameworks for each need to be considered separately. The Interim Report’s estimates of productivity impact (around which I note the “considerable uncertainty”) do not distinguish between these very different fields, and yet it seems likely that even if we accept the uncertain premise of an overall productivity benefit, the bulk of improvements in productivity will come from end usage of AI rather than model training.
And yet the key legislative proposal suggested only applies to training, even while stating that it is unclear whether it would result in any change at all to Australian training of large AI models. There is therefore no justification for opening up this back door for cultural theft.
AI companies are shameless in stating that their business models only work if the costs of AI are socialised and the benefits privatised, in the process transferring the ability to be compensated for culture from the original creators to them, at no cost and without consent.
For instance, in its response to the Interim Report, the Australian Institute for Machine Learning says both that ingesting “the order of 1 trillion words” of copyrighted content is essential for the construction of LLMs that will enable the theoretical flourishing of “extremely profitable” new companies of which we can’t yet conceive, based on “AI-enabled business models” – and yet also that “the cost of the transaction required to pay a creator for their content will be far greater than the value of the content itself”.
This outrageous statement says it all. Even when talking about mythical companies, they propose that the fantasy profit has to be built on free and unapproved provision of cultural content, with no regard for the impact on the culture itself. No price is small enough.
Creatives should be expected to give away all their knowledge, skills and expression, all of their labour, all of their productivity, for the tech industry to exploit.
The net result of this is not likely to be productivity improvements, but serious productivity and cultural harms, putting at risk the $63.7 billion (2.5 percent) of GDP contributed by Australian cultural and creative industries.
Cultural Costs & Risks
The existing fair dealing exceptions for copyright (research or study, criticism or review, parody or satire, reporting news, and enabling a person with a disability to access the material) all enhance the cultural and creative landscape, allowing for Australian citizens to more freely create and critique. In contrast, this proposed exception is extractive, diminishing human culture to benefit tech companies and unregulated content-spewing human-impersonating machines with no demonstrated broader productivity improvement.
Indeed, a Text and Data Mining exception will likely lead to a hollowing out of Australian culture, undermining authentic Australian voices and placing Australia at a competitive disadvantage in our own cultural landscape.
Our culture comes from the stories, art and voices of the full diverse community of Australian human beings. Our cultural sector has in recent years undergone an overdue reckoning regarding authenticity of storytelling – of the need for representation when telling stories grounded in a culture. This is codified in the policies of government organisations such as the ABC and Screen Australia.
This is particularly clear in the case of the cultures of our Indigenous and First Nations peoples, but it also applies to the lived experience of people of a specific gender, race, sexual identity or ability. More fundamentally, we all understand that authentic Australian stories, songs and artworks are created by Australian people.
AI systems act in direct contravention of these principles, for there is no culture that an LLM text, image or film generator can authentically represent (beyond arguably an actual AI character). Any representation that an IP-ingesting AI makes of a specific culture is built on the exploitation of that culture. Such usage should only be made with explicit permission and appropriate, agreed-to compensation.
This is all part of a “steal first, ask for a copyright exception later” approach to Intellectual Property and culture that a TDM exception would endorse.
The more power we grant these programs to generate our cultural content, the more we give up the power of artists and creatives to communicate authentically from our shared humanity and our unique backgrounds, instead choosing a highly-efficient digital simulacrum whose outputs resemble those of humans as a stand-in for creativity.
Given that there is no provision preventing overseas users of AI from freely making use of models trained on Australian IP, a TDM copyright exception creates a transfer of Australian property to the international marketplace. Indeed, it means that any person around the world using such a model would be as able to generate material in a specific or generic “Australian voice” as an Australian user. Anyone around the world would be legally free to generate the next Peter Carey novel, Midnight Oil song or Aboriginal dot painting.
Any automation benefit is incurred by all users of the generative AI system, no matter where they are based. The original Australian cultural creators are penalised, and the value of their work debased.
The vast majority of Australian creatives have always struggled to make a living; this exception would eat further into their ability to do so. It’s not fanciful to imagine an artist or writer having to compete with any number of simulated versions of themselves, which have been legally allowed to be trained on their life’s work for no compensation and without their permission, the profits from these ersatz works flowing not to them but instead to tech companies and parasitic users.
Legal Costs & Risks
The Interim Report quotes a 2013 Australian Law Reform Commission report on the case for a TDM exception. That report was written a decade before the emergence of ChatGPT and similar LLMs and clearly does not consider generative AI. The quoted statement that “data and text mining should not be infringement because it is a ‘non-expressive’ use” does not appear to apply to LLMs, which are explicitly used to create expressive materials based on copyrighted input.
Similarly, the rundown of “Text and data mining around the world” in Box 1.7 references regulations which were brought in prior to the AI boom. These exceptions were not framed with generative AI in mind, and to bring in such a regulation in the current environment would be a radical legal reform.
The consideration that “large AI models are already being trained on unlicensed copyrighted materials” is no justification – it is perverse reasoning to suggest that because laws have been broken, the laws need to be changed.
The companies behind major LLMs are currently facing dozens of major legal challenges for copyright infringement and at the time of writing we have just seen the largest copyright recovery in history when Anthropic agreed to pay authors $1.5 billion and remove their works from training data.
Giving a copyright exemption to AI companies would effectively be handing Australian-generated material to international companies built on its theft simply because “the genie is out of the bottle”. To do so is to effectively give in to the blackmail of a cartel of tech titans and to further entrench their already disproportionate power.
I note that it is hardly an impediment for global tech companies to establish a shell company in Australia to take advantage of the copyright exception – to do their copying of Australian material within Australia’s boundaries. If Australia’s copyright is weakened to the extent that this is the most cost-effective way to proceed, this is a very possible outcome, with no benefit to Australians.
The report notes that the exception would not be a ‘blank cheque’, yet it is not clear why any cheque should be written at all. Existing copyright mechanisms are able to be leveraged for AI usage and the focus should be on enabling that.
Suggesting that any use would also have to be ‘fair’ doesn’t take into account the behaviour of the AI industry thus far. The tech titans have already shown that fairness is not a consideration in whether they ingest materials. Further, the burden of proof of unfairness and the accompanying legal cost will inevitably shift to the original IP creators.
The Interim Report calls for “proportionate AI policy”, yet such a carve out of copyright seems to be hugely disproportionate. In no other industry would a blanket exemption be granted for international companies and overseas users to freely exploit Australian material for their own profit. That the material in this instance would be the actual culture of this country is doubly galling.
Response to Draft Recommendation 1.1
Productivity growth from AI will be built on existing legal foundations. Gap analyses of current rules need to be expanded and completed.
& Draft Recommendation 1.2
AI-specific regulation should be a last resort
I welcome the steps by the Australian Government to assess gaps in the regulatory framework posed by AI and agree with the recommendation that this should occur as soon as possible. Making the most of current legislation and regulation is clearly of benefit in ensuring AI development proceeds in a way that is consistent with Australia’s current legal system.
However, considering AI-specific regulation only as a “last resort” is an unhelpful framing – for by the time a last resort is reached, we may have missed the opportunity to legislate. Looking at AI regulation primarily in terms of minimising the “burden” on private companies prioritises the economic desires of a small group of AI businesses over the social imperatives of users and Australians more broadly.
While it is a rapidly changing landscape, it is the government’s role to take social interests into account and regulate in a way that mitigates the risks of this technology and protects people from harm.
It is true that many of the risks of AI are variants on existing issues that have existing approaches to tackling them. But there are risks of AI that are unique to AI, both because AI systems are trained on such an enormous quantity of IP, and because the way the systems arrive at outputs and decisions is poorly understood.
Some AI-specific regulation regarding copyright, as proposed by the creative and cultural bodies, is listed above. A quick brainstorm raises other questions that may require AI-specific regulation – most likely just the tip of the iceberg:
- What underlying ethical boundaries should be encoded in AI systems and what penalties should apply if an AI is found to breach them?
- Who is responsible if criminal behaviour is committed by an AI agent?
- Is there any accountability if an AI provides misleading information in critical settings?
- How can government ensure that Australian AI systems minimise systemic bias?
- What rights if any should an AI entity be entitled to?
- How does legislation deal with a person who leaves their estate to an AI partner?
The International AI Safety Report 2025 concluded:
“Since the impact of general‑purpose AI on many aspects of our lives is likely to be profound, and since progress might continue to be rapid, there is an urgent need to work towards international agreement and to put resources into understanding and addressing the risks of this technology. Constructive scientific and public discussion will be essential for societies and policymakers to make the right choices.”
It is clear that going into such discussions pre-determined to avoid regulation would limit the ability to create international frameworks for the benefit of all.
Response to Draft Recommendation 1.3
Pause steps to implement mandatory guardrails for high-risk AI
High-risk AI is not a prospect to be taken lightly. Beyond the many practical risks including those identified above, many key AI developers themselves, including two of the three so-called “godfathers of AI”, Yoshua Bengio and Geoffrey Hinton, have stated that AI is a serious existential threat to our species. Hopefully this is overly pessimistic, though with the underlying “thought processes” of AI systems being poorly understood, people can only provide guesses of their “probability of doom” or “p(doom)”. In that context, “moving fast and breaking things” is both foolhardy and morally wrong.
It is understandable to want to deal with outcomes-based regulations, but it is important to understand that it is not just in the outcomes that risk exists in the world of AI. Rather than pausing the implementation of mandatory guardrails until gaps in regulation are identified, the prudent course is to do the reverse – implement mandatory guardrails and only relax them when it is shown that other existing regulations can adequately cover the risks.
Guardrails are essential to the development of ethical AI, particularly in a world where foundational AI companies have shown scant regard for ethics, as exemplified around issues of copyright. The guardrails provide transparency and can be used to give the public confidence that we are implementing international best practice, such as that being done in the EU.
Conclusion
Looking at AI regulation through the lens of productivity is a very narrow focus, given the very real and unaddressed questions of public safety, national security and ethical deployment of machine intelligence that are recognised around the world. It is an area where far more government oversight is called for, rather than less, and where high risks should be taken seriously rather than brushed off as a cost of doing business at the behest of the tech industry and to the detriment of Australia’s cultural capital.
I support calls for the Australian government to:
- Work with international organisations and partners to create regulatory AI frameworks that prioritise human interests.
- Mandate guardrails that prioritise ethics, transparency and safety.
- Carefully regulate the development and use of AI in Australia to prioritise safety and ethics.
- Ensure that government provides oversight and guidance of local AI development.
- Support productivity growth in the Australian culture and creative sector.
- Ensure that Australian creative works are protected by copyright and appropriate AI-specific measures.
- Safeguard the rights of Australian creators to ensure sustainable careers.
Thank you for reading my submission. I hope you will consider it carefully, and I wish you well in the next phase.
Warwick Holt

