Deloitte, one of the world’s largest professional services firms, has agreed to issue a partial refund to the Australian government following widespread criticism over errors in a government-commissioned report that was partially drafted using generative artificial intelligence (AI). The incident has sparked national debate over the role of AI in official reporting, government accountability, and the standards expected of major consulting firms.
The report in question, commissioned by the Department of Employment and Workplace Relations (DEWR), was intended to evaluate the government’s controversial Targeted Compliance Framework—a system used to automate decisions related to welfare penalties. The consultancy was awarded a contract worth approximately A$440,000 to conduct the review, which was positioned as an “independent assurance” of the program’s legality, fairness, and performance.
Soon after the report’s public release, academics and legal experts raised concerns over the document’s accuracy. Specific errors included misattributed legal quotations, references to academic sources that could not be verified, and numerous incorrect or nonsensical footnotes. Further scrutiny revealed that a portion of the report had been generated using an AI tool, raising questions about the extent to which its content had been reviewed by human experts prior to submission.
While Deloitte initially defended the report’s conclusions, the firm later acknowledged the presence of multiple errors and issued a corrected version. In the revised report, all AI-generated references and quotations that could not be substantiated were removed or rewritten. A formal disclosure was added to acknowledge the use of generative AI tools in the drafting process—something that had not been clearly stated in the original release.
As part of its response, Deloitte has agreed to forgo the final payment due under its contract, effectively issuing a partial refund to the Australian government. The exact amount of the refund has not been publicly disclosed, but it is understood to represent the final installment of the consultancy’s fee.
The Department of Employment and Workplace Relations has stated that, despite the corrections, the report’s overall conclusions and recommendations remain unchanged. On that basis, the department accepted the revised version, judging the core findings still valid and useful for policymaking purposes.
Nevertheless, the incident has triggered a wave of criticism from politicians, academics, and the public. Several lawmakers have questioned whether Deloitte’s partial refund goes far enough, given the nature of the errors. Critics argue that the use of AI in such a sensitive context—without sufficient transparency or quality control—undermines the credibility of the entire review process.
Some have called for a full refund, describing the initial document as unfit for a university assignment, let alone a government review. Legal professionals have expressed particular concern over the inclusion of a fabricated quote attributed to a federal court judge—a detail that many see as especially damaging to the report’s credibility.
In Parliament, opposition parties and crossbench senators have demanded greater oversight of how consultancy firms use AI in government work. They argue that the government should implement stricter standards for how external contractors handle data, draft reports, and validate findings when using emerging technologies.
The broader implications of the Deloitte incident extend beyond a single contract. In recent years, the federal government has spent billions of dollars on consultancy services, relying on external firms to assist with everything from IT upgrades to policy evaluation. With AI increasingly integrated into the workflows of these firms, questions are emerging about accountability, transparency, and ethical use.
Experts warn that while generative AI tools can be powerful in assisting with large-scale document drafting, they are also prone to a phenomenon known as “hallucination,” where the AI fabricates information that sounds plausible but is not factually correct. Without rigorous human oversight, they say, such tools can introduce errors that are difficult to detect but potentially damaging when used in official reports.
In the corrected report, Deloitte disclosed that the drafting process involved a generative AI large language model (Azure OpenAI GPT-4o), though the firm has not detailed exactly how the tool was used. It stated that internal procedures have since been updated to improve quality control and reduce the risk of similar incidents occurring in future projects.
For its part, the Australian government has pledged to review its procurement guidelines to ensure that consultancies adhere to strict accuracy standards, particularly when using AI tools. Officials have hinted that future contracts may include clauses requiring full disclosure of AI use and mandatory human review before submission.
The episode comes at a time of increased scrutiny over the relationship between government agencies and major consulting firms. Earlier in the year, several inquiries were launched into government spending on external advisors, with critics arguing that the public service should be strengthened rather than increasingly outsourced.
As Deloitte works to repair its reputation, the firm is likely to face ongoing questions about the quality and oversight of its work, particularly in contexts where AI is involved. Meanwhile, other professional services firms may be watching closely, aware that how this case unfolds could shape public expectations and regulatory responses in the era of AI-assisted consulting.
The Deloitte refund may resolve the immediate contractual issue, but the broader conversation about how generative AI fits into public sector decision-making is only just beginning.