
How to learn from your mistakes – Deloitte doubles down on AI

“That had to hurt”: Deloitte issued a partial refund to the Australian federal government after its independent assurance review was found to contain multiple errors, including non-existent references and citations. Despite this, or perhaps because of it, Deloitte is doubling down on AI across its services. The firm isn’t backing away from AI after a stumble; it’s operationalising it.

The headline here isn’t a single refund – it’s risk normalisation. Procurement and legal are elevating AI to a first-order commercial term, right alongside data privacy and cybersecurity. From what I’m seeing, that means explicit “AI-use” clauses in MSAs and SOWs: disclosure of when and where AI is used, client approvals before deployment, audit logs on demand, and remedies if outputs miss agreed standards.

That puts real pressure on service providers to harden their AI supply chains. Verification of model provenance, tighter prompt and data-access controls, and documented human-in-the-loop checkpoints will move from “good practice” to contractual obligations. The goal is simple: prevent reputational hits and revenue clawbacks before they happen – and if they do happen, have the audit trail to respond fast.
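To make the checkpoint idea concrete, here is a minimal sketch of a human-in-the-loop sign-off writing to an append-only audit log; the model ID, reviewer workflow, and log format are illustrative assumptions, not any firm’s actual controls.

```python
import hashlib
import json
import time
from pathlib import Path

AUDIT_LOG = Path("ai_audit_log.jsonl")  # append-only JSONL audit trail

def log_checkpoint(model_id: str, prompt: str, output: str,
                   reviewer: str, approved: bool) -> dict:
    """Record one human-in-the-loop checkpoint: who reviewed which
    model output, for which prompt, and whether it was approved."""
    record = {
        "ts": time.time(),
        "model_id": model_id,  # model provenance: exact model and version
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output_sha256": hashlib.sha256(output.encode()).hexdigest(),
        "reviewer": reviewer,
        "approved": approved,
    }
    with AUDIT_LOG.open("a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Hypothetical usage: a named human signs off before anything ships to a client.
draft = "Generated section of the assurance report..."  # stand-in model output
log_checkpoint("example-model-v1", "Summarise findings...", draft,
               reviewer="j.smith", approved=True)
```

Hashing the prompt and output (rather than storing them verbatim) keeps the log auditable without leaking client data into yet another store.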

Deloitte’s stance also tracks a broader realignment. The Big Four and global SIs have poured capital and talent into AI for two years—Accenture committed $3B in 2023; PwC put in $1B the same year—and the model is shifting from pilots to products. Expect more packaged “AI accelerators,” sector-specific copilots, and managed AI platforms that promise speed with controls built in. Pricing will keep evolving from time-and-materials to outcome-based and per-seat models tied to measurable productivity, code velocity, or case throughput.

For developers and enterprise AI teams, the message is maturity over magic. Build evaluation harnesses that measure accuracy and bias per use case. Ground generative systems in enterprise data with strong retrieval strategies. Add content provenance and watermarking where feasible, and red-team for prompt injection and data leakage. Vendors that expose audit-friendly logs, policy controls, and usage analytics will rise to the top over those selling raw processing power alone.
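As a sketch of what a per-use-case evaluation harness might look like, the snippet below scores a model callable against labelled cases and reports accuracy by use case; `call_model` and the sample cases are hypothetical stand-ins, not a real benchmark.

```python
from collections import defaultdict
from typing import Callable

def evaluate_by_use_case(call_model: Callable[[str], str],
                         cases: list[dict]) -> dict[str, float]:
    """Run labelled test cases through the model and report
    exact-match accuracy broken down per use case."""
    hits: dict[str, int] = defaultdict(int)
    totals: dict[str, int] = defaultdict(int)
    for case in cases:
        totals[case["use_case"]] += 1
        if call_model(case["prompt"]).strip() == case["expected"]:
            hits[case["use_case"]] += 1
    return {uc: hits[uc] / totals[uc] for uc in totals}

# Illustrative cases only; a real harness would use domain-specific
# rubrics or graded scoring, not exact string match, and far larger suites.
cases = [
    {"use_case": "citation_check", "prompt": "Does source X exist? ...", "expected": "no"},
    {"use_case": "summarise", "prompt": "Summarise: ...", "expected": "..."},
]

def call_model(prompt: str) -> str:  # hypothetical stand-in for a model API
    return "no"

print(evaluate_by_use_case(call_model, cases))
```

Breaking accuracy out by use case is the point: an aggregate score can hide exactly the kind of citation failure that triggered the refund.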

The human impact is getting more concrete – and more practical. Prompt engineering is morphing into product and data roles; “AI controls” specialists are joining compliance; frontline staff are being upskilled to supervise copilots rather than be replaced by them. We’ll see tighter contractual frameworks, clearer client choice (including “no-AI” delivery options at a premium), and rising demand for third-party attestations of AI quality and safety. Responsible scale – not raw speed – will decide who wins the market’s trust and wallet share.

If you lead a services business, treat this as an implementation checklist. Update MSAs and SOWs with AI-use disclosures, client approval gates, and audit-logging obligations. Stand up governance that details model provenance, documents human review, and tracks outcomes tied to pricing. Budget as much for controls and observability as you do for models; governance is cheaper than refunds.
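One way to picture that governance line item is an engagement-level record tying provenance, client approval, and human review together in a single auditable artifact. The field names and values below are assumptions for illustration, not a standard schema.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIUsageRecord:
    """Engagement-level governance record: what model ran where,
    who approved it, and whether a human reviewed the output."""
    engagement_id: str
    model_id: str          # provenance: exact model and version
    purpose: str           # where in the SOW AI was used
    client_approved: bool  # approval gate per the MSA
    human_reviewed: bool   # documented human-in-the-loop check
    outcome_metric: str    # what "good" means, tied to pricing
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

# Hypothetical record for a single deliverable.
record = AIUsageRecord(
    engagement_id="ENG-2025-0042",
    model_id="example-model-v1",
    purpose="first-draft literature review",
    client_approved=True,
    human_reviewed=True,
    outcome_metric="citation accuracy >= 99%",
)
print(json.dumps(asdict(record), indent=2))
```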

Are we already on the road to a market where “human-only” delivery becomes a premium luxury, or will clients insist on AI in the loop – so long as it’s auditable? If Deloitte can stumble publicly and still lean in, perhaps the real competitive edge isn’t AI itself, but how transparently and contractually you use it.
