
APRA demands ‘step change’ in financial sector AI controls


The prudential regulator has warned that AI risk management is lagging behind fast‑evolving adoption, flagging cyber and governance vulnerabilities.

The Australian Prudential Regulation Authority (APRA) has warned banks that their safeguards around artificial intelligence have fallen behind a rapid rollout of new tools – urging a decisive lift in how AI‑driven risks are governed and controlled.

APRA’s intervention follows a supervisory review it ran late last year across the main segments of the financial sector, examining where firms were using AI and how those systems are overseen.

The review found that algorithms are no longer confined to pilots, with many institutions now weaving AI into core operations and customer interactions.


APRA concluded that key risk disciplines – including governance, operational risk, and cyber security – had not evolved at the same speed.

It highlighted that AI capabilities were often hidden inside large software suites, meaning some organisations struggled to see exactly which models they relied on, how those models were trained or updated, and what that implied for risk.

APRA member Therese McCarthy Hockey said financial institutions needed to continually recalibrate their AI governance methods.

“The AI revolution presents tremendous opportunities for banks, insurers and superannuation trustees to deliver improved efficiency and enhanced customer services,” she said.

“But we cannot be blind to the risks of such powerful technology – whether in our own hands or the hands of those with malign intent.”

Frontier models raise the stakes on cyber

The letter also named frontier‑grade models as a specific source of prudential concern.

APRA warned that tools such as Anthropic’s Claude could be used by malicious actors to probe systems more effectively.

McCarthy Hockey said the tempo of defensive work needed to accelerate.

“What we’ve observed from our supervisory engagement is that while AI adoption is continuing apace, the systems and processes required to safely govern its use aren’t keeping up,” she said.

“Likewise, the speed at which entities can identify and patch vulnerabilities needs to operate much faster, commensurate with the AI‑accelerated threat.”

Boardroom blind spots

The review focused heavily on how AI was handled in the boardroom, with the regulator finding that many boards did not yet have enough technical understanding to probe management on AI‑related decisions.

The letter also highlighted a build‑up of concentration risk where organisations had leant heavily on a small number of AI providers for multiple use cases.

APRA said boards needed to develop enough AI literacy to set a clear strategic direction and to offer a genuine challenge.

They are also expected to oversee a coherent AI strategy that fits within the organisation’s risk appetite, supported by reporting on how AI systems are performing.

No bespoke AI standard yet but expectations tighten

For now, APRA is not writing a stand‑alone AI prudential standard, but it made explicit that existing requirements apply to AI‑enabled processes.

It reiterated that companies could not treat AI as a special case that sits outside the current regulations.

McCarthy Hockey said the package of findings in the letter was intended to clarify exactly how the regulator expected institutions to lift their game.

“The findings outlined in today’s letter emphasise our expectations for how entities should be managing these risks in alignment with our prudential standards in areas such as information security, operational risk management, governance and data risk,” she said.

“While we are not proposing to introduce additional requirements at this stage, we expect to see a significant improvement in how entities are closing the gaps between the power of the technology they are using and their ability to monitor and control it.”

Supervision roadmap and international context

APRA said it was finalising a multi‑year plan for how it would supervise AI‑related risks.

That roadmap is expected to combine targeted reviews at individual entities with cross‑industry work and more active engagement with AI suppliers.

The regulator also plans to increase its own use of AI‑based analytics to spot emerging prudential issues.

APRA said it was working with agencies and peer regulators offshore as authorities worldwide grapple with how models intersect with financial stability.

[Related: Brokers think AI will benefit their business but lack strategy]
