Criminals are using AI-generated deepfake voices and videos in scams that are ‘harder to detect’, a major bank has warned.
Westpac Banking Corporation (Westpac) is warning Australians to prepare for a new wave of personalised scams harnessing artificial intelligence (AI) in the year ahead, with criminals increasingly exploiting new technology to target households and businesses at scale.
According to the bank, a new wave of AI-powered impersonation scams is unfolding and is expected to increase in prevalence this year.
Building on the ‘Hi Mum’ scams (where scammers impersonate a child and text their parents claiming to need help and asking them to transfer money), the AI versions use cloned voices, faces, and video calls created from just seconds of publicly available content (such as social media posts).
Cloned voices and faces are also being used for business email compromise or invoice scams.
Other scams that Westpac expects to rise as AI is more broadly adopted include:
- Rental bond scams: With housing pressures intensifying, scammers are creating fake listings and demanding upfront bond payments or harvesting identity documents from would-be renters.
- Registered fake businesses: Criminals are increasingly registering companies that appear legitimate but actually front sophisticated investment and payment scams.
The major bank said it also expects investment scams (such as those involving fake term deposits, bonds, precious metals, and cryptocurrency) to increase as criminals take advantage of economic uncertainty, along with scams that pose as charity or crisis organisations seeking urgent donations, among others.
As such, Westpac is urging Australians to stay alert, with its head of fraud prevention, Ben Young, stating that scammers are deliberately making scams harder to spot and easier to spread.
“Scammers are weaponising AI in their attempts to steal from hardworking Australians. They’ve moved beyond generic phishing into highly targeted scams that feel more personal and are harder to detect,” Young said.
“From deepfake voices and videos to registering fake companies with ASIC, scammers are evolving quickly in their efforts to rip Australians off.
“Social media has become one of the most powerful distribution channels for scams. These scams don’t arrive as suspicious messages anymore, they are showing up as ads, posts and pages.”
Young said while banks continue to invest heavily in fraud and scam detection, social media companies must take greater responsibility for stopping scams before they reach Australians.
“These are organised criminal operations, not isolated incidents. Platforms that profit from advertising need to do more to prevent scam content from spreading in the first place,” Young said.
He suggested that consumers take time to independently verify any requests for money or personal information.
“If something or someone creates urgency or pressure, that is a red flag and should be your cue to stop. It’s really important to stop, check and verify through a trusted channel before you take any action,” Young said.
The Westpac warning comes hot on the heels of a similar warning from the Australian Federal Police (AFP) and the Commonwealth Bank of Australia (CBA), which revealed that scammers are refining their tactics when impersonating banks, including copying bank hold music and co-ordinating calls to bypass security checks.
As reported by Broker Daily, consumers are increasingly being tricked into believing that a scammer is their actual banking institution because scammers now often know personal information, such as dates of birth, account details, and bank balances, acquired through previous cyber attacks.
These bank impersonation scams aim to frighten victims by claiming there are pending unauthorised payments that can only be reversed or cancelled once banking details or codes are shared, or by telling customers that their bank account is locked and can only be unlocked if they act immediately.
Lenders have also been increasingly concerned about how AI is being used to conduct fraud, following recent cases allegedly involving forged income documents and AI‑driven scams in mortgage lending.
Banks are providing the following tips to consumers to protect themselves from scams:
- Don’t pay under pressure – stop and reassess.
- Independently verify requests for money or information – including from businesses – by checking they are legitimate.
- Be wary of impersonations of friends and family. If a call, message, or video appears to be from a loved one, pause and verify through another channel that it is them. This could include establishing a family code word that can help confirm someone’s identity.
- Be wary of offers that seem too good to be true (such as high returns, low risk, or ‘guaranteed’ outcomes).
- Never accept a role or a request that asks you to move or receive money on someone else’s behalf. Never share personal or financial information.
- Act quickly if something feels wrong, and if you think you’ve been scammed, contact your bank immediately.
[Related: Open banking pitched as antidote to AI-fuelled loan fraud]