
Have you reviewed your fraud controls to counter AI?

8 August 2024

Deepfake technology driven by Artificial Intelligence (AI) poses a growing and significant threat to fraud risk prevention measures, delivering highly convincing digital forgeries of an individual’s image, voice or video.

As the sophistication and accessibility of AI models increase, forgeries are becoming ever more commonplace and difficult to detect. The proliferation of organisational and personal social media content also provides a wealth of source material on which AI models can be trained to imitate individuals convincingly. There have already been several high-profile reports of frauds leveraging AI-based technology to produce convincing live audio and video, and it has recently been projected that AI deepfakes could cost the financial services sector a staggering $40bn by 2027 in the United States alone.

In light of this growing trend, Griffiths & Armour recommend that organisations review their fraud risk assessment and management practices in key areas such as financial payments and transfers; account creation; account login; and account changes, particularly changes to banking and/or contact details. In all higher-risk areas, robust methods of validation via independent means should be in place, and dual sign-off can also be an effective risk management control. Where current authentication methods include biometric elements, these should be reviewed to ensure that potentially fake AI-delivered content can be flagged. Fraud risk controls should ideally cover both external and internal sources; for example, a request to release funds should be validated in some way irrespective of whether it appears to originate from a customer or a colleague.
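
By way of illustration only, the minimal Python sketch below shows how dual sign-off and independent, out-of-band validation might be enforced in a payment-release workflow. The names and threshold used (PaymentRequest, may_release, the 10,000 limit and so on) are hypothetical assumptions for this example, not a prescribed implementation:

    from dataclasses import dataclass, field

    # Hypothetical model of a payment request subject to fraud controls.
    @dataclass
    class PaymentRequest:
        amount: float
        payee: str
        requested_by: str
        approvals: set = field(default_factory=set)
        verified_out_of_band: bool = False  # e.g. payee details confirmed via a known phone number

    HIGH_RISK_THRESHOLD = 10_000  # illustrative limit above which extra controls apply

    def approve(request: PaymentRequest, approver: str) -> None:
        # An approver must be independent of the person who raised the request,
        # whether that person is a customer or a colleague.
        if approver == request.requested_by:
            raise ValueError("Requester cannot approve their own payment")
        request.approvals.add(approver)

    def may_release(request: PaymentRequest) -> bool:
        # Lower-value payments: one independent approval suffices in this sketch.
        if request.amount < HIGH_RISK_THRESHOLD:
            return len(request.approvals) >= 1
        # Higher-risk payments: dual sign-off AND independent out-of-band validation.
        return len(request.approvals) >= 2 and request.verified_out_of_band

    req = PaymentRequest(amount=25_000, payee="Acme Ltd", requested_by="alice")
    approve(req, "bob")
    approve(req, "carol")
    req.verified_out_of_band = True   # confirmed independently of the original request channel
    print(may_release(req))           # True only once all controls are satisfied

The key design point is that release is blocked by default: a high-risk payment proceeds only when two independent approvers and a validation step outside the original communication channel have all been satisfied.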

Training for staff in key positions can also assist. This can include a refresher on fraud risk control procedures and guidance on how to identify potentially AI-generated content. In summary, areas to look out for when attempting to identify fake video and audio content include:

  • Unusual eye movements or blinking
  • Unnatural head/body movement and facial expressions, in particular relative to what is being said
  • Abnormal skin colours, lighting and hair
  • Facial blurring and pixelation
  • Boxes or cropped effects around the mouth, eyes and neck
  • Poor lip syncing
  • Choppy or unusual verbal timing
  • Unusual phrasing
  • Use of out-of-character language
  • Varying tone or inflection in speech
  • Poorly rendered hands and gestures

Further fraud prevention guidance, supplemented by a risk assessment format, is available via RMworks, which is accessible to all Griffiths & Armour clients. More information on RMworks is available here.

For further information and support, please get in touch.

Author

Greg Street

Risk Management Managing Director
