Gartner projects that by 2026, 30% of enterprises will deem standalone identity verification and authentication solutions unreliable in the face of AI-generated deepfakes.
The growing threat of AI-generated deepfake attacks targeting face biometrics has become a pressing concern for enterprises worldwide, undermining confidence in traditional identity verification and authentication solutions. Gartner’s latest research projects that by 2026, 30% of enterprises will no longer consider such solutions reliable in isolation, owing to the proliferation of AI-generated deepfakes.
Akif Khan, VP Analyst at Gartner, highlighted how AI advancements over the past decade have enabled the creation of synthetic images, or deepfakes, that convincingly mimic real individuals’ faces. According to the research firm, these sophisticated deepfakes pose a grave threat to biometric authentication systems, casting doubt on their ability to distinguish genuine users from malicious actors.
Khan emphasized that current presentation attack detection (PAD) mechanisms are inadequate against AI-generated deepfakes, which are rapidly evolving beyond traditional detection methods. Gartner’s research also reveals a 200% increase in injection attacks, underscoring the urgency for enterprises to bolster their defenses against such threats.
To combat the rising tide of deepfake attacks, Gartner recommends that organizations adopt a multifaceted approach integrating injection attack detection (IAD) with comprehensive image inspection tools. Chief information security officers (CISOs) and risk management leaders are advised to work with vendors capable of addressing the evolving landscape of deepfake-based threats and going beyond conventional standards.
According to Gartner, establishing a minimum baseline of controls that combines IAD with image inspection technologies is needed to fortify identity verification processes. The report also highlights the importance of augmenting existing security measures with additional risk signals, such as device identification and behavioral analytics, to enhance threat detection capabilities.
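To make the layered approach concrete, here is a minimal sketch of how such risk signals might be combined into a verification decision. All signal names, weights, and thresholds below are illustrative assumptions, not Gartner guidance or any vendor’s actual API.

```python
from dataclasses import dataclass

@dataclass
class RiskSignals:
    """Signals gathered during one verification attempt (hypothetical)."""
    injection_detected: bool   # IAD flag: camera-feed tampering suspected
    pad_score: float           # presentation-attack likelihood, 0.0-1.0
    device_known: bool         # device identification: seen before?
    behavior_anomaly: float    # behavioral-analytics anomaly, 0.0-1.0

def assess_risk(signals: RiskSignals) -> str:
    """Combine layered signals into a coarse decision.

    A positive injection-attack detection is treated as a hard failure;
    the remaining signals contribute to a weighted score whose
    thresholds are purely illustrative.
    """
    if signals.injection_detected:
        return "deny"
    score = 0.5 * signals.pad_score + 0.3 * signals.behavior_anomaly
    if not signals.device_known:
        score += 0.2  # unfamiliar device raises overall risk
    if score >= 0.6:
        return "deny"
    if score >= 0.3:
        return "step-up"  # request an additional authentication factor
    return "allow"
```

The design point mirrors the report’s recommendation: no single signal (face match, PAD, or device check) decides the outcome on its own; each contributes to a composite assessment, and a detected injection attack short-circuits everything else.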
By adopting advanced technologies that authenticate genuine human presence and implementing robust preventive measures, enterprises can fortify their defenses and safeguard against the pervasive threat of deepfake-induced identity fraud and account takeover.