UAE and Saudi Arabia Advance AI-Driven Fraud Prevention Amid Rising Threats
As fraudsters increasingly leverage artificial intelligence (AI) to enhance their schemes, organizations are finding it challenging to keep up. This reality is underscored by the latest findings from the Association of Certified Fraud Examiners (ACFE) and SAS, a leader in data and AI. Alarmingly, only 7% of anti-fraud professionals believe their organizations are more than moderately prepared to detect or prevent AI-driven fraud. This report highlights a critical gap as criminals exploit readily available AI tools for social engineering, digital forgery, and consumer scams, pushing these threats to unprecedented levels.
The 2026 Anti-Fraud Technology Benchmarking Report
The 2026 Anti-Fraud Technology Benchmarking Report, the fourth in a series initiated by ACFE and SAS in 2019, surveyed 713 fraud fighters across eight global regions. The data reveals a troubling trend: fraud is evolving at a pace that outstrips the defenses of many organizations. John Gill, President of the ACFE, emphasized the urgency of the situation, stating, “AI-powered threats aren’t on the horizon – they’re already here, and they’re accelerating quickly.” He cautioned that organizations failing to bolster their defenses against AI-driven fraud risk becoming prime targets.
The Strategic Position of the UAE and Saudi Arabia
In the face of rising global fraud risks, the UAE and Saudi Arabia are uniquely positioned to spearhead the next generation of fraud prevention. Their strong regulatory frameworks, government-led digital transformation initiatives, and modern financial infrastructures provide a structural advantage. Central banks, such as the Central Bank of the UAE (CBUAE) and the Saudi Central Bank (SAMA), are acting as ecosystem orchestrators, allowing these markets to bypass outdated methods and support a secure, seamless, and future-ready financial ecosystem.
Abed Hamandi, Senior Director of EMEA Consulting at SAS, noted, “Few regions combine high growth with such strong regulatory leadership. The UAE and Saudi Arabia are not constrained by legacy in the same way as many mature markets.” By adopting real-time, AI-driven, and identity-centric fraud prevention strategies, these nations can preemptively address fraud while providing the low-friction customer experiences demanded by modern digital economies.
Industry Insights: A Crossroads for Sectors
The survey respondents represent a diverse array of industries, with the government and public sector (26%) and banking and financial services (23%) being the most prominent. Other sectors include professional services, manufacturing, insurance, technology, education, energy, and healthcare. Key insights from the survey reveal several critical trends:
- Fraudsters Gaining Ground: Every AI-powered fraud modality examined has seen an increase over the past two years. Notably, deepfake social engineering has surged, with 77% of respondents reporting a slight-to-significant rise. Other areas of concern include consumer fraud/scams (75%), generative AI document fraud/forgery (75%), and deepfake digital injection (72%). Looking ahead, 55% of respondents anticipate significant increases in deepfake social engineering and generative AI document fraud/forgery over the next 24 months.
- AI and Machine Learning Adoption: While the adoption of AI and machine learning (ML) in anti-fraud programs is accelerating, it remains insufficient. Currently, 25% of organizations utilize AI/ML in their anti-fraud initiatives, up from 18% in 2024. Another 28% plan to adopt these technologies by 2028, highlighting a narrowing window for organizations to build AI competencies before competitors and criminals widen the gap.
- Governance Challenges: Governance structures are lagging behind AI adoption. Nearly 90% of organizations consider the accuracy of results crucial when adopting generative AI, yet only 18% test AI models for bias or fairness. Furthermore, while 82% emphasize the importance of explainability, only 6% feel confident in explaining how their AI/ML models make anti-fraud decisions. This gap poses significant risks, particularly for banks and insurers, who may face regulatory consequences and reputational damage.
- Budgetary Constraints: More than half of the respondents (55%) expect their organizations to increase anti-fraud technology budgets over the next two years. However, budgetary and financial constraints remain the primary barrier to implementation, cited as a major or moderate challenge by 84% of respondents.
The Role of Emerging Technologies
Emerging technologies, including physical biometrics, agentic AI, and even quantum AI, are rapidly maturing and transforming the landscape of fraud prevention. However, fraudsters are equally poised to exploit these advancements, and their speed of adoption often hands the advantage to malicious actors.
Stu Bradley, Senior Vice President of Risk, Fraud and Compliance Solutions at SAS, remarked, “Cybercriminals don’t have governance committees, and they don’t wait for budget cycles or regulatory clarity – they just act.” He emphasized that every quarter spent evaluating technology is another quarter that criminals can weaponize it against unprepared organizations.
The study highlights several trends regarding emerging technologies:
- Generative AI: While only 16% of respondents report using generative AI as an anti-fraud tool, 58% plan to adopt it in the future. Among current users, the top applications include phishing and scam detection (49%), risk identification/assessment (46%), and report writing (45%).
- AI Agents: Nearly 10% of respondents currently use agentic AI for fraud prevention, with an additional 31% planning to deploy it by 2028, marking the highest near-term adoption expectation among emerging technologies.
- Physical Biometrics: This technology has become the most widely adopted emerging technology in anti-fraud programs, utilized by nearly half of the organizations surveyed, up from one-third in 2022. In contrast, cloud-native fraud detection platforms and automation remain underutilized, with adoption rates of only 10% and 29%, respectively.
- Quantum Computing: A significant 62% of respondents expect quantum computing and quantum AI to materially impact fraud detection and prevention by 2030, with 11% asserting that it already does.
Conclusion
Organizations across sectors face the same AI-accelerated fraud threats; what differentiates them is how effectively they can combat those risks. Fraud fighters must be equipped with the right data, technology, speed, scale, and governance to address modern threats. For further insights into the prominent AI-accelerated fraud modalities and strategies to counter them, see the full 2026 Anti-Fraud Technology Benchmarking Report from the ACFE and SAS.
For a comprehensive exploration of the survey data, an interactive dashboard is also available, with filters for region, industry, and respondent profile.
Source: www.tahawultech.com