22 June
Exploring the Risks of AI and Social Engineering at the Security Middle East Conference
At the recent Security Middle East Conference, the spotlight was on the implications of Artificial Intelligence (AI) in the realm of social engineering. The event featured a lively podcast session led by Alex Bomberg, Chairman of Intelligent (UK Holdings) Limited and Founder of International Intelligence Limited. He was joined by the engaging Daniel Norman, EMEA Regional Manager for the Information Security Forum, who served as the podcast host and Conference MC.
The discussion revolved around a pressing issue: the increasing risks that AI poses by enhancing the tactics employed in social engineering. With AI at the helm, malicious actors are now able to create more sophisticated scams that exploit human vulnerabilities more effectively than ever before.
Understanding Social Engineering in the Age of Technology
Daniel opened the conversation by highlighting the growing number of AI-driven attacks, emphasizing how the technology is being harnessed to sharpen social engineering tactics. Alex pointed out that social engineering is not a novel phenomenon, despite common assumptions. “Its roots date back long before the 21st century,” he explained, underscoring that the essence of social engineering lies in manipulating human behavior to achieve specific ends. It is prevalent in everything from marketing practices to political strategies.
So, what triggered this shift toward more technologically advanced social engineering? According to the experts, the transformation became noticeable with the introduction of telephone banking and the new forms of fraud that accompanied it. Alex explained that technology has undeniably simplified the execution of fraud, giving rise to more sophisticated methods of deceit.
The Role of Social Media in Amplifying Vulnerabilities
A significant factor that has driven the evolution of social engineering tactics is the data we freely share on social media platforms. Alex noted, “With the rise of social media, we’re generating vast amounts of data—thousands of data points each month—through our everyday activities.” The ease with which individuals can now be targeted is alarming. He illustrated this with a pointed remark about TikTok, arguing that while the platform isn’t to blame for incidents of social engineering, it contributes to a culture where users readily accept information as truth. “This is the new reality,” he affirmed.
A Real-World Example of Deepfake Technology
Alex painted a vivid picture of a recent incident involving a fraudulent Zoom call, in which an individual was lured by fake executives presenting a lucrative deal. Deepfake technology convincingly mimicked the executives’ appearance and mannerisms, leading the victim to transfer an astounding $26 million to an illicit account, believing throughout that he was acting legitimately.
Education as a Key Defense
When asked about potential countermeasures against such tactics, Alex emphasized the importance of education within organizations. He pointed out that maintaining a culture of awareness begins at the leadership level and cascades down throughout the staff.
The Impact of COVID-19 on Fraudulent Tactics
The conversation also touched on how the COVID-19 pandemic has reshaped the landscape for social engineering. Pre-pandemic, it was not uncommon for colleagues to resolve queries face-to-face, fostering a sense of trust. However, with the shift to remote work, that comforting dynamic has been disrupted, making it easier for malicious actors to exploit these changes.
Future Risks: Geopolitical Deepfakes
Looking ahead, Alex raised concerns about the potential ramifications of geopolitical deepfakes. “What happens if political leaders make false statements that appear valid? We could be facing significant threats in our futures,” he cautioned.
For those interested in further exploring this vital topic, the full discussion is available for viewing online.