Phishing has been a persistent security threat for years, but with the advent of generative artificial intelligence (AI) chatbots, the danger has multiplied. Threat actors now use AI tools to launch sophisticated, multichannel attacks on employees, particularly business email compromise (BEC) attacks that rely on social engineering techniques.
AI-powered language models like OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s Bing Chat have significantly accelerated access to information and content generation. Leveraging large datasets, these tools can quickly create new forms of content, enabling threat actors to automate tailored phishing messages and refine numerous versions to maximize success rates.
Generative AI in BEC attacks goes beyond traditional phishing by modifying malware code to bypass detection engines. AI-powered bots can draft thousands of scam email iterations, making the social engineering of unsuspecting victims far more effective. They can also generate malicious voice and text messages, phony web links, attachments, and misleading video files, giving attackers a versatile toolkit for stealing login credentials, financial information, and proprietary data.
BEC attacks are particularly insidious as they often target individuals using personal information across multiple messaging channels, making them highly convincing. A survey conducted at the RSA conference in April 2023 revealed that 77% of security professionals had experienced phishing attacks, with 47% of those being BEC attacks.
These phishing operations exploit human emotions, creating a sense of urgency when individuals are vulnerable or distracted. Once an employee falls victim to social engineering and divulges credentials, attackers gain a foothold to move laterally across an organization, seeking valuable assets.
Traditional security measures are inadequate against the rising tide of cybercrime driven by generative AI. Keeping up requires smart automation and AI-based security systems. AI cybersecurity uses data augmentation and cloning techniques to analyze incoming threats, reproducing thousands of clones of a core threat to anticipate its variations. AI systems, trained through machine learning, automate defenses in real time to block social engineering attempts and prepare for future attacks.
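The cloning idea above can be sketched in a few lines: expand one seed phishing template into many variants, then confirm that a simple detector still catches every variant. This is a minimal illustration only; the template, keyword list, and function names are hypothetical assumptions, not any vendor's actual implementation.

```python
# Illustrative sketch of threat cloning via data augmentation:
# expand one seed phishing message into many paraphrased variants,
# then test a toy keyword detector against all of them.
from itertools import product

# Hypothetical seed template and substitution table (assumptions).
SEED = "Your {account} has been {status}. {action} immediately to avoid suspension."

SUBSTITUTIONS = {
    "account": ["account", "mailbox", "payroll profile"],
    "status": ["locked", "compromised", "flagged"],
    "action": ["Verify your password", "Click the link below", "Confirm your identity"],
}

def clone_variants(seed: str, subs: dict) -> list[str]:
    """Generate every combination of substitutions for the seed template."""
    keys = list(subs)
    return [seed.format(**dict(zip(keys, combo)))
            for combo in product(*(subs[k] for k in keys))]

# Toy detector: flag a message containing two or more urgency-style cues.
URGENCY_WORDS = {"immediately", "verify", "suspension", "confirm"}

def looks_like_phish(message: str) -> bool:
    words = {w.strip(".,").lower() for w in message.split()}
    return len(words & URGENCY_WORDS) >= 2

variants = clone_variants(SEED, SUBSTITUTIONS)
print(len(variants))                                # 3 * 3 * 3 = 27 variants
print(all(looks_like_phish(v) for v in variants))   # True
```

A production system would generate variants with a language model rather than a substitution table, but the principle is the same: defenses are evaluated against the whole cloud of anticipated variations, not just the one sample that was observed.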
Computer vision plays a crucial role in AI cybersecurity, identifying visual patterns and clues to determine the authenticity of files and web pages. By recognizing exact layouts and color schemes of branded login pages, computer vision can block imposter pages that do not meet precise specifications. Natural language processing is another valuable tool, providing context to identify phrasing, accents, and verbal elements. It can review prior messages received by users to detect potential BEC attacks.
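A minimal sketch of the prior-message comparison described above: score a new message's vocabulary overlap against a sender's history, and raise a BEC flag when low similarity coincides with payment-related language. The sample messages, cue words, similarity metric, and threshold are all illustrative assumptions; real systems use far richer language models than word-set overlap.

```python
# Illustrative BEC check: compare a new message against a sender's prior
# messages and flag it when it diverges from their usual style AND pushes
# payment-related language. History, cues, and threshold are assumptions.

PRIOR_MESSAGES = [
    "Hi team, attaching the Q2 report for review before Friday's sync.",
    "Thanks for the update. Let's discuss the vendor contract next week.",
]

PAYMENT_CUES = {"wire", "transfer", "gift", "urgent", "invoice"}

def style_similarity(message: str, history: list[str]) -> float:
    """Jaccard overlap between the message's words and all prior words."""
    msg = set(message.lower().split())
    seen = set(" ".join(history).lower().split())
    return len(msg & seen) / len(msg | seen)

def flag_bec(message: str, history: list[str], threshold: float = 0.3) -> bool:
    """Flag only when the message is both stylistically unusual and pushy."""
    unusual = style_similarity(message, history) < threshold
    pushy = bool(set(message.lower().split()) & PAYMENT_CUES)
    return unusual and pushy

print(flag_bec("URGENT: wire $40,000 to this new account before 5pm today.",
               PRIOR_MESSAGES))                                        # True
print(flag_bec("Attaching the Q2 report for review before the sync.",
               PRIOR_MESSAGES))                                        # False
```

Requiring both signals, rather than either alone, mirrors how such detectors keep false positives down: a legitimate finance email mentions payments but matches the sender's history, while an odd-but-harmless message lacks the payment cues.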
To adopt generative AI for BEC protection effectively, organizations must address people, processes, and policies affected by the transition. It’s crucial to assess cybersecurity readiness, given the expanding threat surface through SaaS apps, cloud storage, and third-party collaboration tools. Deploying security orchestration, automation, and response (SOAR) solutions can streamline data analysis from various security platforms and optimize security team efficiency.
Creating a culture of security for AI is essential to overcome implementation challenges and to harness the benefits of AI while mitigating its dangers. As generative AI continues to evolve, human researchers alone can no longer defend against rapidly mutating threats. The most effective strategy is to use AI and automation to respond to cyber threats more rapidly and accurately, freeing security teams to focus on the strategic aspects of security operations.