Understanding AI Tool Usage and Cybersecurity Awareness in the Middle East
Kaspersky’s recent survey, “Cybersecurity in the Workplace: Employee Knowledge and Behavior,” highlights how widely professionals in the Middle East now use Artificial Intelligence (AI) tools: 86% of those surveyed actively employ them in their work routines. However, only 44.5% of respondents have received training on the cybersecurity aspects of AI use, such as guarding against data leaks and prompt injection.
Familiarity with Generative AI
The survey shows that 92% of participants understand what “generative artificial intelligence” entails, and for many this knowledge has moved from theory into everyday practice. The most common uses of AI among these professionals are writing and editing text (65%), composing emails (56%), conducting data analytics (52%), and creating images or videos with neural networks (49%). This level of integration into daily workflows underscores a growing reliance on AI technology across sectors.
Education Gaps in AI Training
Despite this widespread use, the survey identifies a significant gap in preparation for the risks AI brings. Notably, 23% of professionals reported receiving no AI-related training at all. Where training was provided, it most often covered effective use of AI tools and prompt crafting (57%), while only 44% of respondents received instruction on the cybersecurity implications of AI usage. This disparity raises concerns about how well equipped employees are to navigate the dangers these technologies can pose.
The Challenge of Shadow IT
As organizations adopt AI tools to streamline operations, many employees slip into “shadow IT,” using these tools without explicit corporate approval or oversight. The survey indicates that 75% of respondents said generative AI tools are permitted at their workplaces, 19% said they are not allowed, and 6% were uncertain of their status. Where guidelines are unclear, usage becomes inconsistent and exposure to security threats grows.
Implementing a Structured AI Policy
To mitigate the risks associated with AI usage, businesses need to establish comprehensive policies. Such policies should clearly outline acceptable use cases, prohibit AI applications in specific functions or with sensitive data, and specify which tools employees are allowed to access. A well-documented policy combined with robust training programs can significantly enhance overall security posture. Organizations should monitor AI usage to identify popular tools and services, leveraging this information to refine their security measures and policies.
Rashed Al Momani, General Manager for the Middle East at Kaspersky, emphasizes that a rigid ban or complete freedom regarding AI tools tends to be ineffective. Instead, a balanced policy that offers varying levels of access based on the sensitivity of departmental data is more advantageous. When paired with appropriate training, this strategy promotes flexibility while maintaining security standards.
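To illustrate, the sketch below shows one hypothetical way such a tiered policy could be encoded for automated checks. The tool names, departments, and rule fields are invented for this example and are not drawn from the survey or from Kaspersky guidance.

```python
# Illustrative only: a hypothetical machine-readable AI usage policy.
# Tool names, departments, and rules are invented examples.
AI_USAGE_POLICY = {
    "approved_tools": {"internal-llm", "vendor-chatbot-enterprise"},
    "rules_by_department": {
        # Departments handling sensitive data get tighter restrictions.
        "hr":          {"allowed": False, "reason": "handles personal data"},
        "marketing":   {"allowed": True,  "review_outputs": True},
        "engineering": {"allowed": True,  "log_prompts": True},
    },
}

def is_use_permitted(department: str, tool: str) -> bool:
    """Check a request against the policy before granting access."""
    rule = AI_USAGE_POLICY["rules_by_department"].get(department)
    return bool(rule and rule["allowed"]
                and tool in AI_USAGE_POLICY["approved_tools"])

# Example: engineering may use the approved internal tool, HR may not.
assert is_use_permitted("engineering", "internal-llm")
assert not is_use_permitted("hr", "internal-llm")
```

Encoding the policy as data rather than prose makes it enforceable by gateways and auditable alongside other security configuration, which is what allows access levels to vary by department sensitivity in practice.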
Recommended Actions for Organizations
Kaspersky advocates several key actions for companies to enhance their AI security:
1. Employee Training on Responsible AI Use
Organizations should prioritize training employees on the safe and responsible use of AI. Incorporating specialized courses on AI security from platforms like the Kaspersky Automated Security Awareness Platform can enhance workforce awareness.
2. Empower IT Teams with Specialized Knowledge
IT specialists play a crucial role in safeguarding corporate environments. Providing them with training on exploitation techniques and practical defensive strategies, such as the ‘Large Language Models Security’ training offered by Kaspersky, can strengthen an organization’s cybersecurity framework.
3. Install Cybersecurity Solutions
Ensuring that all employees have robust cybersecurity solutions on their work and personal devices used for accessing business data is essential. Kaspersky Next products, for example, protect against threats such as phishing attacks and deceptive AI applications designed to exploit vulnerabilities.
4. Conduct Regular Usage Surveys
Regular assessments of how often AI tools are used, and for which specific tasks, help organizations weigh the risks and benefits of these tools and make any needed policy adjustments.
5. Implement AI Proxies for Data Security
Using a specialized AI proxy can help manage data queries effectively. These proxies can remove sensitive information like names and customer IDs and employ role-based access control to prevent inappropriate usage.
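As a rough illustration, the Python sketch below shows the shape of such a proxy: it enforces a role-based check, scrubs a couple of naively pattern-matched identifiers, and only then forwards the prompt. The role table, redaction patterns, and forward_to_llm stand-in are all assumptions made for this example; a production proxy would rely on dedicated PII-detection tooling and a real AI service API.

```python
import re

# Hypothetical role -> permitted-use-case map, derived from the AI policy.
ROLE_PERMISSIONS = {
    "support":   {"text_drafting"},
    "analytics": {"data_summary"},
}

# Naive redaction patterns for illustration; real proxies use dedicated
# PII-detection services rather than hand-written regexes.
REDACTION_PATTERNS = [
    (re.compile(r"\bCUST-\d{6}\b"), "[CUSTOMER_ID]"),
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),
]

def redact(prompt: str) -> str:
    """Strip sensitive tokens before the prompt leaves the network."""
    for pattern, placeholder in REDACTION_PATTERNS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

def forward_to_llm(prompt: str) -> str:
    # Stand-in for the call to the organization's approved AI service.
    return f"(model response to: {prompt})"

def proxy_request(role: str, use_case: str, prompt: str) -> str:
    """Apply role-based access control, then forward a sanitized prompt."""
    if use_case not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"role {role!r} may not use AI for {use_case!r}")
    return forward_to_llm(redact(prompt))

print(proxy_request("support", "text_drafting",
                    "Reply to jane.doe@example.com about case CUST-123456"))
```

In this sketch the model never sees the email address or customer ID, and a request from an unauthorized role is rejected before any data leaves the network.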
6. Develop a Comprehensive AI Policy
A thorough policy addressing the multitude of risks related to AI usage is essential. Companies can turn to Kaspersky’s guidelines for implementing secure AI systems for additional support in this area.
The survey was commissioned by Kaspersky and conducted by the Toluna research agency, which interviewed 2,800 employees and business owners across seven countries: Türkiye, South Africa, Kenya, Pakistan, Egypt, Saudi Arabia, and the UAE.