Understanding the true potential of artificial intelligence (AI) in cybersecurity requires a clear definition of terms. AI refers to machines that can imitate human intelligence, while machine learning (ML) involves training machines to perform tasks and recognize patterns from data. Unfortunately, some cybersecurity vendors capitalize on the AI buzzword without delivering its true capabilities, akin to passing fads or snake oil remedies.

The ultimate goal of AI in cybersecurity is often envisioned as a fully automated system operating without human intervention. In practice, however, AI-generated outputs should be treated as starting points for human decision-making rather than infallible truths. Combining human perspectives with AI is essential to keep its outcomes ethical and relevant.

Currently, the use cases for AI in cybersecurity remain somewhat limited. GitHub Copilot, for instance, excels at converting natural language prompts into code suggestions but lacks broader capabilities. Deep learning, with its specialized expertise, can be compared to the training of a neurosurgeon, while wide learning resembles a general practitioner treating a range of conditions. Copilot's narrow specialization suggests it belongs under ML rather than true AI, impressive though it is.

Although technologies like ChatGPT represent progress, achieving AI that is proficient at both wide and deep levels remains a distant goal. In security, applying AI effectively is a challenge. It can give security teams a foundation for refining their strategies, such as security controls and policy decisions, but translating AI concepts into practical, deployable solutions remains a hurdle.

One existing technique is integrating AI into security operations, particularly for managing repetitive tasks. AI can filter out noise, surface priority alerts, and capture data to spot anomalies.
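The noise-filtering and anomaly-spotting step described above can be sketched in a few lines. Everything here is invented for illustration: the alert fields, the severity threshold, and the baseline of failed-login counts are assumptions, not the behavior of any particular product.

```python
from statistics import mean, stdev

# Hypothetical baseline of failed-login counts per hour observed during
# normal operations (numbers invented for illustration).
BASELINE = [3, 4, 5, 4, 3, 5, 4, 6, 3, 4]

def is_anomalous(count, history, z_threshold=3.0):
    """Flag a count that deviates strongly from the historical baseline."""
    mu, sigma = mean(history), stdev(history)
    return sigma > 0 and (count - mu) / sigma > z_threshold

def triage(alerts, history, min_severity=2):
    """Drop low-severity noise, then mark statistical outliers for review."""
    kept = [a for a in alerts if a["severity"] >= min_severity]
    for a in kept:
        a["anomalous"] = is_anomalous(a["failed_logins"], history)
    return kept

# Invented alert records; a real feed would come from a SIEM or log pipeline.
alerts = [
    {"source": "10.0.0.5", "severity": 1, "failed_logins": 4},   # noise, dropped
    {"source": "10.0.0.6", "severity": 3, "failed_logins": 3},   # kept, normal
    {"source": "10.0.0.7", "severity": 4, "failed_logins": 97},  # kept, outlier
]
result = triage(alerts, BASELINE)
```

Note that this is exactly the rules-plus-statistics approach the next section cautions against labeling "true AI": the thresholds are hand-set, and the system only knows what its baseline tells it.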
Established vendors already offer similar capabilities. Caution is needed, though, to avoid mislabeling ML as true AI: a system in which rules trigger alerts is ML, whereas true AI would identify genuine anomalies autonomously.

Processing vast quantities of security logs poses a challenge: organizations must determine whether those logs indicate malicious activity and what action to take. AI chatbots and large language models (LLMs) can summarize large datasets, flag areas of concern, and help security analysts digest and respond to information quickly. In this role they act as a human-machine interface, making it easier to interact with a range of security products.

Another promising application of AI is attack surface management, where technologies detect, monitor, and oversee all internet-connected devices and systems for potential attack vectors. Given the ever-changing nature of attack surfaces, including the information we share as individuals, attack surface management is a valuable addition. It is no silver bullet, but it adds another layer of defense: promptly identifying weaknesses and remediating them via infrastructure-as-code can significantly reduce organizational risk.

In cybersecurity there is no perfect solution, but AI and ML promise faster, more cost-effective, and more efficient operations. Relying on AI in isolation, however, is not prudent. Its true power lies in integration with human expertise, allowing for mutual learning and collaboration, much like a critical friend or colleague.

As the cyber threat landscape evolves, organizations must harness the potential of AI while recognizing its limitations. By combining human intelligence with AI capabilities, the cybersecurity industry can move toward more effective and proactive defense strategies, ensuring a safer digital environment for all.
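As a closing illustration, the inventory-reconciliation step at the heart of attack surface management can be sketched as a simple set comparison. The hostnames and the hard-coded "discovered" set are assumptions for the example; in practice the discovered set would come from an external scanner or attack surface management service.

```python
# Hypothetical inventories (hostnames invented for illustration).
APPROVED = {"www.example.com", "mail.example.com", "vpn.example.com"}
DISCOVERED = {
    "www.example.com",
    "mail.example.com",
    "staging.example.com",      # spun up outside change control
    "old-jenkins.example.com",  # forgotten internet-facing host
}

def surface_drift(approved, discovered):
    """Split scan results into unmanaged exposure and unseen assets."""
    unknown = discovered - approved   # internet-facing but not inventoried
    missing = approved - discovered   # inventoried but not seen by the scan
    return unknown, missing

unknown, missing = surface_drift(APPROVED, DISCOVERED)
```

Each host in `unknown` is a candidate weakness to remediate promptly, for example by tearing it down or bringing it under infrastructure-as-code management so the fix is repeatable.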