As Generative AI fuels a surge in phishing, deepfakes and adversarial malware, Palo Alto Networks’ Chief Security Officer for EMEA and LATAM, Haider Pasha, shares how CISOs can stay ahead with the right tools, strategies and mindset.

In November 2022, AI broke out of the lab and into the mainstream. What was once limited to coders and researchers became accessible to anyone with a browser. Almost instantly, Generative AI unleashed a wave of innovation—and exploitation.
By mid-2023, WormGPT surfaced: a Generative AI tool designed for cybercriminals. Trained on hacking data and stripped of ethical safeguards, it was followed by FraudGPT, marketed on the Dark Web as an all-in-one toolkit for phishing, malware and identity fraud.
These tools can now craft convincing phishing emails, generate undetectable malware and guide users through bypassing Two-Factor Authentication—all for under US$100 per month.
No coding skills. No broken English. Just AI-enabled cybercrime, faster, cheaper and at scale.
Faced with this escalating threat, the role of defenders is undergoing radical transformation. Haider Pasha, Chief Security Officer for EMEA and LATAM at Palo Alto Networks, believes the only way forward is through strategic consolidation, automation and a fundamental shift in how cybersecurity is understood.
“This is no longer a tools issue; it’s a mindset issue,” Pasha said in a recent conversation as part of the CXO Vision Series. He went on to discuss what AI means for both attackers and defenders: “Cybersecurity can’t be managed with 80 siloed tools. Defenders need unified, AI-powered platforms that think and act faster than the threats they’re facing.”
He explained that most people believe AI benefits attackers more than defenders, but he disagrees: in his view, defenders can come out ahead if they change how they approach security.
AI is accelerating the attacker’s playbook
From phishing to deepfakes, AI is proving to be a multiplier for threat actors—enhancing speed, scale and sophistication in equal measure.
“On the attacker side, they’re using AI,” Pasha explained. “Take phishing: a perfectly crafted email dramatically increases success rates. Now, attackers can scale those campaigns to thousands, quickly and efficiently.”
Pasha also highlighted deepfake threats, citing the growing concern surrounding video and voice impersonation. “We’ve seen this already—voice deepfakes being used to impersonate executives. People are asking, ‘Can we even stop these attacks?’ The answer is yes, but we need the right capabilities and mindset.”
Another key concern is AI-enhanced social engineering, where attackers aren’t just sending emails—they’re using cloned voices or audio prompts to manipulate victims via phone calls. “It’s not just deepfake anymore. It’s real-time, interactive deception. You get a call, you hear a voice you recognise—and it’s not who you think it is.”
Adversarial AI: Bypassing detection and poisoning models
Among the more advanced threats is adversarial AI: the use of AI to bypass or mislead existing detection systems.
“If you use AI the right—or wrong—way, you can evade AI-based security controls,” said Pasha. “Attackers can quickly analyse vulnerabilities and develop exploits designed to slip past even AI defences.”
He also highlighted the risk of automated, adaptive malware, explaining that attackers can now generate unique variants for each target—what he calls ‘100 different zero-day malware for 100 victims’.
“This level of personalisation wasn’t feasible before. Now it’s possible at speed—and that’s the game-changer.”
Why platformisation is the only way forward
For defenders, the traditional model—managing dozens of disparate security tools—is no longer sustainable. Pasha cited a recent joint study revealing that mid-to-large organisations run, on average, 83 security tools across 29 vendors.
“You can’t plug every hole with a different product,” he said. “You need to consolidate, integrate and move towards a platform-based approach—one that is natively intelligent, AI-powered and outcome-driven.”
Pasha outlines what this looks like in practice: not just stitching tools together but embedding Machine Learning and GenAI into a unified system that can analyse, prevent and respond in real time. “It’s about autonomous cybersecurity—real-time decisions, not reactionary workflows. The AI must be part of the platform, not an add-on.”
He references customers using Palo Alto Networks’ XDR and XSIAM platforms, particularly a major healthcare provider that reduced its mean time to detect and respond from days to just 14 minutes while drastically cutting operational overhead.
SOC modernisation and the skills gap
The challenge is not only technological but human. Security teams are overwhelmed, short-staffed and often fighting yesterday’s battles.
“Most traditional SOCs are still working through alerts manually, across too many tools,” said Pasha. “That doesn’t work when threats are automated.”
The key, he says, is to use AI not just for detection but across the full incident lifecycle—from proactive threat hunting to real-time threat response.
“Our customers using XSIAM have automated this entire process. What used to take two-to-three days now takes minutes.”
And in doing so, they’ve managed to reduce their headcounts: not to eliminate roles, but to reassign personnel to higher-value tasks. “They no longer need 30 people to monitor alerts; they can operate effectively with 10 to 12,” said Pasha.
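Pasha does not walk through the mechanics on the record, but the pattern he describes, enrich an alert, score it, then act without waiting for a human, is straightforward to sketch. The Python snippet below is a rough illustration only: the alert fields, threat-intelligence set and thresholds are all invented for the example, and it is not a representation of how XSIAM works internally.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str
    severity: int                 # 1 (low) to 10 (critical)
    indicators: list = field(default_factory=list)

# Hypothetical threat-intel matches; a real pipeline would query live feeds.
KNOWN_BAD = {"203.0.113.7", "evil-updates.example"}

def triage(alert: Alert) -> str:
    """Enrich and score an alert, then pick the next action automatically."""
    score = alert.severity
    # Enrichment: weight indicators that appear in threat intelligence.
    score += 3 * sum(1 for ioc in alert.indicators if ioc in KNOWN_BAD)

    if score >= 10:
        return "contain"    # isolate the host or block the IOC, then notify
    if score >= 5:
        return "escalate"   # route to an analyst with enrichment attached
    return "auto-close"     # low-risk or duplicate; log and close

for a in (Alert("edr", 8, ["203.0.113.7"]), Alert("proxy", 3)):
    print(a.source, "->", triage(a))
```

The thresholds are the point where human judgement re-enters: a team that can reason about when an action should be automatic versus escalated is doing exactly the higher-value work Pasha says freed-up analysts move into.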
Shadow AI and agentic threats: What’s coming next
Pasha also warns of emerging risks such as Shadow AI—the unauthorised use of GenAI tools by employees—and agentic AI, where models begin taking autonomous actions on behalf of users.
“Do you block GenAI? Allow it? It’s not that simple,” he said. “You need visibility into what’s being used and governance around how it’s used.”
To address this, Palo Alto has developed AI Access Security, a solution that allows organisations to monitor which AI tools are in use and what data they interact with.
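The article does not describe how AI Access Security works under the hood, so the sketch below should be read only as an illustration of the visibility problem itself: surfacing who is sending traffic to known GenAI services. The domain list and proxy-log format are invented for the example.

```python
# Invented set of GenAI service domains; a real deployment would rely on a
# maintained catalogue rather than a hard-coded list.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

def find_shadow_ai(proxy_log_lines):
    """Yield (user, domain) pairs for outbound traffic to GenAI services."""
    for line in proxy_log_lines:
        user, domain, _bytes_sent = line.split(",")
        if domain in GENAI_DOMAINS:
            yield user, domain

sample_log = [
    "alice,chat.openai.com,48213",
    "bob,intranet.corp.example,1204",
]
for user, domain in find_shadow_ai(sample_log):
    print(f"shadow AI: {user} -> {domain}")
```

Visibility of this kind is only the first half of what Pasha describes; governance then has to decide what each flagged use actually means for the organisation.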
But the bigger question lies ahead: what happens when AI models start making decisions?
“Agentic AI is different from LLMs. It doesn’t just give you answers—it acts on your behalf,” Pasha said. “If that model is compromised, it can cause real damage very quickly.”
A new cybersecurity mandate: Policy, training and trust
Beyond technology, Pasha calls on CISOs to prioritise policy, training and trust.
“The first thing I tell CISOs: have a clear AI policy. What do you allow? What don’t you? And is everyone in the organisation aware of the boundaries?”
He also urges teams to build internal AI skills, including the ability to develop and apply Machine Learning models to real-world use cases. Palo Alto’s XSIAM platform, for instance, allows users to import their own ML models—tailored to their threat environment. “Security teams must understand AI—not just use it. They need to know how it works and where it applies.”
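As a toy example of the in-house skill he is pointing at (and not an example of XSIAM’s model-import API, which the article does not detail), here is a minimal unsupervised anomaly detector over fabricated login telemetry, using scikit-learn:

```python
from sklearn.ensemble import IsolationForest

# Fabricated features per account: [logins_per_hour, failed_ratio, distinct_ips]
normal_activity = [[4, 0.02, 1], [6, 0.05, 2], [5, 0.01, 1], [7, 0.03, 2]]
suspect_activity = [[120, 0.85, 40]]   # burst of failures from many sources

model = IsolationForest(contamination=0.1, random_state=0)
model.fit(normal_activity)

# predict() returns -1 for anomalies, 1 for inliers.
print(model.predict(suspect_activity))   # expected: [-1]
```

The specific algorithm is beside the point; a team that can build, evaluate and retrain something like this understands what its vendor’s AI is and is not doing.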
Finally, he emphasises the importance of AI ethics and transparency, advocating for the establishment of internal AI trust committees and clear accountability from vendors.
“There should be no black boxes. If a vendor can’t explain how their AI works or where it’s applied, that’s a red flag.”
Conclusion: A winnable arms race
Despite the risks, Pasha remains optimistic. AI may be enabling attackers, but it also offers defenders a way to radically improve speed, scale and resilience.
“We’re in an arms race, yes. But it’s one we can win—if we act strategically,” he said. “The technology is here. The platforms are here. What’s needed now is leadership and mindset.”
His advice to CISOs: think long-term; build for autonomy; and treat AI not just as a tool, but as a foundational layer of modern security architecture. “This isn’t about reacting to what attackers do next; it’s about being three steps ahead,” he concluded.