LastPass Targeted by Deepfake Call Impersonating CEO: Cybersecurity Alert
LastPass, the password manager giant with over 25 million users, was recently targeted by a deepfake call impersonating the company’s CEO, Karim Toubba. In a blog post, LastPass disclosed that one of its employees received a series of calls, texts, and at least one voicemail over WhatsApp from a threat actor who used an audio deepfake to pose as Toubba.
Because WhatsApp is not a channel the company typically uses for business communication, the employee grew suspicious and reported the incident to the security team. Fortunately, LastPass confirmed that the attempted attack had no impact on the company’s security.
This isn’t the first security challenge LastPass has faced. In 2022, the company admitted it had been hacked, with internal data exfiltrated and later used to access customer data.
Deepfake technology, which uses generative AI to create fabricated video or audio, is a growing concern worldwide. A study by University College London found that humans struggle to detect such hoaxes, posing significant security risks.
In a separate incident in February, fraudsters used deepfake technology to orchestrate a fake video conference call, deceiving a finance worker into transferring $25 million.
Acknowledging the threat posed by deepfakes, major tech companies like Google, Meta Platforms, Microsoft, and OpenAI have joined forces to prevent the spread of deceptive AI content during the 2024 global election cycle.
As the prevalence of deepfakes continues to rise, it is crucial for companies to remain vigilant and implement robust security measures to protect against such sophisticated attacks.