
Researchers Unveil Deep-Learning Model Capable of Deciphering Keystrokes from Audio Signals

A groundbreaking deep-learning model has demonstrated the remarkable ability to decipher keyboard input solely from audio signals, raising significant implications for data security and heralding a new class of acoustic cyberattacks. Developed by UK-based researchers Joshua Harrison, Ehsan Toreini, and Maryam Mehrnezhad (the work has often been misattributed to Cornell University, which operates the arXiv preprint server where the paper appeared), the model accurately predicts keystrokes by analyzing the unique acoustic signature each key on a keyboard produces.

In their recently published paper, “A Practical Deep Learning-Based Acoustic Side Channel Attack on Keyboards,” the team demonstrates how the model is trained to associate specific audio patterns with corresponding characters, allowing it to effectively ‘listen’ to typing and transcribe it with striking accuracy.

Unlike traditional cyberattacks that exploit software vulnerabilities or rely on phishing tactics, this class of attack leverages the physical characteristics of the keyboard itself, highlighting sound as a potential security vulnerability. The implications are far-reaching: the method could compromise passwords, conversations, messages, and other sensitive information.

According to the researchers, the model achieves an impressive accuracy of 93% when trained on Zoom recordings, a new record for that recording medium. Training involves exposing the model to multiple instances of each keystroke on a specific keyboard; to build a comprehensive dataset, the researchers pressed each of 36 keys on a MacBook Pro 25 times.

Despite its remarkable potential, the model does come with limitations. Changes in typing style, or the use of touch typing, can significantly reduce its accuracy, dropping it to a range of 40% to 64%.
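The pipeline described above — recording each key many times, converting the clips to spectral features, and training a classifier — can be sketched in miniature as follows. This is an illustrative toy, not the authors' implementation (the paper used mel-spectrograms and a deep CoAtNet classifier): the synthetic audio, the ten-key alphabet, and the nearest-template classifier here are all stand-in assumptions.

```python
import numpy as np

SR = 16_000          # sample rate in Hz (an assumption for this toy)
DUR = 0.05           # 50 ms per keystroke clip
KEYS = "abcdefghij"  # toy key set; the paper used 36 MacBook Pro keys

rng = np.random.default_rng(0)

def synth_keystroke(key: str) -> np.ndarray:
    """Synthetic stand-in for a recorded keystroke: a decaying, key-specific
    resonance plus background noise (real recordings would be used in practice)."""
    t = np.arange(int(SR * DUR)) / SR
    f = 1_000 + 200 * KEYS.index(key)  # hypothetical per-key resonant frequency
    return np.exp(-40 * t) * np.sin(2 * np.pi * f * t) + 0.05 * rng.standard_normal(t.size)

def features(clip: np.ndarray) -> np.ndarray:
    """Magnitude spectrum as the feature vector (the paper used mel-spectrograms)."""
    return np.abs(np.fft.rfft(clip))

# 25 presses per key, mirroring the paper's data-collection protocol,
# averaged into one spectral template per key.
templates = {k: np.mean([features(synth_keystroke(k)) for _ in range(25)], axis=0)
             for k in KEYS}

def predict(clip: np.ndarray) -> str:
    """Nearest-template classifier (a stand-in for the paper's deep model)."""
    f = features(clip)
    return min(templates, key=lambda k: np.linalg.norm(templates[k] - f))
```

A real attack would replace the synthetic clips with segmented recordings (e.g., from a phone or a Zoom call) and the nearest-template step with a trained network, but the structure — per-key examples in, spectral features, classification out — is the same.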
Additionally, countermeasures such as injecting noise into the audio signal can obfuscate the keystrokes and further diminish the model's accuracy. The researchers also emphasize that the model's effectiveness depends on the sound profile of the specific keyboard it was trained on; this dependence restricts the attack to keyboards with similar acoustic characteristics, limiting its scope for widespread malicious use.

As the digital landscape evolves, the arms race between cyberattacks and defensive measures continues to escalate. The emergence of AI-based acoustic side-channel attacks underscores the need for enhanced security measures, including noise-suppression tools such as NVIDIA Broadcast, which can help counteract this type of attack.

To fully understand the team's findings and methodology, readers can refer to the official research paper (PDF). As the boundary between AI and cybersecurity continues to blur, understanding such advances is crucial for individuals and organizations seeking to stay ahead of potential threats.
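As a concrete illustration of the noise countermeasure mentioned above, the sketch below mixes white noise into an audio clip at a chosen signal-to-noise ratio; lowering the SNR makes the masking noise louder relative to the keystrokes. The function name and target-SNR interface are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def add_masking_noise(audio: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Mix white noise into an audio clip at a target signal-to-noise ratio (dB).

    A lower snr_db means louder masking noise, which in principle degrades an
    eavesdropper's keystroke classifier. Illustrative sketch, not the paper's code.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    signal_power = np.mean(audio ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))  # solve SNR = Ps/Pn for Pn
    noise = rng.standard_normal(audio.size) * np.sqrt(noise_power)
    return audio + noise
```

For example, `add_masking_noise(clip, snr_db=0.0)` adds noise whose average power roughly equals the clip's own, burying the fine spectral detail a keystroke classifier relies on.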




Cyber Warriors Middle East