


A team of researchers has figured out how to train a deep learning model to steal users' data by listening to their keystrokes, highlighting the newest artificial intelligence threat to privacy.
The researchers released a paper on Aug. 3, via Cornell University's arXiv preprint server, describing how they paired a deep learning model with a recording of someone typing to determine what the user was typing with a startlingly high accuracy rate. The data-gathering method, known as an "acoustic side channel attack," gives hackers a new way to record users' personal information without their knowledge.
"When trained on keystrokes recorded by a nearby phone, the [software] achieved an accuracy of 95%, the highest accuracy seen without the use of a language model," the researchers wrote in the abstract of the paper.
The tests began with the researchers recording keystrokes and using the sounds to train an algorithm to recognize specific sounds and connect them to specific keys. Once trained on the recorded strokes, the model identified keystrokes with surprising accuracy. Accuracy improves further if the target is typing on a mechanical keyboard, which is significantly louder than the average laptop keyboard.
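The pipeline described above, in which recorded audio is chopped into keystroke snippets and a classifier learns to map each snippet's acoustic fingerprint to a key, can be sketched in miniature. This is an illustrative simplification, not the paper's actual method: the function names are hypothetical, and a nearest-centroid classifier on raw FFT magnitudes stands in for the researchers' deep learning model.

```python
import numpy as np

def segment_keystrokes(audio, frame=256, threshold=0.1):
    """Split a mono signal into fixed-size snippets whose mean amplitude
    exceeds a threshold -- a crude stand-in for keystroke detection."""
    energies = np.array([np.abs(audio[i:i + frame]).mean()
                         for i in range(0, len(audio) - frame, frame)])
    return [audio[k * frame:(k + 1) * frame]
            for k in np.where(energies > threshold)[0]]

def spectral_features(snippet):
    """Crude acoustic fingerprint: magnitudes of the first 16 FFT bins."""
    return np.abs(np.fft.rfft(snippet))[:16]

class NearestCentroidKeyClassifier:
    """Toy stand-in for a trained deep model: one average spectral
    fingerprint (centroid) per key; predict the nearest centroid."""

    def fit(self, snippets, labels):
        feats = np.array([spectral_features(s) for s in snippets])
        self.centroids = {k: feats[[l == k for l in labels]].mean(axis=0)
                          for k in sorted(set(labels))}
        return self

    def predict(self, snippet):
        f = spectral_features(snippet)
        return min(self.centroids,
                   key=lambda k: np.linalg.norm(f - self.centroids[k]))
```

In the real attack, the model is a deep network trained on spectrogram-style representations of each keystroke, which is what lets it separate keys whose sounds differ far more subtly than in this toy example.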
The research also included tests over online calling services such as Zoom and Skype. The algorithm achieved a 93% success rate when the typing was recorded over Zoom and a 91% accuracy rate over Skype.
The researchers recommended that anyone worried about acoustic side channel attacks change up their typing style or use randomized passwords to counter the recordings. They also noted that users could use software to reproduce keystroke sounds, play white noise, or apply audio filters to mask their typing.
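The randomized-password recommendation is straightforward to follow in practice. As a minimal sketch, Python's standard `secrets` module (designed for security-sensitive randomness, unlike `random`) can draw each character independently from a large alphabet; the function name and parameters here are illustrative choices, not from the paper.

```python
import secrets
import string

def random_password(length=16):
    """Build a password by drawing each character independently from a
    large alphabet using a cryptographically secure random source."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

random_password()  # output varies on every call
```

A fully random password avoids the predictable key sequences of dictionary words, which is part of what makes recovered keystrokes easy to assemble into usable text.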