Edward J. Delp, a professor at the School of Electrical and Computer Engineering at Purdue University, has been studying media forensics for 25 years. He says wider access to AI software and advanced editing tools means almost anyone can create fake content. And it doesn’t need to be sophisticated to be effective.

Edward J. Delp, Professor of Electrical and Computer Engineering, Purdue University

“People will buy into things that reinforce their current beliefs,” he says. “So they’ll believe even a poorly manipulated video if it’s about someone they don’t like or think of in a certain way.”

Delp’s team develops ways to detect fake videos. Here are some of his tips for spotting cheapfakes and deepfakes lurking around your social media feeds:

Focus on the natural details

“Look at the person in the video and see if their eyes are blinking in a weird way,” Delp says. The technology used to make deepfake videos has a hard time replicating natural blinking patterns and movements because of the way the systems are trained.

“Also, by watching their head motion, you may be able to see if there is unnatural movement.” This could be evidence that the video and audio are out of sync, or that there were time-based corrections made to parts of the video.

“If it’s a head and shoulders shot, look at their head and body and what’s behind them,” he says. Does the person or some aspect of the scene appear “pasted on”? It could be manipulated. Additionally: Does the lighting seem off? “Does it match, or is there a strange relationship between them?”

The Pelosi video “was played back at a slightly lower frame rate,” Delp explains. “The problem is, the audio track would also slow down.” So not only was her speech slow – the other sounds in the video were, too.

This is a more sophisticated strategy that Delp’s team uses, and it could be included in detection software employed in the future by, say, media outlets: examining the data embedded in a file. “This embedded data tells you more about the image or video, like when it was taken and what format it’s in,” Delp says. That data, a black box of sorts, could offer clues to any manipulation.
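As a rough illustration of the kind of embedded data Delp is describing, the sketch below reads a photo’s EXIF tags with the Pillow library and prints fields such as capture time and the software that last touched the file. This is a simplified, hypothetical example (the file name is made up), not the detection tooling Delp’s team uses; a missing or rewritten tag is only a hint, since metadata can be stripped or forged.

```python
# Illustrative sketch only: inspect a photo's embedded EXIF metadata with Pillow.
# Not Delp's detection software; treat anything found here as a clue, not proof.
# For video, `ffprobe -print_format json -show_format -show_streams file.mp4`
# dumps the analogous container metadata.
from PIL import Image, ExifTags  # pip install Pillow

def dump_exif(path: str) -> dict:
    """Return human-readable EXIF tags (capture time, camera, editing software, ...)."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    tags = dump_exif("suspect_photo.jpg")  # hypothetical file name
    for name in ("DateTime", "Software", "Make", "Model"):
        print(f"{name}: {tags.get(name, '<missing>')}")
```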
If you have a smartphone or have ever chatted with a virtual assistant on a call, you’ve probably already interacted with manipulated audio voices. But like fake video, fake audio has gotten very sophisticated via artificial intelligence – and it can be just as damaging.

Vijay Balasubramaniyan is the CEO and co-founder of Pindrop, a company that creates security solutions to protect against the damage fake audio can do. He says manipulated audio is the basis for a lot of scams that can ruin people’s lives and even compromise large companies. “Every year, we see about $470 million in fraud losses, including from wire transfer and phone scams.”

Vijay Balasubramaniyan, CEO and Cofounder, Pindrop Security

While some of these rely on basic tricks similar to cheapfake videos – manipulating pitch to sound like a different gender, or inserting suggestive background noises – Balasubramaniyan says running a few hours of someone’s voice through AI software can give you enough data to manipulate the voice into saying anything you want. And the audio can be so realistic, it’s difficult for the human ear to tell the difference.

When you’re listening for manipulated audio, here’s what to take note of:

Listen for a whine

“If you don’t have enough audio to fill out all of the different sounds of someone’s voice, the result tends to sound more whiny than humans are,” Balasubramaniyan says. The reason, he explains, is that AI programs find it hard to differentiate between general noise and speech in a recording.

Note the timing

“When you record audio, every second of audio you analyze gives between 8,000 to 40,000 data points for your voice,” Balasubramaniyan says. “The machine doesn’t know any different, so all of that noise is packaged in as part of the voice.”
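To put those numbers in context: the 8,000 to 40,000 “data points” per second correspond to common audio sample rates (roughly 8 kHz for telephone audio up to 44.1 kHz for CD-quality recordings), and every one of those samples captures background noise along with the voice. The sketch below is a rough illustration using Python’s standard wave module and a hypothetical file name; it simply reports how many samples each second of a WAV recording contains.

```python
# Rough illustration of the "data points per second" figure: every audio sample
# in a recording carries both the voice and whatever background noise is present.
import wave

def samples_per_second(path: str) -> int:
    """Return the sample rate of a WAV file, i.e. data points captured per second."""
    with wave.open(path, "rb") as wav:
        rate = wav.getframerate()          # e.g. 8000 for phone audio, 44100 for CD quality
        seconds = wav.getnframes() / rate  # total duration of the clip
        print(f"{path}: {rate} samples/second, {seconds:.1f} s of audio")
        return rate

if __name__ == "__main__":
    samples_per_second("voice_clip.wav")  # hypothetical recording
```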