Mice could detect deepfakes

There may be a new weapon in the war against misinformation: mice.

As part of the evolving battle against “deep fakes” – videos and audio featuring famous figures, created using machine learning, designed to look and sound genuine – researchers are turning to new methods in an attempt to get ahead of the increasingly sophisticated technology.

And it’s at the University of Oregon’s Institute of Neuroscience where one of the more outlandish ideas is being tested. A research team is working on training mice to understand irregularities within speech, a task the animals can do with remarkable accuracy.
It is hoped that eventually the research could be used to help sites such as Facebook and YouTube detect deep fakes before they are able to spread online – though, to be clear, the companies won’t need their own mice.

“While I think the idea of a room full of mice in real time detecting fake audio on YouTube is really adorable,” says Jonathan Saunders, one of the project’s researchers, “I don’t think that is practical for obvious reasons.

“The goal is to take the lessons we learn from the way that they do it, and then implement that in the computer.”

Mr Saunders and team trained their mice to understand a small set of phonemes, the sounds we make that distinguish one word from another.

Researchers hope mice may be able to hear irregularities the human ear might miss.

“We’ve taught mice to tell us the difference between a ‘buh’ and a ‘guh’ sound across a bunch of different contexts, surrounded by different vowels, so they know ‘boe’ and ‘bih’ and ‘bah’ – all these different fancy things that we take for granted.

“And because they can learn this really complex problem of categorising different speech sounds, we think that it should be possible to train the mice to detect fake and real speech.”

The mice were given a reward each time they correctly identified a speech sound, which they managed to do up to 80% of the time.

That’s not perfect, but coupled with existing methods of detecting deep fakes, it could provide extremely valuable input.
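As a rough illustration of that idea, an imperfect acoustic cue can still improve an automated detector when the two scores are combined. The sketch below is hypothetical – the weighting, function names, and numbers are illustrative assumptions, not part of the research.

```python
# Hypothetical sketch: combining an imperfect phoneme-level "mouse-style"
# cue (~80% accurate on its own) with an existing deep-fake detector.
# Both inputs are fake-probability scores in the range 0..1.

def combined_score(mouse_score: float, model_score: float,
                   mouse_weight: float = 0.4) -> float:
    """Weighted average of two detectors' fake-probability scores."""
    return mouse_weight * mouse_score + (1 - mouse_weight) * model_score

# A clip the acoustic cue flags strongly but the model is unsure about
# ends up above the review threshold:
score = combined_score(mouse_score=0.9, model_score=0.5)
print(score >= 0.5)
```

Even a noisy second signal can tip borderline clips over a review threshold, which is the sense in which an 80%-accurate cue remains useful alongside existing tools.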

Most of the deep fakes in circulation today are quite obviously not real, and are typically used to mock a subject rather than impersonate them. Case in point: a deep fake of “Mark Zuckerberg” talking openly about stealing user data.

But that’s not to say convincing impersonation won’t be a problem in the not-too-distant future – which is why it has been a significant topic of conversation at this year’s Black Hat and Def Con, the two hacking conferences that take place in Las Vegas each year.

Posted by Acumé

August 2019
