[Fox News] Researchers warn ‘humans cannot reliably detect’ audio deepfakes even when trained

AI-generated audio that mimics human voices can be so convincing that listeners fail to spot it about a quarter of the time, even when they have been trained to identify faked voices, a new study claims.

Researchers at University College London investigated how accurately humans can distinguish AI-generated audio from genuine human speech, according to a report in the science journal PLOS ONE. The study comes amid the rise of deepfakes: videos and images manipulated to appear as if they genuinely depict other people.

“Previous literature has highlighted deepfakes as one of the biggest security threats arising from progress in artificial intelligence due to their potential for misuse,” researchers wrote in their paper published this month. 

“However, studies investigating human detection capabilities are limited,” the researchers continued, explaining why they launched the endeavor to find just how realistic speech deepfakes are to human listeners.


The research team used a text-to-speech algorithm trained on two data sets to generate 50 deepfake speech samples. The researchers used both English and Mandarin speech “to understand if listeners used language-specific attributes to detect deepfakes.”
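To give a sense of how accessible synthetic speech generation has become, here is a minimal sketch that uses the open-source gTTS library as a stand-in; it is not the algorithm the researchers used, and the example sentences and file names are placeholders for illustration.

# Illustrative only: gTTS (a free text-to-speech library that calls
# Google's TTS service, so it needs an internet connection) stands in
# for the unspecified algorithm used in the study.
from gtts import gTTS  # pip install gTTS

samples = [
    ("en", "The meeting has been moved to three o'clock on Friday."),
    ("zh-CN", "会议已改到星期五下午三点。"),  # Mandarin, the study's second language
]

for i, (lang, text) in enumerate(samples):
    tts = gTTS(text=text, lang=lang)              # synthesize one sentence
    tts.save(f"synthetic_sample_{i}_{lang}.mp3")  # write the audio to disk

Purpose-built voice-cloning systems go further, imitating a specific person's voice rather than a generic one.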

The speech samples were then played for 529 participants, who were asked whether they believed each sample was a real human speaking or computer-generated.


Participants correctly identified deepfake speech only 73% of the time, and results improved only “slightly” after they were trained to recognize computer-generated audio, according to the study.

“Our findings confirm that humans are unable to reliably detect deepfake speech, whether or not they have received training to help them spot artificial content,” Kimberly Mai, an author of the study, said in a statement. 

“It’s also worth noting that the samples that we used in this study were created with algorithms that are relatively old, which raises the question whether humans would be less able to detect deepfake speech created using the most sophisticated technology available now and in the future.”

The study is considered to be the first of its kind to investigate how humans detect deepfake audio in a language other than English.


English- and Mandarin-speaking participants showed roughly the same rate of detection. English speakers said they relied on listening for breathing to help determine whether audio was real or computer-generated, while Mandarin speakers said they paid attention to a speaker’s cadence and word pacing to help correctly identify audio.

“Although there are some differences in the features that English and Mandarin speakers use to detect deepfakes, the two groups share many similarities. Therefore, the threat potential of speech deepfakes is consistent despite the language involved,” the researchers wrote.

The study comes as a “warning” that “humans cannot reliably detect speech deepfakes,” with the researchers highlighting that “adversaries are already using speech deepfakes to commit fraud” and that the technology will only become more convincing as AI advances.

“With generative artificial intelligence technology getting more sophisticated and many of these tools openly available, we’re on the verge of seeing numerous benefits as well as risks. It would be prudent for governments and organizations to develop strategies to deal with abuse of these tools, certainly, but we should also recognize the positive possibilities that are on the horizon,” study author and University College London computer science professor Lewis D. Griffin said in a statement published by the university.


Audio deepfakes have already been used repeatedly across the U.S. and Europe to carry out crimes.

The study pointed to a scam in 2019, for example, that left a U.K.-based energy firm roughly $243,000 in the red after a fraudster hopped on the phone with the firm’s CEO and pretended to be the boss of the organization’s Germany-based parent company.

The scammer was able to use AI technology to mimic the boss’ slight German accent and the “melody” of the man’s voice while demanding the CEO immediately transfer money to a bank account, the Wall Street Journal reported at the time.


Stateside, victims are sounding the alarm on phone scams that often target elderly Americans. The Federal Trade Commission warned last month that scammers are increasingly relying on voice cloning technology to convince unsuspecting victims to fork over money. The criminals can take a soundbite or video of a person that’s posted online, clone the voice and call the person’s loved ones while pretending to be in a dire situation and in need of fast money.

Many victims later tell police that the cloned voice sounded so similar to their loved one that they didn’t immediately suspect it was a scam.

Mai told Fox News Digital that the research shows that training people to spot AI-generated speech is unlikely to "improve detection capabilities, so we should focus on other approaches," pointing to a handful of other avenues to potentially mitigate risks associated with the tech.

“Crowdsourcing and aggregating responses as a fact-checking measure could be helpful for now. We also demonstrate even though humans are not reliable individually, detection performance increases when you aggregate responses (collect lots of decisions together and make a majority decision),” Mai explained. 

“In addition, efforts should focus on improving automated detectors by making them more robust to differences in test audio. In addition, organizations should prioritize implementing other strategies like regulations and policies.”
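Mai’s point about aggregation can be illustrated with a quick simulation: if each listener is right about 73% of the time (the per-listener accuracy reported in the study) and their mistakes are independent, a majority vote over many listeners is correct far more often. The independence assumption and the listener counts below are simplifications for illustration, not figures from the paper.

# A minimal sketch of why aggregating judgments helps: simulate listeners
# who are each right 73% of the time and take a majority vote per clip.
# Assumes independent errors, which real listeners won't fully satisfy.
import random

INDIVIDUAL_ACCURACY = 0.73   # per-listener accuracy reported in the study
TRIALS = 10_000              # simulated audio clips

def majority_vote_accuracy(n_listeners: int) -> float:
    correct = 0
    for _ in range(TRIALS):
        votes = sum(random.random() < INDIVIDUAL_ACCURACY for _ in range(n_listeners))
        if votes > n_listeners / 2:   # the majority classified this clip correctly
            correct += 1
    return correct / TRIALS

for n in (1, 5, 15, 51):
    print(f"{n:>2} listeners -> majority-vote accuracy ~ {majority_vote_accuracy(n):.2f}")

In practice listeners’ errors are correlated, so real-world gains are smaller than this idealized simulation suggests, but the direction of the effect is what the researchers describe.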


[Fox News] How to take photos in low light using your phone

Are you tired of taking blurry and underexposed photos at night?

Well, I’m here to help. With a few tips and settings on your phone’s camera app, you can fine-tune and improve your chances of capturing stunning low-light and night photos that will leave your friends and family in awe.


Let’s dive into the world of nighttime photography and discover the secrets to capturing the beauty of the night.

Use something to stabilize your phone. The most important thing to do when taking pictures at night is to keep the phone camera still. The best option is a mini or extendable tripod, but the next best thing is to find a stable object like a tree limb, picnic table or street sign to balance your phone against for added stability. This small effort to take any wobble out of your shot can make a huge difference in the quality of your photos.

Additionally, avoid zooming in as it will tend to make your image pixelated and grainy. 

1) Turn off the flash

As a general rule, the newer your phone, the better it should perform at night. That shouldn’t stop you from turning off the flash so that it doesn’t accidentally get triggered when snapping photos.

Once you open your iPhone or Android camera app, you’ll find the flash icon in the upper left-hand corner. Tap that icon before you take your photo to force the flash off.

2) Adjust ISO on iPhone

ISO controls how sensitive your camera’s sensor is to light; to adjust it on your phone, you’ll need a third-party camera app.

iPhone: 4.7 stars (at time of publishing) 

Or VSCO: Photo & Video Editor 

iPhone: 4.7 stars (at time of publishing)

Android: 3.4 stars (at time of publishing)

Adjust ISO on Android

Settings may vary depending on your Android phone’s manufacturer 
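For context on what that setting does: raising ISO amplifies the signal the sensor records, which brightens a dim scene but amplifies its noise as well. The short NumPy sketch below illustrates that trade-off on synthetic pixel values; the numbers are made up for illustration and do not come from any particular phone.

# Conceptual sketch of the ISO trade-off: higher gain brightens a dark
# scene but amplifies sensor noise by the same factor. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)

dim_scene = np.full((100, 100), 20.0)                  # true brightness of a dark scene (0-255 scale)
sensor_noise = rng.normal(0.0, 5.0, dim_scene.shape)   # random noise added by the sensor
raw = dim_scene + sensor_noise                         # what the sensor actually records

for gain in (1, 4, 8):                                 # treat gain as a stand-in for the ISO multiplier
    image = np.clip(raw * gain, 0, 255)                # amplified and clipped to the displayable range
    print(f"gain x{gain}: mean brightness ~ {image.mean():.0f}, "
          f"noise std ~ {(raw * gain).std():.1f}")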


3) Use Night Mode

If your phone camera has this feature, it will improve photos taken at night.

How to take a Night mode photo using your iPhone

For iPhone 11 and later models, Night mode automatically turns on when the camera detects a low-light environment.


Settings may vary depending on your Android phone’s manufacturer 

Night mode employs multi-frame processing to merge 30 images into a single, clearer photograph. By combining the additional light captured by your phone’s camera sensors across those frames, the final photo comes out noticeably brighter and cleaner overall.
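A simple way to see why merging frames helps is frame averaging: the random noise differs from frame to frame and largely cancels out when many frames are combined, while the scene itself stays put. The sketch below demonstrates this on synthetic data; real Night mode pipelines also align the frames and adjust tone, which this simplification skips.

# Why merging frames cleans up low-light shots: noise is random per frame,
# so averaging many frames cancels much of it while preserving the scene.
# Synthetic data; a simplified stand-in for the processing described above.
import numpy as np

rng = np.random.default_rng(1)
scene = np.full((100, 100), 30.0)                      # the true, dimly lit scene

def noisy_frame():
    return scene + rng.normal(0.0, 10.0, scene.shape)  # one noisy exposure

single = noisy_frame()
merged = np.mean([noisy_frame() for _ in range(30)], axis=0)  # merge 30 frames, as described above

print(f"single-frame noise std: {(single - scene).std():.1f}")
print(f"30-frame merge noise std: {(merged - scene).std():.1f}")  # roughly 10 / sqrt(30)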

For other Androids try:


I hope you found those tips and camera settings helpful. Remember, keeping your phone camera steady is key to capturing great nighttime shots, so using a mini tripod or finding a stable object to balance your phone against is crucial. 

And don’t forget to turn off the flash. Adjusting the ISO settings and utilizing features like Night Mode can also greatly improve your photos. 

So, grab your phone, experiment with these techniques, and let the beauty of the night come to life in your photos.

Have you tried any of these tips and settings for taking low-light and night photos with your iPhone or Android? Share your experiences and any additional tips you have by writing us at Cyberguy.com/Contact

For more of my security alerts, subscribe to my free CyberGuy Report Newsletter by heading to Cyberguy.com/Newsletter 

Copyright 2023 CyberGuy.com. All rights reserved.
