ARTIFICIAL intelligence isn't just watching and listening - it may also be speaking to you.
A new study has revealed that most people can't tell the difference between real human speech and that generated by a computer.
Podcastle, a podcast platform powered by artificial intelligence, surveyed 1,000 Americans to gauge public opinion on voice-cloning tools.
Shockingly, two in three Americans were unable to distinguish between an AI-generated voice and a human speaker.
Despite being seen as having lower tech literacy, baby boomers were the most effective at picking out the human voice.
Meanwhile, only 31% of millennials and 35% of Gen Z correctly guessed that the clip they listened to was human and not computer-generated.
The same clips used in the survey were shared exclusively with The U.S. Sun - take a listen yourself and see if you can beat the odds.
The study also uncovered varying levels of optimism about the future of artificial intelligence.
When it came to AI voice technology, 26% of Americans are "cautious," while 24% are "supportive" - an exceedingly narrow margin.
A whopping 81% of Americans said they were "concerned" about the future implications of AI voice cloning.
And they have a right to be worried. Just this year, cybersecurity experts have witnessed an explosion in the use of AI tools by malicious actors.
This includes an increase in the number of scams using voice "deepfakes" that mimic a real person's speech with the help of AI technology.
"Vishing," a portmanteau of "voice" and "phishing," is a type of cyberattack where scammers trick people into divulging sensitive information over the phone.
Thanks to accessible voice-cloning technology, cybercriminals can pose as a victim's friends and relatives with ease.
All they need is audio data, which may be readily available on social media.
The rest is simple - the data is fed into an AI program that generates words or entire sentences.
These deepfakes can replicate tonality, cadence, and other unique features of a person's voice.
The Federal Trade Commission sounded the alarm on voice cloning attacks early last year.
"All (a scammer) needs is a short audio clip of your family member's voice — which he could get from content posted online — and a voice-cloning program," the agency wrote.
And the risk goes beyond scams. The technology could pose a national security risk as the presidential election looms.
In January, a robocall using President Joe Biden's voice urged Democrats not to vote in New Hampshire primaries.
The man behind the plot was later indicted on charges of voter suppression and impersonation of a candidate.
And threats are only expected to increase as the technology grows more advanced.
Microsoft has been hard at work on its own AI voice tools.
Its most recent text-to-speech generator, VALL-E 2, can recreate a human voice based on just a few seconds of audio.
The tech behemoth says VALL-E 2 is the first of its kind to achieve "human parity," meaning it meets or surpasses benchmarks for human likeness.
However, VALL-E 2 is so convincing that Microsoft is barring it from public release, citing "potential risks in the misuse of the model."
The company hopes the technology will someday find a niche where it isn't exploited, such as in accessibility tools.
What are the arguments against AI?
Artificial intelligence is a highly contested issue, and it seems everyone has a stance on it. Here are some common arguments against it:
Loss of jobs - Some industry experts argue that AI will create new niches in the job market, and as some roles are eliminated, others will appear. However, many artists and writers insist the issue is an ethical one, as generative AI tools are being trained on their work and wouldn't function otherwise.
Ethics - When AI is trained on a dataset, much of the content is taken from the Internet. This is almost always, if not exclusively, done without notifying the people whose work is being taken.
Privacy - Content from personal social media accounts may be fed to language models to train them. Concerns have cropped up as Meta unveils its AI assistants across platforms like Facebook and Instagram. There have been legal challenges: in 2016, the EU passed legislation to protect personal data, and similar laws are in the works in the United States.
Misinformation - As AI tools pull information from the Internet, they may take things out of context or suffer hallucinations that produce nonsensical answers. Tools like Copilot on Bing and Google's generative AI in search are always at risk of getting things wrong. Some critics argue this could have lethal effects - such as AI providing the wrong health information.
Podcastle aims to reshape the conversation around AI voice technology and find helpful applications amid the doom and gloom.
"From AI’s benefits to its perceived downsides and the need for widespread education, we at Podcastle want to champion the safe use of AI voice technology now and in the future," the company said.