
Radio Journalist Regains Voice With Help From AI

"This has saved my job and saved my family from a terrible financial unknown," political radio journalist Jamie Dupree was quoted as saying to the BBC.

IANS

Updated: June 18, 2018, 5:26 PM IST
Artificial Intelligence. Representative Image. (Image: AFP Relaxnews)
A US radio journalist who had lost his voice two years ago due to a rare neurological condition has regained the ability to speak, thanks to artificial intelligence (AI), the media reported.

Jamie Dupree, 54, a political radio journalist with Cox Media Group, got a new voice created by training a neural network to predict how he would talk, using samples from his old voice recordings, the BBC reported.

With his new voice, Dupree can now write a script and then use a free text-to-speech software programme called Balabolka on his laptop to turn it into an audio recording.

If a word or turn of phrase does not sound quite right in the recording, he can slow down certain consonants or vowels, swap in a word that does work, or change the pitch. With these adjustments, he can have a full radio story ready to go live in just seven minutes.
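Balabolka itself is a Windows GUI application, but the kinds of adjustments described, slowing a sound down, swapping a word, changing the pitch, correspond to the prosody controls that many text-to-speech engines accept via SSML markup. As a purely illustrative sketch (not Dupree's actual workflow, and the function names here are hypothetical), the tweaks could be expressed like this:

```python
# Illustrative sketch only: expressing per-word tweaks (rate, pitch, word
# substitutions) as SSML <prosody> markup, a standard control format that
# many TTS engines understand. This is NOT Balabolka's interface.

def to_ssml(script, tweaks=None, swaps=None):
    """Wrap a script in SSML, applying optional per-word adjustments.

    tweaks: {word: {"rate": "slow", "pitch": "-10%"}} -- prosody attributes
    swaps:  {word: replacement_word} -- substitute a word that "works" better
    """
    tweaks = tweaks or {}
    swaps = swaps or {}
    parts = []
    for word in script.split():
        word = swaps.get(word, word)          # swap in a better-sounding word
        if word in tweaks:                    # slow down / re-pitch this word
            attrs = " ".join(f'{k}="{v}"' for k, v in sorted(tweaks[word].items()))
            parts.append(f"<prosody {attrs}>{word}</prosody>")
        else:
            parts.append(word)
    return "<speak>" + " ".join(parts) + "</speak>"

print(to_ssml("Congress passed the bill today",
              tweaks={"bill": {"rate": "slow"}},
              swaps={"passed": "approved"}))
# -> <speak>Congress approved the <prosody rate="slow">bill</prosody> today</speak>
```

The resulting markup would then be fed to a TTS engine, which renders the tagged words at the requested rate or pitch.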

"This has saved my job and saved my family from a terrible financial unknown," Dupree was quoted as saying to the BBC.

In 2016, Dupree was diagnosed with tongue protrusion dystonia, a rare neurological condition in which the tongue pushes forward out of the mouth and the throat tightens whenever he tries to speak, making it impossible for him to say more than two or three words at a time.

Thanks to the new computer-generated voice, created for him by Scottish technology company CereProc, Dupree is set to come back on air, the report said.

The AI system slices each word a speaker reads out into 100 tiny pieces, and repeats this with many common words until it has learned how basic phonetics sound in that person's voice and holds an ordered sequence of the pieces in each word.

Then, the neural network can create its own sounds and predict what the person would sound like if they were to say a series of words in conversation.
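The slice-and-reassemble idea the article describes is, at heart, concatenative synthesis guided by a learned model. A toy sketch of just the inventory-and-reassembly step, using letter pairs as stand-ins for the tiny audio slices (a real system cuts acoustic frames and selects units with a neural network, and CereProc's actual pipeline is not public):

```python
# Toy sketch of the concatenative idea: letter bigrams stand in for the
# tiny audio slices the article describes. Real systems slice acoustic
# frames and use a neural model to choose and join units; this only
# illustrates "build an inventory of pieces, then reassemble new words".

def build_inventory(recorded_words):
    """Collect every overlapping 2-letter 'slice' seen in the recordings."""
    inventory = set()
    for word in recorded_words:
        for i in range(len(word) - 1):
            inventory.add(word[i:i + 2])
    return inventory

def can_synthesise(word, inventory):
    """A new word is speakable only if every slice it needs was recorded."""
    return all(word[i:i + 2] in inventory for i in range(len(word) - 1))

inventory = build_inventory(["vote", "note", "veteran", "today"])
print(can_synthesise("veto", inventory))   # needs "ve", "et", "to"
print(can_synthesise("snot", inventory))   # needs "sn", which was never recorded
```

The more common words are recorded, the larger the inventory grows, which is why the system needs "lots of common words" before it can predict arbitrary new sentences in the speaker's voice.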

"AI techniques work quite well on small constrained problems, and learning to model speech is something deep neural nets can do really well," Chris Pidcock, CereProc's chief technical officer and co-founder, told the BBC.


