Google's new breakthrough in voice interaction can simulate human speech more realistically
Published: 2024-03-25 04:31:02

Summary: Google's DeepMind has revealed a new speech synthesis generator that will be used to help computer voices, like Siri and Cortana, sound more human.

Google's DeepMind has revealed a new speech synthesis generator that will be used to help computer voices, like Siri and Cortana, sound more human. Named WaveNet, the model works with raw audio waveforms to make our robotic assistants sound, err, less robotic.

WaveNet doesn't control what the computer is saying; instead it uses AI to make it sound more like a person, adding breathing noises, emotion and different emphasis into sentences. Generating speech with computers is called text-to-speech (TTS), and up until now it has worked by piecing together short pre-recorded syllables and sound fragments to form words.
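To make that concrete, here is a minimal Python sketch of the piecing-together approach just described. The fragment names and placeholder waveforms are made up for illustration and are not taken from any real TTS system.

```python
import numpy as np

# Toy sketch of concatenative TTS: short pre-recorded fragments are looked up
# in a database and glued together. The fragment names and random waveforms
# below are hypothetical placeholders, not real recordings.
SAMPLE_RATE = 16_000  # samples per second (assumed, for illustration)
fragments = {
    "hel": np.random.randn(SAMPLE_RATE // 10),
    "lo": np.random.randn(SAMPLE_RATE // 10),
    "world": np.random.randn(SAMPLE_RATE // 5),
}

def concatenative_tts(units):
    """Join pre-recorded fragments end to end; prosody is fixed at record time."""
    return np.concatenate([fragments[u] for u in units])

waveform = concatenative_tts(["hel", "lo", "world"])
print(len(waveform) / SAMPLE_RATE, "seconds of audio")
```

Because each fragment is reused exactly as recorded, the only way to change how a word sounds is to record it again, which is the limitation the next paragraph describes.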

As the words are taken from a database of speech fragments, it's very difficult to modify the voice, so adding things like intonation and emphasis is almost impossible. This is why robotic voices often sound monotonous and decidedly different from humans. WaveNet, however, overcomes this problem by using its neural network models to build an audio signal from the ground up, one sample at a time. During training, the DeepMind team gave WaveNet real waveforms recorded from human speakers to learn from.
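The "one sample at a time" idea can be sketched as a toy autoregressive loop like the one below. It is only an illustrative stand-in under the assumptions noted in the comments, not DeepMind's actual WaveNet model, which uses a deep network of dilated causal convolutions over quantized audio.

```python
import numpy as np

# Toy sketch of sample-by-sample (autoregressive) audio generation: each new
# sample is predicted from the samples produced so far. The tiny linear model
# below is a hypothetical stand-in, not DeepMind's actual WaveNet network.
rng = np.random.default_rng(0)
RECEPTIVE_FIELD = 256                                   # past samples the model sees
weights = rng.normal(scale=0.01, size=RECEPTIVE_FIELD)  # placeholder parameters

def predict_next_sample(history):
    """Predict one audio sample from the recent history."""
    window = np.asarray(history[-RECEPTIVE_FIELD:])
    window = np.pad(window, (RECEPTIVE_FIELD - len(window), 0))
    return float(np.tanh(window @ weights))

def generate(n_samples):
    audio = [0.0]                                   # start from silence
    for _ in range(n_samples - 1):
        audio.append(predict_next_sample(audio))    # one sample at a time
    return np.asarray(audio)

waveform = generate(16_000)                         # about 1 second at 16 kHz
print(waveform.shape)
```

Because each prediction conditions on everything generated so far, the model can shape breathing, emphasis and intonation directly in the waveform rather than being locked to pre-recorded fragments.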

Using a type of AI called a neural network, the program then learns from these, much in the same way a human brain does. The result was that WaveNet learned the characteristics of different voices, could make non-speech sounds, such as breathing and mouth movements, and could say the same thing in different voices. Despite the exciting advancement, the system still requires a huge amount of processing power, which means it will be a while before the technology appears in the likes of Siri.
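A rough back-of-the-envelope calculation shows why the processing cost is so high: at a typical 16 kHz sample rate (an assumed figure, not one given in the article), every second of generated speech requires 16,000 sequential model evaluations.

```python
# Back-of-the-envelope arithmetic behind the processing-power point: generating
# raw audio one sample at a time means one model evaluation per sample.
# The 16 kHz sample rate is an assumed, typical figure, not from the article.
SAMPLE_RATE = 16_000
seconds_of_speech = 10
print(SAMPLE_RATE * seconds_of_speech, "sequential predictions for 10 s of audio")
```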

Google's machine learning unit DeepMind is based in the UK and previously made headlines when its computer beat the Go world champion earlier this year.

