
Amazon Nova Sonic Audio Generation AI Model Released, Can Process Speech in Real-Time

Amazon introduced a new artificial intelligence (AI) model in its flagship Nova family of models on Tuesday. Dubbed Amazon Nova Sonic, it is a voice generation model capable of producing human-like speech. However, it is not a text-to-speech (TTS) tool; instead, it can process voice input in real time and respond to it. The Seattle-based tech giant says developers can use the model to build conversational AI chatbots and similar tools. Notably, the Amazon Nova Sonic AI model also supports function calling and tool use, making it suitable for agentic application development as well.
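As a rough illustration of what function calling can look like on Bedrock, the sketch below defines a single tool in the toolSpec format used by Bedrock's Converse API. The exact event shape Nova Sonic's streaming interface expects may differ, and the weather tool itself is hypothetical.

# Hypothetical tool definition in Bedrock's Converse-style toolSpec format.
# Nova Sonic's bidirectional streaming interface may expect a different
# structure; this only illustrates the general idea of tool use.
get_weather_tool = {
    "toolSpec": {
        "name": "get_weather",  # hypothetical tool name
        "description": "Look up the current weather for a city.",
        "inputSchema": {
            "json": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            }
        },
    }
}

# A voice agent would pass a list of such specs to the model. When the user
# asks about the weather, the model emits a tool-use request containing the
# arguments it extracted from speech, the application runs the function, and
# the result is fed back so the model can answer out loud.
tool_config = {"tools": [get_weather_tool]}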

Amazon Nova Sonic Is Available As an API

In a blog post, the tech giant announced the release of Amazon Nova Sonic. The company said traditional approaches to voice-enabled applications rely on a complex pipeline of multiple models, such as speech recognition (speech-to-text conversion), data processing, and TTS models. This often leads to increased latency and a failure to preserve linguistic context, the post added.

Amazon said its approach with the Nova Sonic model was to unify the speech understanding and speech generation components. The AI model is said to process data and generate speech in real time, creating a conversation-like experience. This unified system also allows the model to better understand the pace and timbre of input speech in order to contextualise the user's intent.

Additionally, the AI model can understand different speaking styles as well as distinguish between masculine- and feminine-sounding voices across different accents. It can also understand when a user misspeaks, mumbles, or pauses while speaking. Amazon says the model can pick up speech even in noisy settings.

For response generation, the company claims the model can be more expressive and human-like, adjusting its response style to match the context of the conversation. Currently, the AI model supports only the English language; Amazon said support for more languages will be added soon. The model supports a context window of 32,000 tokens for audio, with an additional window to handle longer conversations, and has a default session limit of eight minutes.

To use the Nova Sonic model, developers can head to Amazon Bedrock and find it under the model access option. It can also be accessed via a bidirectional streaming application programming interface (API) that can both process audio input and generate output.
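As a minimal sketch of the first step, the snippet below uses boto3's standard Bedrock control-plane client to check whether a Nova Sonic model is visible in the account and region. The region and the "sonic" substring match are assumptions; the real-time speech-to-speech session itself would run over the bidirectional streaming API mentioned above rather than this call.

# Minimal sketch: confirm a Nova Sonic model ID is listed in Amazon Bedrock
# for this account/region before requesting model access in the console.
# Assumptions: boto3 is installed and AWS credentials are configured; the
# region and the "sonic" substring check are illustrative, not official.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# list_foundation_models is a standard Bedrock control-plane call; filtering
# by provider narrows the output to Amazon's own foundation models.
response = bedrock.list_foundation_models(byProvider="Amazon")

for model in response["modelSummaries"]:
    if "sonic" in model["modelId"].lower():
        print(model["modelId"],
              model.get("inputModalities"),
              model.get("outputModalities"))

Once access is granted in the Bedrock console, the conversational flow runs over the bidirectional streaming API, for which AWS provides streaming-capable SDK clients.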
