Monday, December 23, 2024

Sparse Artificial Intelligence MCU from Femtosense and ABOV Semiconductor Delivers Low-Cost, Low-Power AI-based Voice Processing to the Edge


Femtosense, in partnership with ABOV Semiconductor, launched the AI-ADAM-100, an artificial intelligence microcontroller unit (AI MCU) built on sparse AI technology to enable on-device AI features such as voice-based control in home appliances and other products. On-device AI provides immediate responses with no cloud round-trip latency, along with low power consumption, security, operational stability, and lower cost than GPU- or cloud-based AI.

The AI-ADAM-100 integrates the Femtosense Sparse Processing Unit 001 (SPU-001), a neural processing unit (NPU), and an ABOV Semiconductor MCU to provide deep learning-powered AI voice processing and voice-cleanup capabilities on-device at the edge. With language processing, appliances can implement “say what you mean” voice interfaces that allow users to speak naturally and express their intent freely in multiple ways. For example, “Turn the lights off”, “Turn off the lights,” and “Lights off” all convey the same intent and are understood as such. Voice/audio cleanup processes data before it is sent to the cloud, improving reliability and accuracy while reducing the volume of data sent, thus reducing backend infrastructure costs.
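To illustrate the "say what you mean" idea in the simplest possible terms, the following Python sketch maps several phrasings of a command to a single intent. It is a toy keyword matcher for illustration only; the intent names and vocabulary are assumptions, and the actual on-device system uses a deep learning language model, not a lookup.

```python
# Toy sketch of intent mapping (hypothetical; the real device uses a neural
# language model, not keyword matching).

def normalize(utterance: str) -> set[str]:
    """Lowercase and split an utterance into a bag of words."""
    return set(utterance.lower().replace(",", "").split())

# Hypothetical intent vocabulary: an intent matches when all of its
# keywords appear in the utterance, regardless of word order.
INTENTS = {
    "lights_off": {"lights", "off"},
    "lights_on": {"lights", "on"},
}

def classify(utterance: str) -> str | None:
    words = normalize(utterance)
    for intent, keywords in INTENTS.items():
        if keywords <= words:  # all keywords present
            return intent
    return None

# "Turn the lights off", "Turn off the lights," and "Lights off"
# all resolve to the same intent.
for phrase in ["Turn the lights off", "Turn off the lights", "Lights off"]:
    assert classify(phrase) == "lights_off"
```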

“With sparsity integrated throughout the AI development stack, the AI-ADAM-100 is the first device on the market to fully unlock the advantages of sparse AI,” said Sam Fok, CEO, Femtosense. “Our sparsity-enabling technology allows our customers to deliver compact, efficient AI processing to a growing variety of markets and products, including home appliances as well as small form factor, battery-operated devices like high-fidelity hearing aids, industrial headsets, and consumer earbuds.”

On top of the AI-ADAM-100, Femtosense provides a highly customizable selection of AI software products built on the device, from full turnkey solutions to tool-driven applications and fully custom implementations that use a manufacturer's own AI models, whether dense or sparse.

The Sparse AI Advantage

Sparse AI reduces the cost of AI inferencing by zeroing out irrelevant portions of an algorithm and allocating hardware memory and compute resources only to the remaining nonzero, relevant portions. A system that stores and computes only nonzero weights can deliver up to a 10x improvement in speed, efficiency, and memory footprint. Similarly, a system that computes only when a neuron's output is nonzero can deliver up to another 10x increase in speed and efficiency. These gains multiply. Consequently, sparse AI enables manufacturers to implement deep learning-based AI models up to 100x more powerful and complex than previous MCUs could support, without adversely impacting speed, efficiency, memory footprint, or performance.
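The arithmetic behind those claims can be made concrete with a rough Python sketch of a sparse matrix-vector product that stores only nonzero weights and skips zero activations. The sparsity levels, data layout, and code below are illustrative assumptions only and do not reflect the SPU-001's actual hardware or formats.

```python
import numpy as np

# Rough sketch of why sparsity gains multiply: store only nonzero weights,
# and do work only where the activation is also nonzero.
# Illustrative only; not the SPU-001's actual data layout or arithmetic.

def to_csr(dense):
    """Keep only nonzero weights, row by row (compressed sparse layout)."""
    return [[(j, w) for j, w in enumerate(row) if w != 0.0] for row in dense]

def sparse_matvec(csr_rows, activations):
    """Multiply-accumulate only where both weight and activation are nonzero."""
    out = np.zeros(len(csr_rows))
    for i, row in enumerate(csr_rows):
        for j, w in row:
            a = activations[j]
            if a != 0.0:  # skip work when the neuron's output is zero
                out[i] += w * a
    return out

rng = np.random.default_rng(0)
# ~90% of weights and ~90% of activations zeroed out (10x weight sparsity
# times 10x activation sparsity).
W = rng.standard_normal((64, 64)) * (rng.random((64, 64)) < 0.1)
x = rng.standard_normal(64) * (rng.random(64) < 0.1)

assert np.allclose(W @ x, sparse_matvec(to_csr(W), x))
```

With roughly 90% of weights and 90% of activations zeroed out, only about 1% of the original multiply-accumulate operations remain, which is the source of the "10x times 10x" multiplication described above.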


While many edge applications can benefit from AI, they often lack the price or power budget for a GPU or cloud connectivity, or the volume to justify a dedicated silicon solution. This has limited the adoption of edge AI. With the introduction of the AI-ADAM-100, manufacturers can implement voice language interfaces at the edge even for devices that are not connected to the cloud.

Many existing AI systems process continuously and consume power even when the task is easy, such as when the environment is quiet. Purely cloud-based voice processing requires continuous throughput, leading to high infrastructure costs. The AI-ADAM-100 resolves tasks on-device to significantly reduce power consumption and backend cloud load. Specifically, the AI-ADAM-100 enables home appliance manufacturers to implement sophisticated wake-up and control functionality, allowing other system controllers and connectivity modules to drop into sleep mode and consume substantially less power when a user is not interacting with the system. This capability can be used to listen until a user's voice command is received, and then either process the command on-device or wake the system to send the command to the cloud.
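The wake-up flow described above can be sketched as a simple control loop: the AI MCU listens at low power while the host controller and radio sleep, handles commands it knows locally, and wakes the rest of the system only when cloud processing is needed. The function names and on-device command set below are hypothetical illustrations, not Femtosense or ABOV APIs.

```python
# Hypothetical sketch of the listen -> handle-locally-or-wake-host flow.
# Names and the command set are assumptions for illustration only.

ON_DEVICE_COMMANDS = {"lights_off", "lights_on", "fan_low"}  # assumed set

def listen_for_command():
    """Placeholder for the low-power, always-on voice front end."""
    return "lights_off"  # stand-in for a recognized spoken intent

def handle_locally(command):
    print(f"handled on-device: {command}")

def wake_host_and_forward(command):
    print(f"waking host controller, forwarding to cloud: {command}")

def main_loop():
    while True:
        command = listen_for_command()   # host and radio stay asleep here
        if command in ON_DEVICE_COMMANDS:
            handle_locally(command)      # no cloud round trip needed
        else:
            wake_host_and_forward(command)
        break  # single pass for the sketch; real firmware loops forever

if __name__ == "__main__":
    main_loop()
```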

A Product of Partnership

Femtosense and ABOV developed the AI-ADAM-100 MCU in strategic collaboration, leveraging the core strengths of each partner. “The AI-ADAM-100 is the best-optimized AI MCU solution for voice and audio-based AI applications and enables a variety of on-device AI applications for consumer electronics and standalone devices,” said Choi Won, CEO of ABOV Semiconductor. “Together with Femtosense, we will continue to develop the most cost- and power-efficient AI MCUs for global customers.”

ABOV has verified the AI-ADAM-100's voice command recognition performance under multiple noise conditions, meeting the requirements of leading customers. Global home appliance makers are working to reduce the number of buttons on their devices and streamline the user experience. AI-based voice commands can accelerate this trend.

Source: Businesswire
