Applied Brain Research (ABR), a leader in the development of AI solutions, is demonstrating the world’s first self-contained single-chip speech recognition solution at the AI Hardware and Edge AI Summit this week. The demonstration unveils the technology integrated into ABR’s first time series processor chip, the TSP1, which performs real-time, low-latency automatic speech recognition.
The solution employs ABR’s innovations at several levels of the technology stack. It starts with the world’s first state-space network, the patented Legendre Memory Unit (LMU), a breakthrough in efficient computation for time series processing. Next, the networks are trained and compiled using ABR’s full-stack toolchain. Finally, the network runs on ABR’s proprietary computational neural fabric, which greatly reduces power consumption by cutting data movement within the chip.
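For readers unfamiliar with state-space networks, the LMU’s core idea was described openly by ABR researchers (Voelker, Kajić and Eliasmith, NeurIPS 2019): a small linear state-space system compresses a sliding window of an input signal into a handful of state values, which downstream network layers then read. The sketch below is a minimal Python/NumPy illustration of that published memory update, using a simple Euler discretization; it is not the TSP1’s on-chip implementation, which is proprietary.

```python
import numpy as np

def lmu_matrices(d):
    """Build the LMU's continuous-time state-space matrices (A, B).

    Follows the published Legendre Memory Unit formulation
    (Voelker, Kajic & Eliasmith, NeurIPS 2019); the hardware
    implementation in the TSP1 may differ.
    """
    q = np.arange(d)
    r = (2 * q + 1)[:, None]                      # column of (2i + 1) scale factors
    i, j = np.meshgrid(q, q, indexing="ij")
    A = np.where(i < j, -1.0, (-1.0) ** (i - j + 1)) * r
    B = ((-1.0) ** q)[:, None] * r
    return A, B

def lmu_memory_step(m, u, A, B, theta, dt=1.0):
    """One Euler-discretized update of the LMU memory state.

    m     : (d, 1) memory state approximating a sliding window of u
    u     : scalar input sample at this time step
    theta : window length, in the same units as dt
    """
    return m + (dt / theta) * (A @ m + B * u)

# Toy usage: compress a 100-step sliding window of a signal into d = 8 state values.
d, theta = 8, 100.0
A, B = lmu_matrices(d)
m = np.zeros((d, 1))
for t in range(500):
    u = np.sin(0.05 * t)                          # example input sample
    m = lmu_memory_step(m, u, A, B, theta)
```

Because the memory update is a fixed linear recurrence rather than a learned dense recurrence, it can be computed with far fewer multiply-accumulates per time step, which is the efficiency property the press release attributes to the LMU.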
“What ABR is showcasing today has been five years in the making, starting with our earliest observations of how the brain processes memories, which led to the state space network model that we derived from that study and subsequently patented,” said Dr. Chris Eliasmith, ABR’s co-founder and CTO. “From that starting point, we have innovated at every level of the technology stack to do what has never before been possible for speech processing in low-powered edge devices.”
“ABR’s TSP1 is going to revolutionize how time series AI is integrated into devices at the edge,” said Kevin Conley, ABR’s CEO. “We are showcasing the fastest, most accurate self-contained speech recognition solution ever produced, with both English and Mandarin versions. The TSP1 will deliver these capabilities at 100X lower power than currently available edge GPU solutions. And speech recognition, which we are actively engaged with customers to develop, is only the first step in commercializing the potential of this technology.”
ABR’s TSP1 is a single-chip solution for time series inference applications such as real-time speech recognition (including keyword spotting), realistic text-to-speech synthesis, natural language control interfaces, and other advanced sensor fusion applications. The TSP1 integrates a neural processing fabric, CPU, sensor interfaces, and on-chip NVM for a self-contained, easy-to-integrate solution. It is supported by a no-code network development toolchain, making it the easiest time series solution on the market to develop and deploy.
Source: PRNewswire