Intel’s Speech Enabling Developer Kit is out for future smart home solutions


Machine learning, natural language processing and AI are at the core of smart home solutions, enabling them to grow by continuously learning the behavior of the homeowner and household members. They automate many tasks, such as setting the room temperature based on the time of day and occupants' preferences. That makes life easier: in a recent survey conducted by Intel, 68 percent of Americans agreed that smart home solutions make life easier for them.

Intel, in collaboration with the Amazon Alexa Voice Service* (AVS), is making it easier for third-party developers to accelerate the design of consumer products featuring AVS. Intel announced the Intel® Speech Enabling Developer Kit, which provides a complete audio front-end solution for far-field voice control. The announcement was made in an editorial written by Miles Kingston, general manager of the Smart Home Group at Intel Corporation.

The Intel Speech Enabling Developer Kit is designed to perform well even in acoustically challenging conditions. It marks the latest in a string of smart home innovations, including the Intel-powered Amazon Echo Show. Recognizing and responding accurately to every spoken command takes a great deal of engineering, the kind of work Intel is known for.

The Intel® Speech Enabling Developer Kit is available for pre-order now. Among the developer kit’s technology components:

  • High-performance algorithms for acoustic echo cancellation, noise reduction, beamforming and custom wake word engine tuned to “Alexa”
  • Intel’s dual DSP with inference engine
  • Intel 8-mic circular array
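Intel has not published the internals of its DSP algorithms, but the beamforming listed above can be illustrated in principle. The sketch below is a minimal delay-and-sum beamformer for a circular microphone array, the simplest member of the family of techniques the kit's front end builds on; all function and parameter names here are illustrative, not Intel APIs.

```python
import numpy as np

def delay_and_sum(signals, mic_angles, radius, steer_angle, fs, c=343.0):
    """Steer a circular mic array toward `steer_angle` (radians) by
    delaying each channel so the look direction adds up coherently.

    signals:    (n_mics, n_samples) time-domain recordings
    mic_angles: angular positions of the mics on the circle (radians)
    radius:     array radius in metres
    fs:         sample rate in Hz
    c:          speed of sound in m/s
    """
    n_mics, n_samples = signals.shape
    # A far-field source at steer_angle reaches each mic slightly early,
    # by the mic's projection onto the look direction divided by c.
    advances = radius * np.cos(np.asarray(mic_angles) - steer_angle) / c
    # Compensate each advance with a fractional-sample delay, applied
    # in the frequency domain, then average the aligned channels.
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    out = np.zeros(n_samples)
    for channel, tau in zip(signals, advances):
        spectrum = np.fft.rfft(channel) * np.exp(-2j * np.pi * freqs * tau)
        out += np.fft.irfft(spectrum, n=n_samples)
    return out / n_mics
```

Steering toward the talker boosts sound from that direction while partially cancelling noise arriving from elsewhere; a production front end would combine this with adaptive weights, echo cancellation and noise reduction.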

When designing a smart home solution, it is important to remember that the device must recognize and respond to commands arriving from any direction over 360 degrees, not only from a direct line of sight. This poses significant technical challenges and requires an array of microphones. The device needs to identify the speaker’s location in order to respond to the user who gave the command, and it must do so while multitasking: a command must be executed even while the user is listening to music or doing something else.
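Locating the talker over 360 degrees typically starts from the tiny arrival-time differences between microphone pairs. One classic, widely used estimator for that delay is GCC-PHAT (generalized cross-correlation with phase transform); the sketch below shows it for a single mic pair and is an illustrative assumption, not Intel's actual implementation.

```python
import numpy as np

def gcc_phat(a, b, fs):
    """Estimate the delay of signal `b` relative to `a` in seconds
    (positive if the sound reaches mic `b` later) via GCC-PHAT."""
    n = len(a) + len(b)
    A = np.fft.rfft(a, n=n)
    B = np.fft.rfft(b, n=n)
    cross = np.conj(A) * B
    cross /= np.abs(cross) + 1e-12   # PHAT weighting: keep phase only
    corr = np.fft.irfft(cross, n=n)
    # Rearrange so lag 0 sits in the middle, then pick the peak lag.
    max_lag = n // 2
    corr = np.concatenate((corr[-max_lag:], corr[:max_lag + 1]))
    return (np.argmax(corr) - max_lag) / fs
```

With an 8-mic circular array, delays from several mic pairs can be combined geometrically to triangulate the talker's bearing, which is what lets the device respond to a command from any direction.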