by Tara Kelly
As the influx of data from multiple sources continues to open up new possibilities for businesses and consumers, much of the focus has been on channeling those data streams to improve company or product performance and increase customer satisfaction. Industry professionals are devising new platforms and interactive technologies to make data actionable at the customer and operational levels. But in terms of end-user utility, a key question remains: How close are we to seamless human-to-machine communication?
That question is important because, outside of data science and development professionals, few people are eager to read huge datasets and interpret their meanings. Instead, they want to interact on a human level, and increasingly, they prefer to use their voice, not their keypad. A user interface that enables seamless human-machine communication will unleash even more innovation, so it’s important to understand how close we are to being able to truly converse with machines.
Voice assistants like Amazon’s Alexa are already making inroads into this space, but the technology still faces challenges with language processing. Working through Amazon Echo, a voice-enabled wireless speaker with far-field voice control capabilities, the Alexa voice assistant can interact with users to play music, create to-do lists, set alarms, read audiobooks and provide real-time information on weather and traffic. It can also interact with smart devices – like a home automation hub.
Just recently, Amazon announced that Alexa had reduced errors in task completion by a factor of two, which means the technology is getting better at communicating. Google, similarly, is expected to launch a competitor to Alexa later this year. Code-named "Chirp" while in development, the device will be sold as "Google Home," a virtual assistant that can also play music, retrieve information and control smart home devices (Google purchased Nest Labs, maker of smart home products, in 2014 for $3.2 billion).
With improvements in natural language processing and more devices coming to market to work across mobile and home device ecosystems, virtual assistants are a promising sign of a deeper human-machine connection. Even more advanced robots are in the works, including Jibo, which its makers call the world’s first “social robot.” Jibo is designed to take communication to the next level, integrating emotive cues along with natural language processing and artificial intelligence learning capabilities.
These developments suggest that technology is poised to take a significant leap forward. As natural language processing abilities improve and people become more comfortable interacting with machines, they’ll be able to use their devices and bots as intermediaries to access data from IoT devices — not just the raw information, but interpretations that are meaningful to the user.
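The idea of a device interpreting IoT data, rather than merely relaying it, can be sketched in a few lines. The snippet below is a hypothetical illustration, not any vendor's actual API: the function name, thresholds and phrasing are all assumptions. It takes raw thermostat readings and returns the kind of plain-English answer a user would want from a voice assistant, instead of a dump of numbers.

```python
# Hypothetical sketch: an assistant acting as an intermediary that turns
# raw IoT readings into a meaningful sentence. Names and thresholds are
# illustrative assumptions, not a real vendor API.

def summarize_thermostat(readings):
    """Interpret a list of raw temperature readings (degrees F) as a
    short plain-English status message."""
    if not readings:
        return "No thermostat data is available right now."
    latest = readings[-1]
    trend = latest - readings[0]          # change over the sample window
    if trend > 2:
        direction = "warming up"
    elif trend < -2:
        direction = "cooling down"
    else:
        direction = "holding steady"
    return f"It's {latest} degrees inside and the house is {direction}."

print(summarize_thermostat([68, 69, 71, 72]))
# -> It's 72 degrees inside and the house is warming up.
```

The point of the sketch is the last line: the user never sees the list of readings, only the interpretation.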
The intensive focus on perfecting natural language processing suggests that voice capabilities will lead the way toward innovation. Given human nature, this seems fitting. The human voice is powerful: It can express love, communicate knowledge and create movements. And as communication between humans and machines becomes truly seamless, voice has the power to spark innovation – with IoT data streams and voice-enabled technology coming together to make life better for businesses, their customers and people everywhere.
Tara Kelly is the founder, president and CEO of Splice Software.