With the rise of mobile, messaging became the universal user experience. And while visual communication methods such as selfie filters and GIFs have proliferated, we seem to be typing more with our thumbs every year.
Recent research from Google found that nearly half of U.S. consumers use voice search daily, a significant jump from 20% in 2016 and a main reason some are calling 2017 the year of voice search. The WSJ recently declared “the end of typing” for the internet’s next billion users, who will rely on voice computing.
From voice assistants to voice messaging, the generation that forgot phone calls and voicemails is rediscovering their voice.
Voice and Messaging are Deeply Intertwined
But before you rethink your entire customer conversation strategy to accommodate voice, take a step back and recognize the big picture. Voice isn’t replacing the current messaging paradigm; it’s one more means of communication that integrates with, and is given structure by, the UX metaphor of messaging.
For brands to make better use of voice, it’s important to look deeper than the surface-level interface. When you’re speaking to Alexa or Siri, you’re actually populating a text-based conversation the same way you would in the messaging apps consumers have come to love.
This may not be obvious when you think about using the Amazon Echo in your kitchen, but it’s a critical concept for designing a customer conversation strategy that gives consumers the flexibility they demand. A great example is the artificially intelligent Easy Button that Staples is developing.
Staples, a Layer customer that understands the importance of messaging for customer conversations, is building a digital experience that rolls voice interaction, rich product cards, text-based messages, and more into a single conversation. In a recent Adweek article profiling voice AI trends, Staples’ head of applied innovation, Ian Goodwin, had this to say about where messaging and voice come together:
“Once we got a really good idea of what our customers were asking on chat channels, then we built the voice experience with the Easy Button. It was really a gradual ramp-up rather than just going out and doing it.”
The messaging metaphor provides a means of organizing a traceable history of customer conversations, which helps service agents foster more valuable relationships and drive greater lifetime value. Adding a voice interface simply provides another means of communication, one that feels more natural for some users, some of the time.
Most voice interfaces you’ve used are integrated with a larger messaging application in some way. However, brands face challenges when trying to make a voice interface native to their messaging strategy. That’s why we built a three-pronged demo that showcases how Layer supports voice interfaces.
Meet Larry, our Layer for Voice Demo
There are many use cases for voice in customer conversations. But we built the Larry demo app for iOS using Layer UI to showcase three common aspects of voice—AI interaction, call center usage, and user-to-user conversation.
To highlight these aspects of voice, Larry includes three separate third-party integrations:
Apple Speech Framework
We used Apple’s Speech framework to enable tap-to-speak in a conversation thread. The API converts voice input into text that can be sent as a message. You can even review the text before it sends to correct any potential misinterpretations (no AI assistant is perfect just yet). Apple’s framework offers the same experience across iOS apps, so the interface remains familiar to users.
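As a rough sketch of how tap-to-speak can be wired up with Apple’s Speech framework (the class name, the `startTapToSpeak` method, and the transcript handling are illustrative assumptions, not the demo’s actual code):

```swift
import Speech
import AVFoundation

final class TapToSpeakController {
    private let audioEngine = AVAudioEngine()
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private var request: SFSpeechAudioBufferRecognitionRequest?
    private var task: SFSpeechRecognitionTask?

    // Called when the user taps the microphone button in the conversation view.
    func startTapToSpeak(onTranscript: @escaping (String) -> Void) throws {
        SFSpeechRecognizer.requestAuthorization { status in
            guard status == .authorized else { return }
        }

        let request = SFSpeechAudioBufferRecognitionRequest()
        request.shouldReportPartialResults = true   // stream text as the user speaks
        self.request = request

        let inputNode = audioEngine.inputNode
        inputNode.installTap(onBus: 0, bufferSize: 1024,
                             format: inputNode.outputFormat(forBus: 0)) { buffer, _ in
            request.append(buffer)
        }

        task = recognizer?.recognitionTask(with: request) { result, _ in
            if let result = result {
                // Populate the message composer so the user can review and
                // correct the transcription before sending it as a message.
                onTranscript(result.bestTranscription.formattedString)
            }
        }

        audioEngine.prepare()
        try audioEngine.start()
    }

    // Called on the second tap: stop listening; the reviewed text
    // is then sent as an ordinary message.
    func stopTapToSpeak() {
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
        request?.endAudio()
        task?.cancel()
    }
}
```

Because the recognized text lands in the composer rather than being sent immediately, the review-before-send step falls out of the design for free.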
Voxeet VoIP and Video
The flexibility of Layer’s UI combines with Voxeet technology to place rich cards in the conversation that enable multi-user video conferencing. The card remains stateful during and after the call, showing the call’s duration and participants. The VoxeetConferenceKit SDK also lets users access the conversation while an in-app video conferencing window is open.
Alexa Skill and Bot
The Larry demo was built with its own Alexa skill to integrate Amazon’s platform into the conversation. Users can say “Alexa, ask/tell Larry…” and capture a speech command as text on the Layer server. These commands can then be sent as messages in the conversation, sent to a bot service like Api.ai to execute the request, sent to Alexa for a voice response, or sent to other Layer and third-party services. The interaction also works in the opposite direction: users can send a text message to the Larry skill via Alexa to issue commands that way. Capturing the dialog in the conversation thread and allowing for multimodal interaction unlocks a range of use cases depending on context and user preference. We used the Alexa platform here, but a similar interaction could be built with any of the leading voice assistant platforms or devices.
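The routing described above can be sketched as a simple dispatcher. Everything here is a hypothetical illustration (the `CommandDestination` cases and the keyword rules are invented for this sketch, not the actual Larry skill code); the real routing would depend on the bot service’s intent classification.

```swift
// Hypothetical destinations for a captured voice command. The demo
// forwards commands to the Layer conversation, to a bot service such
// as Api.ai, or back to Alexa for a spoken response.
enum CommandDestination: Equatable {
    case conversationMessage   // post the text into the conversation thread
    case botService            // hand off to a bot platform to execute the request
    case voiceResponse         // have the assistant speak a reply
}

// A toy routing rule: question-style commands go to the bot, requests
// for a spoken answer go back to the voice assistant, and everything
// else is posted into the conversation as a plain message.
func route(_ command: String) -> CommandDestination {
    let text = command.lowercased()
    if text.hasPrefix("ask ") || text.hasSuffix("?") {
        return .botService
    }
    if text.contains("read back") || text.contains("say ") {
        return .voiceResponse
    }
    return .conversationMessage
}
```

Because every command is also captured as text in the conversation, the same dispatcher can serve input arriving from Alexa, from tap-to-speak, or from a typed message.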
The Voice-First Movement Is Growing, But It’s All Part of the Customer Conversation
Don’t get stuck debating text-based customer conversations, voice interfaces, and chatbots. The future of conversations isn’t about choosing the latest interface, but rather integrating them all in a platform that seamlessly connects you to your customers.
If you want to learn how to get to market quickly with a customer conversation strategy that connects voice and messaging, contact us today for a free demo of the Layer platform. We’re happy to demo Larry for you and provide sample code so you can get started building your multimodal conversation experience today.