Track A, targeted primarily at managers, is devoted to conversational dialogues and personal assistants: how to design, build, and test them, and how to avoid the pitfalls and roadblocks of this new technology. It explains what you need to know to make informed decisions about building conversational agents.
Monday, April 27: 10:30 a.m. - 11:15 a.m.
Digital assistants that communicate with customers using human language—text or speech—are increasingly necessary. They are evolving beyond classical customer service to include marketing and sales. Companies that recognize this opportunity will benefit from both publicity and new customers, leaving those that move more slowly scrambling to catch up. This talk by the author of the new book, Computer Intelligence, describes what to expect in advanced deployments and approaches to creating your own.
William Meisel, President, TMA Associates
Monday, April 27: 11:30 a.m. - 12:15 p.m.
The goal: to incrementally and economically add AI and machine learning capabilities to legacy IVR applications. This session provides the road maps, steps, options, and examples of successful AI migrations and deployments. We also examine how to use these approaches to add conversational and omnichannel components to the migration architecture. Finally, we address AI integration targets and options in relation to an organization’s culture, vertical market, size, appetite for early technologies, and degree of investment in legacy applications.
Greg Stack, Vice President, Speech-Soft Solutions, LLC
Monday, April 27: 1:15 p.m. - 2:00 p.m.
Join us as we seek to demystify the complex topic of voice biometrics, one of the strongest ways available for you to secure your voice applications and telephone agents against fraudsters and identity phishing. We simplify the common terms, discuss ideal use cases for the different approaches to the technology, and share insights around best practices. With no extra cost for hardware, voice biometrics can be the smartest way to introduce a highly secure and convenient authentication mechanism.
Jeffrey (Jeff) D. Hopper, Vice President, Client Services, LumenVox, LLC
Monday, April 27: 2:15 p.m. - 3:00 p.m.
This talk discusses the latest methods for increasing security and why increased security and frictionless user experiences don’t have to be an “either/or” situation. It discusses how conversational AI leverages voice biometrics and NLU to authenticate individuals within seconds of users speaking across voice channels (smart speakers, IVRs, etc.). Attendees learn new ways to flag fraudulent activity in real time based on word choice, natural utterances, and patterns of speech or writing during an interaction with a human or a virtual assistant. This talk also describes continual, multi-layered analysis that combines traditional voice biometrics with other methods, such as conversational biometrics and behavioral biometrics.
Roanne Levitt, Senior Manager, Commercial Security Strategy, Nuance Communications
Monday, April 27: 3:15 p.m. - 4:00 p.m.
Encouraging customers to enroll in voice biometric applications is critical to the success of these applications. Persuading customers to enroll (for active/text-dependent biometrics) or to talk long enough to create an enrollment (for passive/text-independent biometrics) is a frequent challenge. We discuss real-world best practices to increase enrollment, including enticing and encouraging customers, reducing agent friction, and leveraging existing data and prior customer interactions in the enrollment process.
Roy Bentley, Solution Delivery Manager, LumenVox
Monday, April 27: 4:15 p.m. - 5:00 p.m.
The quality of synthesized speech and the ease with which a person’s voice can be synthesized from a shrinking amount of speech data pose a threat to all systems that use a person’s voice for security. Global competitions, such as those held at the Interspeech conference, encourage companies to develop new strategies against a baseline set of threats. ID R&D’s Amein discusses insights and strategies submitted for publication at the industry-leading Interspeech Conference of March 2019, as well as ID R&D’s perspective on the areas of synthetic speech that present the greatest threats.
John Amein, Vice President, ID R&D
Tuesday, April 28: 10:45 a.m. - 11:30 a.m.
This is a three-part introduction to the Open Voice Network, the nonprofit industry initiative dedicated to a future of voice assistance that is open, standards-based, interoperable, accessible, and data-protected. We describe the why and the who, with core value propositions; explain the how and how fast; and explore an approach to the what (the proposed standards and architectures) in this fast-paced glimpse into the future.
Jon Stine, Executive Director, Open Voice Network
Tuesday, April 28: 11:45 a.m. - 12:30 p.m.
Due to the lack of standardization in the domain of intelligent agents, there is no interoperability among them. This talk introduces some first thoughts toward standardization of voice-based intelligent personal assistants (IPAs) currently under discussion at the W3C Voice Interaction Community Group, including identification of areas for standardization, interoperability of IPAs, architectural drafts, and messaging formats such as a JSON representation of semantic interpretation. We conclude with an outlook on the next steps of our standardization efforts.
Dirk Schnelle-Walka, Research Scientist, Multimodal System Architecture, modality.ai
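No standard messaging format has been settled yet; as a purely hypothetical illustration (the field names and structure below are assumptions for this sketch, not the W3C Voice Interaction Community Group’s actual draft), a JSON semantic interpretation exchanged between interoperating IPAs might be built and serialized like this:

```python
import json

# Hypothetical semantic interpretation of the utterance
# "Book a table for two at seven". All field names here are
# illustrative assumptions, not the W3C CG's proposed format.
interpretation = {
    "utterance": "Book a table for two at seven",
    "intent": "book_restaurant",
    "confidence": 0.92,
    "slots": {
        "party_size": 2,
        "time": "19:00",
    },
}

# Serialize to JSON for exchange between assistants; any IPA that
# agrees on the schema can parse the message back into structured form.
message = json.dumps(interpretation)
print(message)
```

A shared, machine-readable representation like this is what would let one assistant hand off an interpreted request to another without re-running speech recognition or NLU.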
Tuesday, April 28: 1:45 p.m. - 2:30 p.m.
The dramatic improvements in speech technology that we’ve seen in the last few years have been based to a great extent on massive amounts of recorded speech data. This panel discusses what the limits on the use of this speech data should be. Should callers be asked whether each conversation can be recorded, or is it enough to ask once when they start using an app or device? Should callers be told how the recordings will be used? May the recordings be used to improve the caller’s speech recognizer or dialogue, analyzed to extract the caller’s interest in products and services, or used to synthesize the caller’s voice? May the recordings be sold to others? Who owns the callers’ recordings? What legal protections are currently available, and are additional protections needed?
Anthony Scodary, Co-Founder, Co-Head of Engineering, Gridspace
Steven M. Hoffberg, Of Counsel, Tully Rinckey, PLLC
Tuesday, April 28: 2:45 p.m. - 3:30 p.m.
Learn all you need to know about speech analytics from the bottom up. In this educational session, we help you better understand what you need to know as you research your options, including critical questions to ask to avoid speech analytics failure, and how to establish a high-level business case and a proof of value (POV) before you buy. This session provides valuable insights to make you an informed buyer.
Roger Lee, Vice President, Customer Success, Gridspace
Tuesday, April 28: 4:15 p.m. - 5:00 p.m.
Voice analytics AI integrated into speech recognition engines is bringing a personalized customer experience to the voice age, allowing voice kiosks to instantly get a read on new customers based on voice characteristics, understand their emotional state, and learn their ordering habits over time. This talk includes a service application demo that combines wake words, NLU, voice analytics, biometric recognition, and a variety of other technologies to enable a truly personalized ordering experience.
Bernard Brafman, VP of Business Development, Sensory, Inc.
Wednesday, April 29: 10:45 a.m. - 11:30 a.m.
Consumers have become accustomed to using speech to interact with devices in their everyday lives, but conversational AI is less common in the contact center. This session shows you how to leverage conversational AI technology in your contact center to give your customers the seamless, low-effort experience they deserve. We explore each of the critical steps, from reviewing your customer’s journey and self-service usage to picking the right conversational IVR platform.
Allyson Boudousquie, VP Market & Product Strategy, Concentrix
Wednesday, April 29: 11:45 a.m. - 12:30 p.m.
The age of AI-powered robots and devices that can hold full-fledged intelligent conversations with people is seemingly now upon us. Or is it? Not quite. While the last few years have seen an explosion in chatbot and virtual assistant (VA) development platforms, these chatbots can handle only simple tasks, such as answering FAQs or routing to an agent. This talk describes innovations that will make advanced, enterprise-grade conversational experiences possible.
Eduardo Olvera, Sr. Manager & Global Emerging Technology Lead, UI Design, Professional Services, Nuance and AVIxD