Since publishing this episode, we've rebranded to TELUS Digital.
On this episode, we explore whether voice-first experiences will be as ubiquitous as the internet and smartphones — and how brands can prepare for a voice-first future.
Voice technology offers distinct advantages over other forms of input, allowing for lower-effort customer interactions, increased efficiency and improved accessibility.
With recent advancements in generative AI, voice-first experiences — the combined process of voice recognition and natural language processing — have the potential to completely transform how we interact with our devices in our everyday lives, leveraging multimodal environments to create the ultimate interface.
Listen for the compelling insights of Tobias Dengel, president of TELUS Digital Solutions & WillowTree, a TELUS Digital Company; and Bret Kinsella, founder, CEO, and research director of Voicebot.ai.
Show notes
Tobias's book, The Sound of the Future: The Coming Age of Voice Technology, is available for purchase in hardcopy, digital and audiobook formats.
Guests

Tobias Dengel
President of TELUS Digital Solutions & WillowTree, a TELUS Digital Company

Bret Kinsella
Founder, CEO, and research director of Voicebot.ai
Transcript
Robert Zirk: What's the next major breakthrough in technology?
That's the big question that clients were asking Tobias Dengel, president of WillowTree, a TELUS International Company.
Tobias Dengel: We were in the internet wave from about 1997-98 till 2008 or nine… We've been in this mobile wave since 2008-2009 when the iPhone got introduced…
Voice of smartphone user: Wow, there's an app for everything!
Robert Zirk: And in trying to answer that big question of “what's next?”, Tobias's research led to him authoring a book called The Sound of the Future, where he concludes that…
Tobias Dengel: Voice is the next big wave – as big as the internet or as big as mobile.
Robert Zirk: So today on Questions for now, we'll ask: Has the time come to prioritize voice-first experiences?
Welcome to Questions for now, a podcast from TELUS International where we ask today's big questions in digital customer experience. I'm Robert Zirk.
Here's what we mean by voice technology. First, there's voice recognition: the ability for computers to interpret our speech using artificial intelligence and machine learning. When that's paired with natural language processing, the computer or device can recognize the meaning of what's being said and respond with an output to the request.
Voice-first experiences have several advantages over other forms of input. Tobias noted speed as a major benefit of voice technology and cited how voice can reduce the time a customer needs to spend making a selection from a wide range of options.
Tobias Dengel: Domino's, for example, has said there's 4 billion combinations of pizza that you can create within Domino's. Obviously, that is super complicated and takes a while, right? It takes two or three minutes to put together a complicated pizza order on the Domino's app, whereas it would take 10 seconds to just say that order. Now, you don't want to listen to the response. You just want it to develop on its own. And so really, anytime you're picking from a broad selection of things, it's always going to be faster to say it rather than type it.
Robert Zirk: For many people, using voice can be a more intuitive way of communicating. A Stanford experiment found that, through voice recognition, humans could input text three times faster than they could by typing on their phone's keyboard, and error rates were lower as well.
Bret Kinsella is the CEO, founder and research director of Voicebot.ai, which provides news and analysis on conversational and generative AI. He highlighted the significance of voice technology opening the door to a more human way of communicating with our devices.
Bret Kinsella: Fundamentally, this is the first time we're able to speak to computers in our language. Up to that point, all of the interaction with technology, whether it be the printing press, the car, the computer, we were always using some tool that the machine could understand in order to make the machine do something for us. With natural language processing, voice, voice-first, these capabilities mean we can communicate with them like we would another person. And there are a lot of things, particularly where we might not know exactly how to ask for them.
Robert Zirk: Or if we do, it might take several menus, filters and screens, just to find what it is we're looking for.
Bret Kinsella: US Bank, Richard Weeks, who runs the application there. He's like, "Bret, I have 300 features in this application on a mobile user interface. I cannot expose 300 features in a way that people can find them. They have to use search and wouldn't it be better if they could just ask for it? And it would just automatically do it?"
If you're doing an international money transfer, you probably need to know the SWIFT code for your bank, but there's very few people who know that that's the terminology. But you can go into Bank of America and use the Erica voice assistant and you can say, "I need to get the number that I use to do an international wire transfer." And it'll just come back on the screen.
And a lot of people don't recognize, like, how big of an impact that is, that we can now just communicate with each other in a way that's comfortable to us, and maybe provide more context, than we would in the past, like we would've with a human. And do that in a way that the machine then can respond to us more effectively.
Robert Zirk: Voice technology can also break down language barriers between people.
Tobias Dengel: Real time or slightly delayed translation is another aspect. Obviously, all of us can appreciate that from a tourism perspective. But there are many, many languages spoken in India and Africa that have very small bases of users that have been excluded largely from the digital world that are going to be starting to participate through these technologies. It's gonna allow things like customer service to happen real time between people that don't speak the same language.
Robert Zirk: Another area where voice can be beneficial is safety, particularly in "hands-on, heads-up" applications where your hands are busy performing a task that makes it impossible or unsafe to type a command.
As an example, think of a chef who's in the middle of food prep and needs to know a cooking temperature without interrupting their workflow and changing out their gloves.
But beyond food safety, the ramifications of voice can go so far as to save lives, and Tobias shared a great example from his book, The Sound of the Future, that illustrates how vital this technology can be.
Tobias Dengel: One of my favorite stories in the book is someone was driving, I think, in Iowa in 2018, and they ran off the road into a lake, and the car flipped, and their phone was just somewhere in the car, they couldn't reach it, and they just said, "Hey Siri, call 9 1 1." It's the first known use of Siri in that paradigm of, ultimately, voice technology in terms of saving lives.
Robert Zirk: And voice can change lives as well, creating inclusive experiences that allow people to communicate and connect in ways they weren't able to before.
Tobias Dengel: At WillowTree, we launched the Vocable app, which was designed to help people who've lost the ability to speak to communicate.
It uses Gen AI tools to analyze what the people around these patients are saying, and then it presents likely responses on a screen. By looking at the right letters and the right areas of the screen, it actually creates the responses on their behalf. And so, it's just an example of how voice is going to allow members of our society who have been constrained to fully participate in the digital world.
Robert Zirk: Voice technology also has the potential to transform the way we work, creating greater efficiencies by using the context of information.
Tobias Dengel: We're working for a large beverage manufacturer and they have hundreds of thousands of vending machines and fountain systems in restaurants that get serviced.
And when you observe how those technicians spend their time, a big chunk of it is ordering parts. They do that via either apps or print catalogs, but it takes them 15 to 20 minutes from the time they know they need a part to the time they actually complete the order.
And that's just a perfect voice application, right? Because we know, using location data, what machine you're standing in front of. So we know the model, we know what the parts are, we know what the most likely parts are to break. And then we can analyze a voice request with an incredibly high degree of fidelity and figure out with, you know, well over 99 percent accuracy what part it is that the technician is ordering from the field.
And now you've taken a process that takes 15 to 20 minutes and you've turned it into a 10 to 20 second process. The efficiency when you have tens or hundreds of thousands of employees that are doing certain tasks of combining all this data is, it's just truly breakthrough.
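To make that flow a little more concrete, here is a minimal Python sketch of the general idea: location context narrows the candidate parts list before the transcribed voice request is matched. The site IDs, machine models and part names are hypothetical placeholders; this illustrates the pattern, not the production system Tobias describes.

```python
# Minimal sketch (hypothetical data and names): constrain the parts order by
# context, then match the technician's transcribed request against a short list.
from difflib import SequenceMatcher

# Assumed lookup tables -- in a real system these would come from location data
# and an equipment/parts database.
MACHINE_AT_LOCATION = {"site-042": "FountainSystem 9000"}
PARTS_BY_MODEL = {
    "FountainSystem 9000": ["carbonator pump", "syrup line valve", "ice chute motor"],
}

def match_part(site_id: str, transcribed_request: str) -> str:
    """Pick the most likely part given where the technician is standing."""
    model = MACHINE_AT_LOCATION[site_id]   # location -> machine model
    candidates = PARTS_BY_MODEL[model]     # model -> plausible parts
    # Score the spoken request against only that short candidate list.
    scored = [
        (SequenceMatcher(None, transcribed_request.lower(), part).ratio(), part)
        for part in candidates
    ]
    return max(scored)[1]

print(match_part("site-042", "I need a new carbonated pump"))  # -> carbonator pump
```

Because the candidate set is already constrained by context, even a slightly garbled transcription ("carbonated pump") resolves to the right part.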
Robert Zirk: Bret predicted a growing number of voice assistants, or copilots, will revolutionize the way employees and teams manage and complete tasks.
Bret Kinsella: We're gonna have them individually. We're gonna have, I think we're gonna have several, then companies are gonna have many. There's all sorts of different types of internal assistants who are gonna help anybody out there who's doing a job execute a portion of their job more effectively.
So the voice technology is critical because it can understand what we're saying. Like we're saying, "Oh, here's an action item. Sue, you're gonna do this, Bill, you're gonna do that. Right? Great." And then there's the assistant or the co-pilot in the background which might summarize that so everybody knows that, might take that and put it into a to-do list or your project management system and then notify everybody about it. It's all these things together, which I think are really extraordinary.
Robert Zirk: These voice assistants can help employees get more done while making the most of the technology at their disposal.
Bret Kinsella: We use a lot of different applications throughout the day. It's just a fact of the world. Every one of those requires us to learn that and we generally don't know all the features. We generally know a small subset of the features of them. Sometimes we need to work across two applications at the same time and they're not integrated and those types of things.
Wouldn't it be nice if we had an assistant that actually knew the information and the functionality of both of those systems and we can just have it do something for us? And it just comes back to that idea, like, if you could have your right hand person next to you. It's just, like, ready to do whatever needs to be done and they can parallel process with you, and they know things you don't know and can do things you can't do, it can just listen along. You can just ask it and it'll just put it in front of you. And meetings kind of seem mundane, and I think a lot of people understand, like we can have meeting transcripts and the like, but now this idea that if you join a meeting five minutes late, it's got a transcript and it's doing a summary in the sidebar for you right away. So you can catch up really quickly. You don't interrupt the flow, like, you can catch yourself up. They don't have to stop and start over so that you have context.
Robert Zirk: While there are many benefits that extend across the organization, there are also different benefits applicable specifically to different roles.
For example, with the help of automation, voice technology can help your sales team reduce the friction of inputting client information into a CRM.
Bret Kinsella: If you're having these calls and it's automatically transcribing what's going on and doing action items and identifying key people and activities that are going to take place, it can actually auto-populate that into your CRM.
You don't have to worry about, you know, the sales lead had to run to the airport so they didn't have time to put it in, and by the time they got to their desk the next day, they didn't remember part of it or something like that. It's just already there. All they have to do then is go through and verify it and just say, "Yep, that's correct," and it's done.
And it'll give them the list of action items to follow up with, and it should also, at the same time, be like, "Oh, and here's a recent article link or a blog post that you should send as a follow-up to your client because it's relevant to something they asked about."
Robert Zirk: In a customer experience setting, Bret cited agent-assist technology, which can be used to gather relevant information for an agent before they get connected with a customer.
Bret Kinsella: Now, this co-pilot assistant would actually listen to the call in real time and just be flashing things on the screen in real time based on what they're saying. Product catalogs change, which specials change, all these other types of things, things that are specific to a customer, right?
But if you've only been on the job for 3, 4, 5 months, what if you had the expert, the best performing person in the call center right next to you?
Robert Zirk: It's safe to say the interest in voice technology is here.
Voicebot.ai reports that in 2021, 78.4 million adults in the United States were active monthly users of voice assistants on their smartphones. And, as of 2022, 94.9 million adults in the U.S. owned at least one smart speaker, with half using them on a daily basis.
It's clear that voice has the potential to improve our lives in so many capacities. But if the user base is there, and voice is capable of everything we've talked about, why doesn't it seem as ubiquitous in our lives as the internet and smartphones?
Tobias explained that up to this point, we've been in the early phases of voice technology, similar to how the potential of the internet wasn't necessarily fully unlocked back in the mid nineties.
Tobias Dengel: So any new technology follows the paradigms of the old technology or something that people are familiar with. So the first TV shows were really just broadcasts of existing radio shows or plays, just like the early Internet was really just a reconstitution of things we know. When you saw Time magazine on AOL in 1995, it was really like a PDF.
It didn't have any interactivity. It ultimately was easier to read the print magazine than to look at it on AOL. And early mobile was super frustrating, right? We all had mobile devices that were clunky and difficult to use, and we all used them because we kind of got that this was going to be a big deal.
But until the true user interface revolution happened in each of those cases, there wasn't the breakthrough, right?
Robert Zirk: Think of how easy it is, for example, to stream video content today relative to the nineties, when watching video, if you could even find the content you wanted, would require you to jump through many hoops, only to receive relatively low-quality picture and sound. Or how, as Tobias mentioned, the touchscreen popularized by the iPhone unlocked an ease of use that other inputs couldn't match.
Bret noted that the way people have actually been using voice technology up to this point hasn't matched how they thought they'd use it.
Bret Kinsella: We have this idea of single turn versus multi-turn conversations.
Robert Zirk: A single turn conversation is basically a command: one input, one response. You don't need to provide additional context. You just want to turn the lights in your kitchen on or off. Multi-turn conversations allow for more complex interactions that build off of the context of previous inputs or questions.
Bret Kinsella: And there was a lot of hype around these multi-turn conversations. You're gonna talk to these assistants like they're your friend or like they're your mentor and all these things. And those applications do exist. But the vast majority of people, in fact, nine of the top 10 use cases that were adopted on smart speakers are all what we'd call single turn. It's request, response. You request something, it's delivered to you, an action is executed, it gives you information and you're done. And what the users found was this was really convenient for asking for a song, for initiating a phone call, for setting a timer, for getting the cooking temperature, like you said, for the recipe you're working on.
I kind of feel like what we did with voice assistants was the hors d'oeuvre. This was like, "Okay, we're gonna make this more accessible, or we're working with natural language." Now we've got the ability for machines to do some basic reasoning to generate things that didn't exist before.
Robert Zirk: Tobias predicts that people will be much more likely to interact with multiple turns now that conversational AI is being combined with generative AI, which he refers to in The Sound of the Future as the ultimate interface.
Tobias Dengel: When you think about how voice is interpreted by a machine, it really has three components, right? One is natural language processing, which means can it transcribe what you're saying into words? Then the second is natural language understanding. Can it take what those words are and understand what you meant?
And then the third paradigm is how does it respond to that request? People have been working on voice technology for 60 or 70 years, but, over the last five to seven years, there's been a lot of advances in natural language processing and natural language understanding, but Gen AI as an underpinning really has accelerated both, because in terms of transcribing what you're saying, the system does its best to figure out, "Alright, what did Robert say? What were the words?" But there might be two or three words that are wrong, then it runs it through a generative AI system, and what's Gen AI really, really good at? Pattern recognition. And so it's going to say, "Alright, most likely based on the words around that we know what Robert said, the other word that we're not quite sure of has to be X or is a 99 percent chance."
We're seeing that every day. Like, if you look at what Siri does when you're talking, you probably have noticed that it tries to transcribe what you're saying. And then, like, a second or two later, it switches it up because what it's done is processed it through Gen AI in some sense. The same goes for understanding what you meant.
One of the tools everyone talks about that Gen AI is so good at is summarization, right? So it can take the words that you sent and say, "Alright, what he means by these words is a command to the system to show all movies between 7 p.m. and 10 p.m. tonight," and turn it into an API call. And then you get a response either on a screen or via voice.
So all those three things have to work together really well. And certainly the first two, and likely the last one as well, are made so much better by Gen AI. What I keep telling people is the Gen AI revolution, the way it's going to manifest itself for most people, is voice, right?
The ChatGPT next gen, their big announcement with the app, was the app is now voice powered. And I think a lot of the work that they've been doing at OpenAI and otherwise is how to make the interface into Gen AI much more voice based and voice friendly.
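As a rough illustration of the three stages Tobias outlines (transcription, understanding, response), here is a minimal Python sketch. The function names, the hard-coded transcript and the intent are hypothetical stand-ins for real speech recognition and generative AI components, not any particular vendor's API.

```python
# Minimal sketch of a voice pipeline: transcribe -> understand -> respond.
from dataclasses import dataclass

@dataclass
class Intent:
    action: str
    start: str
    end: str

def transcribe(audio: bytes) -> str:
    """Stage 1 (natural language processing): speech to a rough transcript.
    Stand-in for a real ASR model; returns a canned transcript here."""
    return "show all movies between 7 pm and 10 pm tonight"

def refine_and_understand(transcript: str) -> Intent:
    """Stage 2 (natural language understanding): a generative model would correct
    unlikely words from context and map the sentence to a structured intent.
    Hard-coded here for illustration."""
    return Intent(action="list_movies", start="19:00", end="22:00")

def respond(intent: Intent) -> dict:
    """Stage 3: turn the intent into an API call and return data for the screen."""
    return {"endpoint": f"/movies?start={intent.start}&end={intent.end}",
            "render": "on_screen"}

print(respond(refine_and_understand(transcribe(b"..."))))
```

The point of the sketch is the shape of the pipeline: the spoken request ends as a structured API call, and the answer comes back on a screen rather than being read aloud.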
Robert Zirk: I asked Tobias: if we've been in the early stages of voice technology all along, what have we learned that will underpin the voice-first user interface revolution?
In The Sound of the Future, Tobias makes the argument that voice is best used in multimodal experiences, meaning the interaction involves more than one type of communication. In this case, it involves voice and at least one other method of input or output.
Tobias Dengel: Voice in a multimodal environment is the fastest possible way that we can communicate to machines and they can communicate back to us. You have to have voice and screen concurrently.
The reason we want to use voice at the end of the day is because it's so fast versus typing, three times as fast as on a keyboard, five times as fast as on a mobile device. So we always are wanting to speak into our devices, but it's also super slow to listen to transmissions and hard to remember what's going on versus a screen. The early applications have completely missed that. And it goes all the way to the nomenclature we use. We call them smart speakers, which is entirely backwards. They should be smart mics. We want to be talking to them, not listening to them.
Robert Zirk: Tobias gave an example that we might ask our voice assistant "what movies are playing tonight?"
But what we don't want to hear back is...
Voice assistant: Questions for now: The movie. In 3D. Plays at. 5. 45. p.m. 7. 30. p.m. 9. 45. p.m. 11. 30. p.m.
Questions for now: The movie. In panoramic view. Plays at...
Robert Zirk: You get the idea.
Tobias Dengel: What we want is to see that on a screen and then just say, "Get me two tickets to Star Wars at 8 p.m." because you're already authenticated, etc. And so once that paradigm evolves, voice will become the primary way that we transmit information to machines, not necessarily receive it.
One of the mistakes that has been made in voice design is relying only on this concept of copying human conversation. There is a theory called the Uncanny Valley, presented in Japan in the 1970s and proven many, many times over, that basically says the more human you make an experience without it actually being human, the more freaked out and less trusting the users get. Because we know it's not human, and it's kind of bugging us out. And so this is another reason why I think the multimodal approach is so important, because if it's multimodal, the human being on the other side isn't comparing the experience to what a human conversation would be.
Any new technology has this trust issue that it has to get through and voice hasn't crossed the trust issue because we didn't have multimodal. And once we break through that, we're going to be off to the races.
Robert Zirk: So if voice technology has the potential to do everything we've talked about, what do brands need to do to prepare for a voice-first future?
Tobias advises listeners to think about where voice holds a distinct advantage over any other type of interaction a customer might have with your business.
Tobias Dengel: It all starts with looking at what voice does better than other forms of communication, then mapping out literally every single process where customers or employees are interacting with the system and asking ourselves, "Alright, in any of these steps, do any of these voice cases apply? And if they do, can we use voice to make them more efficient?" And really going through a very thoughtful process around how to do that. I think when we start doing that with clients, the breakthroughs are astounding and rapid. Like within a three to five day work session, we can identify literally hundreds of processes in a company that are going to be much, much more efficient using voice.
Robert Zirk: Bret says that setting up a knowledge management system is one of the easiest actions brands can take, not just for self-service applications with voice technology, but also to improve the organization's capacity to solve problems and respond to inquiries.
Bret Kinsella: So for example, Morgan Stanley did it for their investment analysts, basically took a hundred thousand documents. Hundred thousand documents! Like, there's no one on staff that's read a hundred thousand documents, right?
So they have a hundred thousand documents, but this is all useful information for their wealth managers. And now they can just ask it a question. "What about this? Here's a situation, these three things, you know, what should I look at next? Or what might I suggest?" Those types of things. What are the rules associated with these types of investments? So those are things that they would have to ask somebody. They would be sending email, and we go back to this idea of unknown unknowns, like we don't know about them, they're out there. We've got this thing about known knowns, like we know it and we know where it is, but we also have these things with, like, unknown knowns. Like, nobody knows about this part of it. It's like the organization knows it, but the organization doesn't know it knows it, or the people in the organization don't know that.
So then they're just running around trying to find the answer to something when it's right there and that's what some of these tools will be able to do.
Robert Zirk: With any new paradigm-shifting technology, there are important questions. For instance, will voice technology make some roles redundant?
Tobias notes that, much like the arrival of past technologies, some jobs will become obsolete, but other jobs will be created. It's just that the latter is more difficult to predict.
Tobias Dengel: We're starting to see the concept of a prompt engineer, right? That is a job that didn't exist before Gen AI. We're starting to see the job of a conversational designer, designing a conversation with the machine and how do you approach that, et cetera.
So, there'll be a whole series of tech jobs that get created. But I don't think it's just tech jobs. What I think is going to happen is it's going to allow people to do the higher value part of their jobs.
Robert Zirk: Another important consideration when leveraging voice technology is maintaining customer trust. Hallucinations and deepfakes are real concerns for businesses looking to integrate generative AI. But Tobias noted the value of partnerships to help address potential issues.
Tobias Dengel: Hallucinations are interesting because you kind of confront that in two different ways. One is by creating much more narrowly bounded voice and Gen AI experiences, right?
It's really hard to create a completely human experience without a hallucination sneaking in. But if you're saying, "Look, we're a banking app. We're going to give you banking information." If someone starts asking about the weather, we're not going to answer that.
Whereas in a human to human, completely human assistant paradigm, you kind of have to answer those questions and you get off track. So A: brands have an advantage because they can narrowly bound the voice experience. And then second, there are techniques like using a second LLM to look at what the first LLM said and the chances of two LLMs making exactly the same error or having the exact same hallucination is very, very low. Same thing goes for voice authentication. Voice authentication on its own can get cracked, not 100 percent easily because there's ways to see what is a simulation and what's not, but then when you add two factor on, like, an eye scan or fingerprint scan or a PIN or whatever it is, testing the phone number that the call is coming from, all of a sudden you can get to a very, very high degree of reliability. And so it's always looking at what the latest thinking is and the latest research.
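Here is a minimal sketch of the two safeguards Tobias mentions: narrowly bounding the experience, and having a second model review the first model's draft before it reaches the customer. The ask_llm helper, model names and topic list are hypothetical placeholders, not a real provider SDK.

```python
# Minimal sketch: bounded scope plus a second-model review of the first model's answer.
ALLOWED_TOPICS = {"balances", "transfers", "cards"}  # narrowly bounded banking domain

def ask_llm(model: str, prompt: str) -> str:
    """Stand-in for a call to whichever LLM provider is used; canned replies here."""
    if model == "reviewer":
        return "YES"
    return "A SWIFT code identifies your bank for international wire transfers."

def answer_banking_question(question: str, topic: str) -> str:
    # Out-of-scope requests are refused instead of risking an off-topic answer.
    if topic not in ALLOWED_TOPICS:
        return "Sorry, I can only help with banking questions."
    draft = ask_llm("assistant", f"Answer this banking question: {question}")
    verdict = ask_llm(
        "reviewer",
        f"Question: {question}\nProposed answer: {draft}\n"
        "Is this answer accurate and on-topic? Reply YES or NO.",
    )
    # Two models are unlikely to make the same mistake; disagreement falls back to a human.
    if verdict.strip().upper().startswith("YES"):
        return draft
    return "Let me connect you with an agent to confirm that."

print(answer_banking_question("What number do I need for an international wire?", "transfers"))
```

The same layering idea applies to authentication: voice biometrics alone can be fooled, so a second factor (PIN, device check, fingerprint) is checked before the request is trusted.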
Robert Zirk: Bret stressed that with the customer expectation of trust and safety, brands need to ensure they're using deepfake detection and voice clone detection tools. But he also noted deepfake technology can be an opportunity for brands if it's used properly.
Bret Kinsella: Some are using it to, like, their CEO can give an address to the company and they can do it in eight languages, right? They do it because, not to fool anybody, but it's just to make it easier for people to understand what's being conveyed in their native language from the CEO of the company. There's other things that people are doing from a creative standpoint with these technologies. So my number one advice is just get out there and do this. This is a learning by doing market, and the more you know, because you've done something, and it doesn't even have to be public, it can just be internal, then you'll understand where the opportunities are and where the potential risks are. You'll know better yourself. You won't just have to read about it. And then you'll be able to leverage it more effectively.
Robert Zirk: I asked Tobias if voice technology is the sound of the future, how far off is the future?
Tobias Dengel: I think we are in the middle of it. We are working with clients every day to voicify their apps. And I think that's really where the change in thinking is happening. This isn't about building Alexa skills or something within Siri, because no one knows, like, if you're a Citibank customer, you don't know how to get to Citibank via Alexa or what it even does, right?
But now you're going to start with the app because it's always there. You hit it with one tap and then you start speaking. That's the implementation most of us are going to see and most of us are going to see it in 2024 and 2025 for most brands. And then it's going to get better and better and better.
And it's going to become ubiquitous in the sense that most of the functions that we ask an app to do are going to be voice-first. And then I think, a decade from now, like, 2030, 2035, something else amazing is going to happen, but this is going to be a long run.
Robert Zirk: Bret also sees the rise of voice technology in the near future, but noted it might take a few years for people to start to use voice assistants more frequently.
Bret Kinsella: We live in an era where there's an onslaught of information, of requests, right? Everybody feels it. There's clutter in terms of information, and there's an overwhelming avalanche of requests for our time, for our attention, whatever it might be, right?
And there's a friend of mine who founded a company called Soul Machines. His name's Greg Cross. He's one of the co-founders and CEO. And for Synthedia, our generative AI focused publication and research service, we had an online conference and the title of his presentation was: The robots are coming and they're just in time. Because we've created a world where we need robots to help us navigate it.
And I love that idea that they're just in time. And they're not only just in time, they're better and faster than I thought they were gonna be.
And I'm the one who said that I thought there would be a million, you know, there's gonna be a million assistants just by businesses. And I think very legitimately that's true today. If you just think about everybody who has even some sort of assistant type chatbot on their website or in the mobile app and things like that. But maybe I'll just do that here. I mean, I'm going to revise that. I'll say there's gonna be a billion assistants.
Robert Zirk: A Voicebot.ai report highlighted that, in 2021, 44% of consumers in the U.S. said they'd like to see a voice assistant capability within their mobile apps.
As Tobias mentions, the brands that are leading the way are the ones who are able to find quick and easy ways to take a frustrating experience and make it a delightful one.
Tobias Dengel: I get asked a lot, "What's the sign, like, how will you know that voice is starting to arrive in the way that you're talking about?"
And to me, it's that the apps that we use, whether they're apps for an employee to interface with a company or for us to interface with a brand as consumers, that there's a big mic button in the middle of an app or a part of an app and that's the primary way we tell the app what to do. And when that happens, you'll know that voice has arrived.
And a lot of times it's just the app doing something. Sometimes it might connect us to customer service, but great. Now we're connected to customer service, and customer service knows what we were trying to do in the app and that we're pre authenticated, et cetera. So it takes so much friction out of the customer service experience.
At TELUS International and WillowTree together, that's what we're so excited about bringing to customers: this complete experience where all these different modalities are tied together. And it's invisible to the customer what's going on in the background. They're just having this seamless, delightful experience.
Robert Zirk: And a great start in leveling up your understanding of voice technology is the book Tobias wrote, The Sound of the Future. I asked Tobias who he wrote The Sound of the Future for, and who needs to hear the message of the book the most.
Tobias Dengel: Anyone involved in leading businesses or organizations, and they don't have to be businesses, they could be nonprofits. First of all, at the executive level, and secondly, in marketing and product and customer service. I think folks will find it incredibly illuminating and hopefully inspiring how voice is going to make their brands more effective in terms of dealing with consumers.
On the employee side, I think it's senior HR executives and senior IT executives really understanding the importance of getting going here. And that's really what I hope: to inspire folks to put the book down and say, "You know what? This is just like the internet was. And this is just like mobile was. This combination of AI and voice is a really, really big deal. And we have to start implementing it. And at least coming up with MVPs, examples of where we're going to use this, testing it very, very quickly because it will be an opportunity to leapfrog some of our competition."
Robert Zirk: You can find Tobias's book, The Sound of the Future, on Amazon, Barnes & Noble, and in thousands of local bookstores. And yes, it's also available as an audiobook. For a full list of retailers or to learn more about the book, visit TobiasDengel.com. We'll put a link in the podcast description as well.
Thank you so much to Tobias Dengel and Bret Kinsella for joining me and sharing their insights today. And thank you for listening to Questions for now, a TELUS International podcast.
Be sure to follow Questions for now on your podcast player of choice to be the first to hear the latest episodes as soon as they're released.
I'm Robert Zirk, and until next time, that's all... for now.