Healthcare AI’s trust problem: The critical role of UX in solving it

Bobby Brown
Vice President, Healthcare Transformation

Key takeaways
- A recent TELUS Digital survey revealed clear consumer boundaries and trust gaps around the use of AI in healthcare.
- These gaps stem from ongoing patient concerns around data privacy, response accuracy, a perceived loss of human touch and limited transparency and explainability.
- By embedding human-centered UX design, healthcare organizations can proactively build trust, increase adoption and deliver AI-enabled healthcare that patients can be confident using.
Every app will need to be rebuilt for an AI-first world. That rebuild is already underway. A TELUS Digital survey found that 32% of consumers have already replaced at least one app with an AI assistant, and 36% expect to rely on AI more than apps within a year.
In most industries, that's an opportunity. In healthcare, it's a test that comes with real consequences if organizations get it wrong. With 88% of consumers surveyed having personally seen AI make a mistake, the stakes in healthcare are particularly high. A misdiagnosis, a medication error, a patient routed to the wrong provider or an unnecessary emergency visit creates patient anxiety and cost, strains care capacity, drives up call volume and, in the worst cases, directly harms patient outcomes.
Compounding this is the fact that asking AI assistants like ChatGPT or Claude follow-up questions, such as “Are you sure?”, rarely yields a more accurate response. It's no surprise, then, that nearly half of patients (46%) use AI only as a starting point for health decisions, and that 16% avoid it entirely.
The trust gap is wide. In this article, we’ll make the case that closing it starts with UX design and lay out what that actually looks like from data annotation to the final patient-facing interface.
What our healthcare AI prototyping revealed about patient trust
In a rapid prototyping exercise building an AI-powered healthcare companion, cross-functional TELUS Digital teams went from AI healthcare concepts to working code in two weeks. User research was a crucial stage, in which recent emergency care patients shared sentiments on the contextual use of the technology. While there was positive sentiment toward key AI benefits, 92% of respondents reported that human oversight was important to their comfort with AI in healthcare.
Because healthcare AI operates in deeply personal contexts where accuracy, fairness and data privacy matter more than speed, small errors, bias or unclear reasoning can feel high-stakes to users. Of all the levers available to healthcare organizations, UX is the earliest and most direct way to respond to patient concerns, shaping whether patients encounter them at all.
How thoughtful UX design can help healthcare organizations overcome consumer distrust
Creating healthcare AI that’s worthy of user trust requires intentional design decisions made long before a patient ever sees the interface, including:
- How training data is curated
- How outputs are explained
- How escalation paths are built
The following UX best practices provide a practical foundation for healthcare organizations eager to overcome consumer distrust and build lasting confidence.
Make data privacy visible
According to an Experian report, roughly half of healthcare decision-makers cite data privacy and security as the top barrier to AI adoption. Patients understandably have questions around who has access to their personal data, how it will be used and what happens in the event of a data breach. In practice, these concerns may surface when a patient interacts with a virtual health assistant and feels the need to withhold critical details out of fear that the data will be misused. When that happens, the AI’s ability to provide accurate guidance is compromised. That's the compounding cost of distrust.
Patients don't read privacy policies. They look for signals. When it comes to UX, security must be both seamless and visible. Features like identity verification and fraud detection should reduce friction without disappearing entirely. When patients feel their data is protected, they are more likely to provide accurate information, ultimately improving AI performance and building trust.
Establish clear expectations
Concerns around response accuracy fuel skepticism. Only 12% of the consumers we surveyed said they trust AI for some health-related questions. When a patient distrusts the response, they may ignore something critical that requires immediate human follow-up.
Healthcare users need a clear understanding of AI’s abilities and its limitations, depending on the application. In practice, that means grounding UX in high-quality, expert-guided data and rigorous annotation practices, and surfacing limitations directly through disclaimers or confidence indicators. Establishing clear expectations up front strikes a critical balance between appropriate use and long-term trust.
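As a concrete illustration of confidence indicators and disclaimers, consider a minimal sketch of confidence-gated messaging. It assumes the underlying model exposes a numeric confidence score per response; the function names, thresholds and labels are illustrative, not a real product API.

```python
# Illustrative sketch: gate how an AI answer is surfaced based on a
# (hypothetical) model confidence score. Thresholds are assumptions.

DISCLAIMER = ("This is general information, not medical advice. "
              "Please consult a clinician for diagnosis or treatment.")

def present_response(answer: str, confidence: float) -> dict:
    """Decide how an AI answer is shown to the patient."""
    if confidence < 0.5:
        # Too uncertain: withhold the answer and offer human escalation.
        return {"show_answer": False,
                "message": ("I'm not confident enough to answer this. "
                            "Would you like to connect to a clinician?")}
    if confidence < 0.8:
        # Moderately confident: show the answer with an explicit label.
        return {"show_answer": True,
                "message": f"{answer}\n\n{DISCLAIMER}",
                "confidence_label": "Moderate confidence"}
    # High confidence: the disclaimer still applies, per expectation-setting.
    return {"show_answer": True,
            "message": f"{answer}\n\n{DISCLAIMER}",
            "confidence_label": "High confidence"}
```

The design choice worth noting is that even high-confidence answers carry the disclaimer: expectation-setting is constant, while the gating only changes how prominently uncertainty is surfaced.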
Design for diverse populations
AI systems must reflect the diversity of the populations they serve. A lack of data diversity can perpetuate existing healthcare disparities, including language and cultural barriers. Addressing this requires UX practices that ensure representative data, localized language support and culturally relevant conversational flows. When patients see themselves represented in the AI results, trust improves and engagement grows.
Preserve human connection
A study from the International Journal of Medical Informatics found that 50-70% of patients had concerns about the loss of human connection in healthcare AI. For most patients, healthcare professionals play a critical role in personalized care and emotional support, while AI can feel transactional.
UX must reinforce that AI is not a replacement for human care. That starts well before deployment, with a human-in-the-loop approach that incorporates supervised training on high-quality human data and ongoing model refinement to anticipate user needs. In practice, this means providing clear messaging and consistent AI iconography that lets users know when they’re interfacing with AI, and ensuring clear paths to escalate to human support (e.g., “Connect to a clinician” or “Schedule a follow-up appointment”). Reinforcing a human-in-the-loop approach from model development through real-world interactions reassures patients and enhances overall care.
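To make the escalation-path idea concrete, here is a simplified sketch of routing logic that hands a conversation to a human whenever the message shows urgency signals or the user asks for a person. The keyword lists and function names are assumptions for illustration; a production system would use far richer triage models.

```python
# Illustrative escalation routing: assumed keyword triggers, not a
# real triage system.

URGENT_TERMS = {"chest pain", "can't breathe", "suicidal", "overdose"}
HUMAN_REQUESTS = {"talk to a person", "real doctor", "speak to a nurse"}

def needs_human(user_message: str) -> bool:
    """Return True if the message should be escalated to a human."""
    text = user_message.lower()
    return any(term in text for term in URGENT_TERMS | HUMAN_REQUESTS)

def route(user_message: str) -> str:
    if needs_human(user_message):
        # Surface an explicit escalation path, e.g. "Connect to a clinician".
        return "escalate: Connect to a clinician"
    return "ai: continue conversation"
```

The point of the sketch is the default: when in doubt, the interface offers a visible human path rather than keeping the patient inside the AI loop.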
Build in explainability
Almost nine out of ten Americans (86%) agree that a problem with using GenAI in healthcare is not knowing where the information is coming from and how it was validated. A lack of transparency into how recommendations are generated and where data is sourced can further erode patient confidence.
Explainability is foundational to user trust and critical in maintaining regulatory compliance standards. Transparency must be built in from the start. UX solutions should clearly surface sources, citations and provenance for health information, reducing the perception of AI as a black box and helping users understand how recommendations are generated.
GenAI platforms like Fuel iX™ offer tools such as drag-and-drop retrieval augmented generation (RAG) to ground outputs in verified, approved content, enabling healthcare organizations to deliver evidence-based responses and further strengthen patient trust.
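To show what grounding with provenance looks like in miniature, here is a toy retrieval augmented generation (RAG) sketch: answers are assembled only from an approved document store, every response carries the source it drew from, and the system refuses rather than improvises when nothing verified matches. Platforms like Fuel iX provide this as managed tooling; everything below, including the corpus and function names, is a simplified assumption.

```python
# Toy RAG grounding sketch: naive keyword retrieval over an approved
# corpus, with source IDs attached to every answer. All content is
# illustrative.

APPROVED_DOCS = {
    "med-guide-001": "Take amoxicillin with a full glass of water.",
    "triage-faq-014": "Chest pain lasting more than a few minutes warrants emergency care.",
}

def retrieve(query: str) -> list[tuple[str, str]]:
    """Naive keyword overlap retrieval over the approved corpus."""
    words = set(query.lower().split())
    return [(doc_id, text) for doc_id, text in APPROVED_DOCS.items()
            if words & set(text.lower().split())]

def grounded_answer(query: str) -> dict:
    hits = retrieve(query)
    if not hits:
        # Refuse rather than hallucinate when nothing verified matches.
        return {"answer": None, "sources": [],
                "message": "No verified source found; connecting you to a clinician."}
    doc_id, text = hits[0]
    return {"answer": text, "sources": [doc_id]}
```

The structural takeaway is the `sources` field: because every answer is traceable to an approved document, the interface can surface citations and reduce the black-box perception described above.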
“We love to perform user testing, even at a low stage of fidelity, because at that point, we can still get an idea of how the work is performing.” – TELUS Digital Design Director Ryan Davis
Together, these UX considerations form the foundation for building healthcare AI that people can understand, trust and ultimately adopt. For more applied insights, explore our five key design techniques for building trustworthy AI experiences.
A shifting interface landscape raises the bar further
Building trust in healthcare AI is challenging enough when the interface is stable, but the way users interact with AI is changing fast, and patient expectations are moving with it.
"At TELUS Digital, we've long believed that every app will need to be rebuilt for an AI- and voice-first world to deliver richer, more intuitive experiences," says TELUS Digital President Tobias Dengel. "The brands that invest in strong, user-friendly application foundations and securely connect their AI capabilities through shared APIs will be best positioned to deliver seamless, personalized and trustworthy interactions within AI assistants."
Our user research on the future of AI interfaces supports this trajectory. Experienced AI users shared how they expect to interact with AI in just five years. They imagine AI that is deeply integrated, context-aware and connected across the tools they use every day, with source transparency and user control as baseline requirements, not advanced features. Notably, 89% of surveyed users want a deeply personal partnership with AI that truly knows them.
But in healthcare, that vision only converts to adoption if the trust architecture keeps pace. That architecture has to be built before the moment of adoption arrives, not in response to it. The organizations that move now will set the standard. The ones that wait will inherit it.
How to select the right partner to build trustworthy healthcare AI
Understanding your organization’s AI maturity is step one. Step two is choosing the right partner to act on it. Here’s what to look for:
AI expertise and responsible design
Look for a partner with a proven ability to build and validate healthcare AI responsibly, supported by ongoing model monitoring. They should be able to clearly articulate how data is sourced, governed and audited for bias. This foundation is imperative in ensuring compliance and building patient trust.
Human-centered UX capabilities
A strong UX partner can translate complex AI outputs into intuitive, accessible user experiences. In healthcare, that means designing for how patients actually communicate, which can be emotional, contextual and not limited to a single concern at once. A patient describing chest pain while also asking about a new medication and a family history of heart disease isn't presenting three separate queries; they're having one conversation. The best UX partners build interfaces capable of holding that complexity, handling multi-intent and multimodal interactions without losing the thread or the person on the other end. The result is reduced confusion, increased engagement and strengthened trust.
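The multi-intent point above can be sketched in code: a single patient message often carries several concerns that the interface must track together rather than treat as separate queries. The intent categories and keyword lists below are purely illustrative assumptions; real systems would use trained intent classifiers.

```python
# Hypothetical multi-intent detection sketch. Categories and keywords
# are illustrative only.

INTENT_KEYWORDS = {
    "symptom_report": {"pain", "dizzy", "nausea", "fever"},
    "medication_question": {"medication", "dose", "prescription", "pill"},
    "history_context": {"family history", "diagnosed", "previous"},
}

def detect_intents(message: str) -> list[str]:
    """Return every intent present in one patient message."""
    text = message.lower()
    return [intent for intent, keys in INTENT_KEYWORDS.items()
            if any(key in text for key in keys)]
```

Applied to the chest-pain example above, one message yields three intents that the interface must hold as a single conversation, not three tickets.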
Strategic end-to-end approach
A partner who views AI as part of a broader, end-to-end customer journey is essential in delivering experiences that anticipate user needs and provide meaningful support. With this approach, healthcare providers can ensure AI enhances overall care and builds trust, all without losing the human touch.
Healthcare domain experience
Finally, your partner should have a deep understanding of clinical workflows, as well as regulatory and data privacy requirements. This ensures healthcare AI engagements integrate seamlessly with existing care protocols and don’t compromise patient safety or provider efficiency.
When TELUS Digital built an emergency care AI prototype, we brought in Matthew Trowbridge, MD, MPH, an emergency medicine physician and researcher, to ground the work in clinical reality. He shared a belief that could be supported by providers and patients alike:
"We could have much better utilization of our existing resources, which includes doctors' attention, time, nursing, even just the physical space of emergency departments. Everything could run a lot better."
The potential is there, but none of it is possible if patients don't trust the system enough to use it. Ready to build healthcare AI your patients will actually rely on? Let's talk about next steps.



