What if a chatbot could replace a clinician’s 30-minute onboarding call?

A conversation design project where I enabled patients to easily navigate MAIA independently, and gave clinicians back the time they were spending showing them how.

TL;DR

Problem: New users arriving on MAIA, a patient engagement health portal, had no guided way to understand what the platform offered or how to navigate it. Drop-off was high and the onboarding experience was entirely self-directed.

Solution: Designed and built a guided AI chatbot using OpenDialog and Google Dialogflow, covering conversation flow, voice and tone, NLU training, and full integration with the live portal across 3 NHS trusts.

Impact: Patient engagement on MAIA surged by 40% across 3 NHS trusts. 7 in 10 healthcare professionals reported feeling less burdened by patients who had recently adopted the service.

CONTEXT

MAIA was a patient engagement portal built for NHS trusts, with features such as appointment management, health resources, and clinical communications. Functionally it was solid, but new users arriving for the first time had no guidance.

The portal assumed people knew what it was and how to use it. Most didn't.

The brief was to design a chatbot that could give new users a virtual tour. But a chatbot in a healthcare context is particularly high-stakes. The wrong tone feels cold. An overly scripted flow feels patronising. And in a clinical environment, broken interactions erode trust fast.

USER RESEARCH FINDINGS

Before writing a single line of conversation, I needed to understand two things: what new users actually struggled with on MAIA, and how people in a healthcare context expect to be spoken to by a digital service.

Users didn't know where to start

New patients consistently missed the appointment booking entry point. The portal structure assumed a familiarity with NHS digital services that most users didn't have.

Users wanted to type, not just tap

Button-driven conversation flows can be frustrating. Users consistently tried to type their own queries even when buttons were available, a signal that shaped the entire conversation architecture.

Broken chatbot interactions erode trust fast

In a clinical context, a bot that gets things wrong doesn't just frustrate users, it undermines confidence in the whole service. Accuracy and graceful failure states were non-negotiable.

CONVERSATION ARCHITECTURE

OpenDialog supports two conversation models. Rather than choosing one, I designed a hybrid architecture that used each where it genuinely served the user better.

Static conversation

Guided, button-based. The bot leads. Users follow a scripted path.

Build flow in Miro → close all paths → build in OpenDialog sandbox → test at every step → integrate with MAIA
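The "close all paths" step above can be sketched in code. This is an illustrative model only, not the actual OpenDialog conversation definition: the step names, copy, and buttons are hypothetical, but it shows the property a closed scripted flow must have, namely that no button input ever leads nowhere.

```python
# Illustrative sketch only: a button-driven scripted flow modelled as a
# simple state machine. Step names and copy are hypothetical, not the
# actual OpenDialog/MAIA conversation definition.

FLOW = {
    "welcome": {
        "bot": "Hi, I'm the MAIA guide. What would you like to see first?",
        "buttons": {"Appointments": "appointments", "Health resources": "resources"},
    },
    "appointments": {
        "bot": "You can book and manage appointments from the Appointments tab.",
        "buttons": {"Back": "welcome", "Finish tour": "end"},
    },
    "resources": {
        "bot": "Health resources live under the Library section.",
        "buttons": {"Back": "welcome", "Finish tour": "end"},
    },
    "end": {"bot": "That's the tour. You can reopen me any time.", "buttons": {}},
}

def step(state: str, button: str) -> str:
    """Advance the scripted flow; unknown input keeps the user in place,
    so there is no button press that leads nowhere."""
    return FLOW[state]["buttons"].get(button, state)

def check_all_paths_closed(flow: dict) -> bool:
    """The 'close all paths' step: every button must target a defined state."""
    return all(target in flow
               for spec in flow.values()
               for target in spec["buttons"].values())
```

A check like `check_all_paths_closed` is the automated equivalent of walking every branch of the Miro flow before building it in the sandbox.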

Dynamic conversation

Open-ended, NLU-powered. Users type freely. The bot interprets intent.

Write intents in NLU spreadsheet → attach to Dialogflow interpreter → set confidence 60–80% per Q&A → test and iterate
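The per-Q&A confidence step can be sketched as routing logic. This is a hypothetical illustration, not the production code: the intent names, responses, and thresholds are invented, but the shape is the same, answer only when the interpreter's confidence clears that Q&A's threshold, otherwise fail gracefully.

```python
# Illustrative sketch only: routing a Dialogflow-style intent result
# against a per-Q&A confidence threshold (the 60-80% band used in
# Phase 1). Intent names, copy, and thresholds are hypothetical.

FALLBACK = "Sorry, I didn't quite get that. Would you like a guided tour instead?"

RESPONSES = {
    # intent: (bot response, minimum confidence required to use it)
    "book_appointment": ("You can book from the Appointments tab.", 0.70),
    "find_resources": ("Health resources are in the Library section.", 0.60),
}

def respond(intent: str, confidence: float) -> str:
    """Answer only when confidence clears the intent's own threshold;
    otherwise acknowledge failure and offer an alternative path."""
    if intent in RESPONSES:
        answer, threshold = RESPONSES[intent]
        if confidence >= threshold:
            return answer
    return FALLBACK
```

Setting the threshold per Q&A rather than globally is what lets high-risk answers demand more certainty than low-risk ones.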

Example of a full user flow showing static and dynamic paths, decision points, fallback states, and module boundaries across the MAIA portal

VOICE, TONE AND UX WRITING

Conversation design is different from UI design. Every word the bot says is a design decision. I established the bot's character before writing a single response: warm, direct, never clinical. Three writing rules applied to every line:

No jargon, no assumptions

Write as if the user has never used a healthcare portal before. Every term that required prior knowledge was rewritten in plain English.

Always give the user a clear next step

Every bot response ended with an action or an offer. No dead ends.

Acknowledge failure gracefully

When the bot didn't understand, it said so clearly and offered an alternative path. A bot that gives a wrong response breaks trust immediately. I also had to avoid over-personalisation, as it can undermine clinical credibility.

NLU TRAINING

For the dynamic conversation to work, the bot needed to understand what users actually meant contextually, not just what they literally typed. I built an NLU training spreadsheet mapping 20 user intents to bot responses across each of the features in MAIA.

Each response was calibrated to hit the right confidence threshold. Too low and the bot hedges on every answer. Too high and it confidently delivers wrong ones. The Phase 1 target was a confidence threshold of 60–80% per Q&A pair.
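The training spreadsheet itself is easy to picture as rows of intent and example utterance. The sketch below is hypothetical: the column names and phrases are invented, not the actual MAIA training data, but it shows how such a sheet groups into per-intent training phrases ready to feed an interpreter like Dialogflow.

```python
# Illustrative sketch only: an NLU training spreadsheet as CSV rows of
# (intent, example user phrase). Column names and phrases are
# hypothetical, not the actual MAIA training data.
import csv
import io

SHEET = """intent,utterance
book_appointment,How do I book an appointment?
book_appointment,I need to see my GP
find_resources,Where are the health leaflets?
"""

def load_training_phrases(csv_text: str) -> dict:
    """Group example utterances by intent, the structure an interpreter
    such as Dialogflow expects as training phrases."""
    intents = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        intents.setdefault(row["intent"], []).append(row["utterance"])
    return intents
```

Keeping the source of truth in a spreadsheet meant non-developers could add utterances, with the grouping step done mechanically.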

GOOGLE DIALOGFLOW INTEGRATION

I worked closely with the lead developer to match OpenDialog utterances with the Dialogflow intent interpreter. The Dialogflow knowledge base was fed with the full Q&A set and tested daily in standups, catching mismatches before they compounded into bigger problems in a live clinical environment.
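The daily mismatch-catching can be sketched as a regression check. This is an assumption about shape, not the team's actual harness: `detect` stands in for a real Dialogflow detect-intent call, and the utterances are hypothetical, but the idea is the same, replay every known utterance and flag any whose detected intent drifts from the expected one.

```python
# Illustrative sketch only: a daily regression check over known
# utterance -> expected-intent pairs. `detect` is a stand-in for a real
# Dialogflow detect-intent call; the cases here are hypothetical.

CASES = [
    ("How do I book an appointment?", "book_appointment"),
    ("Where are the health leaflets?", "find_resources"),
]

def find_mismatches(detect, cases):
    """Run each known utterance through the interpreter and report any
    whose detected intent differs from the expected one."""
    return [(utterance, expected, got)
            for utterance, expected in cases
            for got in [detect(utterance)]
            if got != expected]
```

Run against the live interpreter each morning, an empty mismatch list means yesterday's training changes broke nothing.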

Agile, module-by-module deployment

Each module was built, tested, and deployed independently across 3 NHS trusts before the next was started. This contained risk and gave us early signal on what was working before we scaled.

PHASE 1 TO PHASE 2

After Phase 1 deployed, user feedback was clear. The chatbot worked, but it could work better. Two signals drove the Phase 2 iteration:

Phase 1

  • Button-driven static flows as default

  • Longer bot responses with full explanation

  • 84% peak confidence on 20 Q&A pairs

  • Conversations felt guided but rigid

Phase 2

  • Dynamic typing prioritised as users preferred it

  • Responses shortened to one action per message

  • 92% peak confidence with expanded Q&A set

  • Conversations felt natural and responsive

The 40% engagement uplift wasn't from adding features, it came from listening to what Phase 1 users actually did, and removing the friction they hit.

IMPACT

40%

surge in patient engagement across 3 NHS trusts

7 in 10

clinicians felt less burdened by patients using MAIA

92%

peak NLU confidence across the expanded Phase 2 Q&A set

REFLECTION

"Conversation design pushed me to think about UX differently. Without a screen to design, every decision lives in the words. That constraint sharpened my thinking about clarity in ways that have carried into every project since.

The 20% satisfaction rate was lower than expected. In hindsight, the survey captured general MAIA sentiment rather than chatbot-specific feedback. I would also involve clinical staff in tone reviews earlier. Their instinct for appropriate language in a patient context was sharper than mine."

Work with me!

Have a project in mind? I’d love to hear from you and explore how we can create something meaningful together.

Get In Touch

© 2026 Swetha Ravindra
