Rebuilding the Call Center app to keep nurses on the call.

Trading data-model rework for experience redesign to hit a 3-month timeline. Cognitive walkthroughs during real patient calls.

Sole Product Designer · 3 months · NDA
The redesigned Call Center dashboard — patient context, timeline, and call controls all in a single view

The redesigned Call Center dashboard. Patient context, timeline, and call controls all live in a single view — no more navigating between apps mid-call.

At 83 Bar, dedicated nurses and Call Center agents contact, educate, and recruit patients for clinical trial studies. They work the phones for full shifts, juggling lead profiles, scripts, screening questions, and a separate scheduling app — frequently on a single laptop screen.

The existing Call Center app had grown unmaintainable. New agents struggled with navigation. Technical glitches interrupted live patient calls. Turnover was high and complaints were a steady drumbeat. Leadership asked for a rebuild that would actually stick.

The Problem

Every second a nurse spends fighting the tool is a second she isn't talking to a patient.

Nurses are constantly annoyed when using the app because they can't easily find the necessary information.

From a cognitive walkthrough, early research phase

Agents were running three or four applications simultaneously during every call. Information was fragmented — agents stitched patient context together in their heads. Access was slow; the old timeline buried critical information. Data transfer between apps was clumsy enough that agents often asked patients to repeat themselves. And handling an incoming call meant navigating between two different apps before the agent could even greet the patient.

Strategic Considerations

Three decisions made before the design started.

01
Timeline non-negotiable — redesign the experience, not the data model
Three months from kickoff to MVP, tied to a recruitment cycle. The cleanest version of the fix was a data-model re-architecture. That was off the table. I made the case early that we'd redesign the experience and accept compromises in the underlying data plumbing rather than try to rebuild both. That decision is what made the timeline possible.
02
One card system, not two separate apps
Agents and Call Center Managers have different jobs but use overlapping tooling. The path of least resistance was two separate apps. The path of greatest leverage was one component system — cards — that could compose into either experience. I argued for the second because parallel apps double maintenance permanently. The card-based dashboard is the result of that decision; a sketch of the composition idea follows after these three decisions.
03
Research method — real calls, not staged tasks
I asked to run cognitive walkthroughs with agents during real patient calls. That request made leadership uneasy — what if a session disrupted an active recruitment? I made the case that the friction we needed to see was only visible under real cognitive load. The data from those five sessions shaped almost every subsequent decision and would not have surfaced in a usability lab.
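
To make the card-system bet concrete, here is a minimal TypeScript sketch of the composition idea. Every name in it (cardRegistry, dashboardLayout, the card ids) is invented for illustration, not taken from the shipped code, which is under NDA. The structure is the point: one shared registry of card components, and per-role dashboards that differ only in which cards they compose.

```typescript
type Role = "agent" | "manager";

interface CardProps {
  leadId: string;
}

interface CardDefinition {
  id: string;
  title: string;
  // Placeholder for a real view layer (e.g. a React component).
  render: (props: CardProps) => string;
}

// One shared registry of cards: the single surface to maintain.
const cardRegistry: Record<string, CardDefinition> = {
  callControls: {
    id: "callControls",
    title: "Call handling",
    render: ({ leadId }) => `CallControls(lead=${leadId})`,
  },
  timeline: {
    id: "timeline",
    title: "Touchpoints",
    render: ({ leadId }) => `Timeline(lead=${leadId})`,
  },
  teamOverview: {
    id: "teamOverview",
    title: "Team overview",
    render: () => "TeamOverview()",
  },
};

// Roles differ only in composition, never in the cards themselves.
const dashboardLayout: Record<Role, string[]> = {
  agent: ["callControls", "timeline"],
  manager: ["teamOverview", "timeline"],
};

function renderDashboard(role: Role, leadId: string): string[] {
  return dashboardLayout[role].map((id) => cardRegistry[id].render({ leadId }));
}
```

A card added to the registry once is available to either dashboard, which is the maintenance leverage the decision was betting on.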
Research

Four user attributes that anchored every decision afterward.

The cognitive walkthroughs gave me a clear picture of what agents actually needed:

  • Limited screen space — agents aren't going to get a second monitor.
  • Wildly varying technical proficiency — designs need to work for both ends of the range.
  • Visual simplicity wins — dense information panels lost in testing every time.
  • More information about each lead, not less — but presented selectively, surfaced on demand.

I built a journey map for the primary persona — Hanna — tracing the full arc of an agent's interaction with a single lead. That map became the reference document the team kept returning to in design reviews when scope debates got abstract.

Journey map for the agent persona Hanna tracing the full arc of an agent–patient interaction
Journey map for "Hanna," the primary agent persona. Each stage maps the agent's tasks, emotional state, and the friction points the redesign needed to address.
Competitive Analysis

What I borrowed and what I deliberately left behind.

I benchmarked against seven call-center products to map what good looks like and where novel patterns might fit. The goal wasn't feature parity — it was finding patterns that reduce cognitive load during live patient calls.

Logos of seven call-center products analyzed during competitive research: Five9, Talkdesk, Zendesk Talk, Dixa, Aircall, HubSpot, and Bitrix24
Competitive landscape — Five9, Talkdesk, Zendesk Talk, Dixa, Aircall, HubSpot, and Bitrix24. The patterns I pulled forward were unified call handling and information-on-open. The patterns I left behind were AI summary panels (premature for our agent context) and gamified leaderboards (wrong incentive for nurses recruiting trial patients).
Constraints

Wireframes before pixels.

Three constraints framed the design space. Three months from kickoff to MVP launch — non-negotiable, tied to a recruitment cycle. The existing data model couldn't be rewritten in scope. And the solution had to coexist with the Agent Scheduling App that managers depend on.

Hand-drawn wireframe sketches exploring agent screen layouts and card variants
Early wireframe explorations — agent screen layouts, card variants, and call-handling component arrangements drawn out before any pixels were pushed. The card-based dashboard that shipped came directly out of these sketches; the alternatives I rejected here were a sidebar-only nav (too much wasted real estate on small laptops) and a tab-based interaction model (broke the "everything visible during a call" principle).
Design Decisions

Five anchoring decisions that everything else flowed from.

  • A streamlined timeline that prioritizes key events first and demotes the rest into a progressive disclosure pattern — reducing what's on screen at any given moment was the single highest-impact change in usability testing.
  • Integrated data transfer — copying lead information between the app and adjacent tools became a first-class action, directly addressing the "asking patients to repeat themselves" problem.
  • Unified call handling — lead context now sits inside the call handling interface, not on a different screen the agent has to navigate to.
  • Information-on-open — the agent sees the most-used patient context the moment the app opens, not after navigating to a profile.
  • A card-based dashboard, tailored per role — agents and managers see different cards because their jobs are different, but the card system is the same component underneath.
Detailed view of the redesigned touchpoints timeline — interactions grouped by date, with status icons that summarize call outcomes so agents can scan a patient's history at a glance
The redesigned touchpoints timeline. Status icons summarize a call's outcome at a glance. Agents can scan a patient's entire history without expanding everything — the kind of information density that only works when the hierarchy is right.
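
A rough sketch of the grouping logic behind that hierarchy, in TypeScript against an invented Touchpoint shape (the real data model is under NDA): group interactions by day, keep key events visible by default, and collapse everything else behind a per-day count.

```typescript
// Illustrative only: the Touchpoint shape and its field names are invented.
interface Touchpoint {
  date: string;          // ISO day, e.g. "2024-06-03"
  kind: "call" | "sms" | "email" | "note";
  outcome: string;       // drives the status icon in the timeline
  isKeyEvent: boolean;   // e.g. screening completed, appointment booked
}

interface DayGroup {
  date: string;
  visible: Touchpoint[]; // shown by default
  collapsed: number;     // count hidden behind progressive disclosure
}

function groupTimeline(touchpoints: Touchpoint[]): DayGroup[] {
  // Bucket interactions by day.
  const byDate = new Map<string, Touchpoint[]>();
  for (const tp of touchpoints) {
    const bucket = byDate.get(tp.date) ?? [];
    bucket.push(tp);
    byDate.set(tp.date, bucket);
  }
  // Newest day first. Within a day, key events stay visible and the rest
  // collapse; a day with no key events shows a single item so it still
  // reads at a glance.
  return [...byDate.entries()]
    .sort(([a], [b]) => b.localeCompare(a))
    .map(([date, events]) => {
      const keyEvents = events.filter((e) => e.isKeyEvent);
      const visible = keyEvents.length > 0 ? keyEvents : events.slice(0, 1);
      return { date, visible, collapsed: events.length - visible.length };
    });
}
```

The sketch restates the decision in code: the win came from reducing what is on screen at any moment, not from adding information.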
Outcome

MVP launched in June and stayed in production.

The complaint pattern changed
The recurring frustrations that originally drove the rebuild — "I can't find anything," "I have to ask the patient to repeat themselves" — stopped being what agents flagged. New feedback shifted toward the seams between our app and adjacent tools. That's a healthier place to be.
Onboarding got less painful
New agents stopped getting stuck on the navigation patterns that had previously been the steepest part of their ramp. Training sessions spent less time on tool mechanics and more time on the actual work.
The card system paid off in maintenance
Because agents and managers share the same underlying card components with different content, subsequent feature work shipped to both audiences without parallel design and engineering tracks.
The redesign held up to a real workload
The MVP went live during an active recruitment cycle and didn't buckle — calls didn't drop because of the tool, and agents weren't escalating UI bugs in the middle of patient conversations.

I underestimated how much adjacent context — the scheduling app, external CRMs — shaped the agent's screen. The MVP solved problems inside our app beautifully. The next set of complaints turned out to be about the seams.

From the project retrospective
What I'd Do Differently

Map the agent's full screen, not just our slice of it.

If I were starting again, I'd map what the agent has open across all windows in week one — not just our app. The seams problem turned out to be the real work of the next phase, and I'd have arrived at it faster by observing more widely from the start.

Tools: Miro · Lucidchart · Figma