Unifying campaign and survey tooling for clinical trial recruitment.

Three disjointed tools, one strategist workflow. Scope discipline, build-vs-buy on Survey Builder, and the case for sequenced release.

Sole Product Designer · Cross-functional Remote Team · NDA

Survey Builder — question list on the left, contextual configuration on the right. Built around the way strategists actually compose surveys: iteratively, with frequent reuse and constant client back-and-forth.

83 Bar runs digital marketing campaigns to recruit patients into clinical trials. Strategists and analysts use a stack of internal tools to configure campaigns, build the surveys that qualify patients, and pass qualified leads to a downstream Call Center team. The tooling had grown up in pieces — Campaign Configuration, Campaign Builder, and Survey Builder each built at different times, with different layouts and conventions.

Collaboration features were missing across the board. Strategists were sharing screenshots in Slack and emailing clients PDFs to get approvals. My job was to reshape these three tools around how the work actually happened — collaboratively, iteratively, with constant client input.

Context

A tool ecosystem that spans strategists, agents, managers, and patients.

The full ecosystem these tools live inside is wider than the three I redesigned. I mapped the whole landscape early — strategists, call center managers, agents, clients, medical office managers, and patients, each with their own application — so that decisions inside Survey and Campaign Builder wouldn't quietly break workflows elsewhere.

The Problem

Three structural issues, all rooted in the same gap.

The tools were built for solo use in a workflow that was never solo. Disjointed UX across three tools forced strategists to relearn the same patterns in different forms. The lack of a shared platform layer meant campaign and survey context lived in separate worlds — a strategist building a survey couldn't see the campaign it would feed. And limited user control meant strategists routinely needed engineering intervention for changes that should have been self-serve.

Collaboration was happening anyway, just outside the product. Screenshotting, exporting, emailing — strategists had built workarounds because the tools hadn't caught up with the work.

Strategic Considerations

Three tensions that shaped every design choice that came after.

01
Scope discipline — sequenced release vs. unified launch
Three tools needing simultaneous redesign is a manager's problem before it's a designer's problem. The temptation was to design the unified vision in parallel and ship it as one event. I argued for a sequenced release instead: one tool at a time, with the unified vision as the destination but not the launch. I lost that argument initially; the team wanted to demo the unified product. I revisit this in What I'd Do Differently.
02
Ecosystem boundary — scope to the strategist surface
Decisions inside Campaign Builder propagate down into Call Center workflows and ultimately into patient experience. I made an explicit choice to scope this work to the strategist-facing surface and resist the gravitational pull of redesigning adjacent tools. Holding that line meant the Campaign Builder work could ship without dragging in the Call Center app — but it also meant accepting downstream inconsistency until later phases.
03
Build-vs-buy on Survey Builder
Several stakeholders asked whether we should drop our own Survey Builder and integrate Typeform or Google Forms. I pushed back: clinical trial qualification has compliance and conditional-logic needs that consumer survey tools don't handle, and the question/answer bank reuse pattern strategists needed wasn't supported externally. Owning the tool was the right call — but the burden of proof was on me, and I made that case with a competitive analysis before it became a design decision.

Research

What cognitive walkthroughs revealed about how strategists actually work.

I ran cognitive walkthroughs with six users — a mix of strategists and analysts — observing them complete representative tasks in the existing tools. Five themes emerged.

  • Collaboration was happening anyway, just outside the product — screenshots, exports, email threads.
  • Survey duplication was a major time sink. Most surveys shared 60–80% of their structure with a prior one, yet every survey was built from scratch.
  • Campaigns lacked visual scaffolding — users couldn't see event logic at a glance and frequently lost track of it.
  • Logic testing happened in production. Bugs surfaced through downstream Call Center complaints, not before launch.
  • The work was inherently iterative with clients, but the tools assumed a single author and a single approval moment.

Design Decisions

Three threads, each tackling one of the original problems.

Campaign Configuration became the single source of truth for client and campaign metadata, so the other two tools could read from it instead of asking users to re-enter context. This was the connective tissue the redesign depended on.
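
To make that concrete, here is a minimal TypeScript sketch of the idea: one configuration service owns client and campaign metadata, and the other tools read from it rather than asking users to re-enter it. Every name and the endpoint here are hypothetical illustrations, not the production schema.

```typescript
// Hypothetical shape of the shared platform layer: Campaign Configuration
// owns client and campaign metadata; the builders read it, never re-ask for it.
interface CampaignContext {
  campaignId: string;
  clientName: string;
  trialName: string;
  indication: string;        // the condition the trial is recruiting for
  recruitmentGoal: number;   // target number of qualified leads
}

// Both builders resolve context from one source instead of keeping copies.
// The endpoint path is an assumption for illustration only.
async function loadCampaignContext(campaignId: string): Promise<CampaignContext> {
  const res = await fetch(`/api/campaign-config/${campaignId}`);
  if (!res.ok) throw new Error(`No configuration found for ${campaignId}`);
  return res.json();
}
```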

Campaign Builder got collaboration baked in — sharing, commenting, publishing — and event logic surfaced next to the campaign flow rather than buried in modals. The biggest decision here was showing logic visually and inline rather than as a separate logic-editor tab. It cost more layout complexity but matched how users described thinking about campaigns.
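
One way to model that inline logic is a single typed structure that both the flow view and the logic editing walk, so the two can never drift apart. The sketch below assumes a discriminated union over the five node kinds the toolkit exposes; the field names are illustrative, not the actual data model.

```typescript
// Sketch of the campaign flow as typed nodes rather than logic buried in
// modals. Only the five node kinds come from the toolkit; fields are assumed.
type CampaignNode =
  | { kind: "goal";     id: string; label: string }
  | { kind: "decision"; id: string; condition: string; onTrue: string; onFalse: string }
  | { kind: "sequence"; id: string; steps: string[] }
  | { kind: "timer";    id: string; delayHours: number; next: string }
  | { kind: "process";  id: string; action: string; next: string };

// Rendering logic inline means the flow view reads the same structure the
// editor writes, so what strategists see is what the campaign will execute.
function describe(node: CampaignNode): string {
  switch (node.kind) {
    case "decision": return `If ${node.condition}: ${node.onTrue}, else ${node.onFalse}`;
    case "timer":    return `Wait ${node.delayHours}h, then ${node.next}`;
    default:         return node.kind;
  }
}
```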

Survey Builder got duplication, copying, and sharing as first-class features; templates for common question types and answer banks; cleaner labeling in survey configuration; and richer disqualifier and conditional logic. I deliberately resisted adding every feature the competitors had — survey power-users in this context wanted speed of reuse more than novel question types.
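
As a rough sketch of how disqualifiers, conditional logic, and reuse can coexist in one survey model (every name here is hypothetical, and remapping conditional references on duplication is omitted for brevity):

```typescript
// Hypothetical survey model: answers can disqualify, questions can be
// conditional, and answer sets are reusable by reference instead of retyped.
interface AnswerOption {
  text: string;
  disqualifies: boolean;            // e.g. an age answer that ends qualification
}

interface SurveyQuestion {
  id: string;
  prompt: string;
  answerBankId?: string;            // pull answers from a shared bank
  answers: AnswerOption[];
  showIf?: { questionId: string; equals: string };  // simple conditional logic
}

// Duplication as a first-class action: copy a prior survey's questions with
// fresh ids, keeping the 60-80% of structure that typically carries over.
function duplicateQuestions(source: SurveyQuestion[]): SurveyQuestion[] {
  return source.map((q, i) => ({ ...q, id: `q-${Date.now()}-${i}` }));
}
```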

Campaign Builder — the toolkit (Goals, Decisions, Sequences, Timers, Processes) lives in the left panel; the visual campaign flow on the right. Logic, sequencing, and configuration all in one view rather than scattered across modals.

The unified vision is the right north star — but it made every individual review feel speculative, because reviewers couldn't yet experience the connection.

From the project retrospective

Outcome

Early signals, updated as each tool ships and stabilizes.

Strategists saw their own workflow in the design
In review sessions, the moments that landed hardest were the ones where users recognized something they'd been doing manually outside the product — sharing screenshots, tracking versions in filenames — finally living inside the product. That recognition was the strongest validation that the consolidation thesis was correct.
Engineering load shifted
Several tasks that previously required engineering intervention — duplicating a survey, tweaking conditional logic, pushing a campaign live — moved into self-serve user actions, freeing engineering bandwidth for harder problems.
The unified vision held under scrutiny
Stakeholders who initially questioned the scope signed off after seeing how the shared platform layer collapsed redundant work across three tools. Adoption and time-to-launch comparisons will be added as the staged rollout produces them.

What I'd Do Differently

Ship the smallest, most-painful tool standalone first.

I should have pushed harder for a sequenced release — Survey Builder first (highest pain, smallest surface), then Campaign Builder, then Configuration as the unifier — rather than designing the unified vision in parallel. If I were running this again, I'd let the wins compound rather than asking stakeholders to believe in a connection they couldn't yet experience.

Tools: Figma · Miro · FigJam · Lucidchart · Maze