
PORTFOLIO

Modelling Drop-Off and Support Burden in Identity Verification Flows

MUFG Corporate Markets

Context

Investor Centre supports onboarding and account management for over 3 million global users within a regulated financial environment.


Despite stable acquisition volumes, early-stage activation rates were inconsistent, and support contact volumes remained high. Incremental UI changes had been made over time, but no structured framework existed to quantify friction across verification and early-account journeys.


The absence of a shared measurement system meant prioritisation was reactive rather than evidence-led.


Core question:

Which specific behavioural patterns during onboarding and verification most strongly predict abandonment, repeat attempts, or downstream support contact?


Hypothesis:

We suspected that friction was not evenly distributed across the journey. Rather than a general usability issue, we hypothesised that specific interaction patterns (e.g., repeated document uploads, time delays between steps, cross-device switching) were disproportionately associated with abandonment and support escalation.


Approach

To test the hypothesis, I:

  • Partnered with engineering to define a structured event taxonomy across onboarding and verification flows

  • Built cohort definitions based on behavioural patterns (completion time, repeat attempts, device switching, interruption gaps)

  • Applied regression and cohort analysis (a minimal sketch follows this list) to model which patterns were most predictive of:

    • Successful activation

    • Early-stage churn

    • Support contact within 7 days

  • Mapped customer-to-agent handoffs to quantify operational burden linked to specific behavioural journeys

  • Synthesised behavioural data with qualitative insight from support logs to validate root causes
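
As an illustration of the modelling step, the sketch below fits a simple per-outcome logistic regression over journey-level behavioural features. The file name, feature columns, and outcome flags are hypothetical placeholders, not the production schema.

```python
# Minimal sketch of the predictive-modelling step; all names are
# hypothetical placeholders, not the production schema.
import pandas as pd
import statsmodels.api as sm

journeys = pd.read_csv("onboarding_journeys.csv")  # one row per user journey

features = journeys[["repeat_attempts", "completion_minutes",
                     "device_switches", "interruption_gap_minutes"]]
X = sm.add_constant(features)

# Fit one logistic model per outcome and rank coefficients by magnitude.
for outcome in ["activated", "churned_early", "support_contact_7d"]:
    model = sm.Logit(journeys[outcome], X).fit(disp=False)
    print(f"--- {outcome} ---")
    print(model.params.sort_values(key=abs, ascending=False))
```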

 

Key findings

Analysis revealed that:

  • A small subset of repeat-attempt users accounted for a disproportionate share of support volume

  • Time-to-complete beyond a certain threshold was a stronger predictor of churn than the total number of steps (illustrated in the sketch below)

  • Certain verification retries correlated with confusion over domain terminology rather than with missing information

Importantly, friction was concentrated, not systemic.
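
To illustrate how the threshold finding was evaluated, a quick comparison of candidate churn predictors might look like the sketch below; the column names are hypothetical placeholders.

```python
# Hypothetical sketch: comparing candidate churn predictors.
import pandas as pd

journeys = pd.read_csv("onboarding_journeys.csv")

# Point-biserial correlation of each candidate predictor with early churn.
for col in ["completion_minutes", "total_steps"]:
    r = journeys[col].corr(journeys["churned_early"])
    print(f"{col}: r = {r:.3f}")

# Churn rate either side of an illustrative completion-time threshold
# (here, the 75th percentile).
over_threshold = (journeys["completion_minutes"]
                  > journeys["completion_minutes"].quantile(0.75))
print(journeys.groupby(over_threshold)["churned_early"].mean())
```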


Decision & Implementation

Rather than broad UX simplification, I:

  • Prioritised interventions targeting high-risk behavioural cohorts

  • Introduced measurable performance thresholds for completion time and retry behaviour

  • Embedded activation and effort metrics as guardrails in roadmap prioritisation

  • Established quarterly evidence reviews to reassess friction patterns over time


Impact
  • 15% improvement in successful activation rates

  • 22% reduction in related support contacts

  • Shift from reactive issue-fixing to structured, evidence-based prioritisation across onboarding

Reframing Subscription Packaging Through Behavioural Segmentation

Spotlight Sports Group

Context

After years of plateaued subscription conversion, the prevailing assumption was that additional content would increase perceived value. Behavioural data suggested otherwise.


Core question:

Were existing tiers aligned with distinct behavioural needs, or were we bundling fundamentally different user motivations into the same package?


Approach
  • Conducted behavioural clustering on event-level interaction data to identify distinct usage patterns (a sketch follows this list)

  • Modelled conversion likelihood across cohorts using regression analysis to isolate predictors of upgrade

  • Designed and deployed structured perception surveys to measure feature valuation across segments

  • Triangulated behavioural and attitudinal signals to test tier misalignment hypotheses
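
A minimal sketch of the clustering step, assuming aggregated per-user usage features and an illustrative choice of four clusters; the feature names are placeholders, not the actual event schema.

```python
# Hypothetical sketch: behavioural clustering on per-user usage features.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

usage = pd.read_csv("user_usage_features.csv")
feature_cols = ["sessions_per_week", "articles_read",
                "tools_used", "avg_session_minutes"]

# Standardise features so no single scale dominates the distance metric.
X = StandardScaler().fit_transform(usage[feature_cols])
usage["cluster"] = KMeans(n_clusters=4, random_state=42,
                          n_init=10).fit_predict(X)

# Profile each cluster to interpret the behavioural segments.
print(usage.groupby("cluster")[feature_cols].mean())
```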

 

Outcome & Impact

The analysis revealed that high-engagement and casual users were responding to fundamentally different value drivers, yet were treated as a single segment within the tiered offering.

 

The insight informed a restructured subscription architecture, separating feature bundles by behavioural need rather than content volume.

 

The new packaging reversed multi-year conversion stagnation and produced a sustained uplift in subscription growth.

Designing Research Infrastructure to Scale Quantitative Decision-Making

Spotlight Sports Group

Context

When I joined Spotlight Sports Group, research output varied significantly across squads. Metrics were inconsistently defined, experimentation practices were uneven, and behavioural analysis was often conducted reactively rather than systematically.

As the product portfolio expanded, the absence of shared analytical standards risked fragmented decision-making.


Core question:

How can we create a repeatable research and measurement framework that ensures consistent analytical rigour across teams?


Approach
  • Standardised definitions of conversion, engagement, and retention metrics to ensure comparability across initiatives (a minimal registry sketch follows this list)

  • Established a structured research workflow integrating behavioural analysis, survey measurement, and qualitative validation before major roadmap commitments

  • Introduced experimentation guardrails clarifying when A/B testing, cohort analysis, or exploratory research was appropriate

  • Partnered with data and product leadership to embed these standards into quarterly planning and review cycles
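
One way such standards can be made concrete is a shared metric registry that every squad imports; the sketch below is illustrative, and the definitions shown are placeholders rather than the actual standards.

```python
# Hypothetical sketch: a shared registry of metric definitions so every
# squad computes conversion, engagement, and retention identically.
from dataclasses import dataclass

@dataclass(frozen=True)
class MetricDefinition:
    name: str
    numerator: str    # event or condition being counted
    denominator: str  # eligible population
    window_days: int  # measurement window

METRICS = {
    "conversion": MetricDefinition("conversion", "first_subscription",
                                   "registered_users", 30),
    "engagement": MetricDefinition("engagement", "active_days",
                                   "subscribed_users", 7),
    "retention": MetricDefinition("retention", "active_in_window",
                                  "signup_cohort", 28),
}

for metric in METRICS.values():
    print(f"{metric.name}: {metric.numerator} / {metric.denominator} "
          f"over {metric.window_days} days")
```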

 

Impact
  • Reduced inconsistencies in metric interpretation across squads

  • Improved comparability of experiment outcomes

  • Increased leadership confidence in research-backed decisions

  • Shifted research from reactive validation to proactive problem framing
