Meta Quest VR
usability
benchmark.

UX research for a B2B wearable technology product at a Fortune 500 tech company — improving task success through behavioral insights.

Client
Fortune 500 Tech
My Role
UX Researcher
Timeline
9 Months
Product
Immersive Wearable
Meta Quest headset
Image to drop in
blink-hero.avif

Impressive hardware.
Impossible to use.

Users were hitting walls during core interactions. Hand controls felt unintuitive, gesture-based inputs weren't landing, and navigating within an immersive environment tripped people up in ways that weren't obvious from the outside.

The Fortune 500 client needed to understand exactly where and why users were struggling — so they could make confident product decisions backed by real behavioral evidence rather than assumptions.

That was the tension at the center of the project: hardware that could be impressive and still feel impossible to use.

The Approach

A structured,
actionable rubric.

My job on this study was to translate a vague "usability problem" into something the product and design teams could act on directly. That meant designing a repeatable benchmarking system, not a one-off report.

01
Behavioral Benchmarking

Ran 120+ moderated sessions to capture real-time behavioral data — task attempts, failure points, recovery strategies — rather than relying on self-reported experience alone.

02
Quantified Frustration

Paired each task with a post-task frustration rating and a relationship NPS score, turning qualitative moments into something comparable across cohorts and releases.

03
Cohort Segmentation

Segmented results by age, familiarity with immersive tech, and physical dexterity — surfacing patterns that aggregate averages were hiding from stakeholders.

04
Evidence Over Opinion

Built the rubric so every recommendation the team made afterward could be tied back to a specific, observed behavior — no more "gut call" product decisions in roadmap reviews.
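The segmentation step above is the one that's easiest to show rather than tell. A minimal sketch of the idea, using invented, illustrative numbers (not the study's actual data): the same session records produce one misleading aggregate success rate and a very different per-cohort picture.

```python
from collections import defaultdict

# Hypothetical session records: (cohort, task_succeeded, frustration 1-5).
# Cohort names and values are illustrative, not the study's real data.
sessions = [
    ("first-timer", False, 5), ("first-timer", False, 4),
    ("first-timer", True, 3),  ("first-timer", False, 5),
    ("power-user", True, 1),   ("power-user", True, 2),
    ("power-user", True, 1),   ("power-user", False, 3),
]

def success_rate(records):
    """Fraction of records where the task succeeded."""
    return sum(1 for _, ok, _ in records if ok) / len(records)

# Aggregate view: one number that hides the split.
overall = success_rate(sessions)  # 0.5

# Segmented view: the same data, broken out per cohort.
by_cohort = defaultdict(list)
for record in sessions:
    by_cohort[record[0]].append(record)

segmented = {cohort: success_rate(recs) for cohort, recs in by_cohort.items()}
# first-timer: 0.25, power-user: 0.75

print(f"overall: {overall:.0%}")
for cohort, rate in sorted(segmented.items()):
    print(f"{cohort}: {rate:.0%}")
```

A flat 50% success rate looks like a uniform, moderate problem; the split shows first-timers failing three times out of four while power users mostly succeed. That gap is the kind of pattern the rubric was built to surface.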

Qualtrics NPS survey builder used for the benchmarking study
Image to drop in
blink-qualtrics.jpg
Survey instrument
A custom Qualtrics flow captured relationship NPS, per-task frustration, and demographic segmentation data across every session — the scaffolding that made 120+ sessions comparable.

Watching real hands
meet new hardware.

Sessions took place across a range of user profiles — first-timers, power users, accessibility-sensitive participants. Every session ran the same task script so behavioral data stayed comparable, while moderator notes captured the ad-hoc moments where friction actually lived.

Participants wearing Meta Quest headsets during a moderated session
Image to drop in
blink-users.jpg
Participants in session
Moderated usability testing with real users across age and familiarity cohorts.
Meta Quest hand controllers used during gesture-based tasks
Image to drop in
blink-controllers.avif
The controllers
The hand-control mapping that became a central thread of the study — and the part of the hardware most users wrestled with first.

The Pattern

The product was asking users to
move in ways their bodies didn't expect.

A few patterns kept showing up across sessions, pointing to the same underlying issue.

Missing Feedback Loop

Users failed tasks most often because of unintuitive hand-control interactions — not because they didn't understand the concept, but because they couldn't tell whether what they were doing was working.

Uneven Learning Curve

Older adults struggled significantly more than younger users to adapt to gesture-based controls. The learning curve wasn't just steep — it was steeper for certain groups in ways the product hadn't accounted for.

Small Fixes, Real Impact

Small changes to control mapping and visual cues produced noticeable improvements in task success — a hopeful signal that the fixes didn't have to be massive to matter.

The Through Line

Physical interaction design in immersive products is load-bearing. Get it wrong and nothing else about the experience can compensate for it.

Meta Quest VR headset — the hardware at the center of the study
Image to drop in
blink-hero.avif
The headset in hand
Meta Quest — the hardware our participants strapped on for every session. The whole study centered on the moments between pulling this on and the first successful selection.

Outcome & Impact

From guesswork to
grounded decisions.

The benchmarking study gave the product team a shared language for what "hard to use" actually meant — and a baseline they could measure future releases against.

120+
Moderated research
sessions run
9 mo
End-to-end study
timeline
B2B × Enterprise
Fortune 500 stakeholder
rollout
Physical interaction design in immersive products is load-bearing. Get it wrong and nothing else about the experience can compensate.
Reflection from the study

Working at scale.

This was my first time working at this scale — 120+ sessions, enterprise stakeholders, a product category most users had never touched before. It stretched my research skills in ways a smaller study wouldn't have.

I came out of it with a much sharper instinct for how to observe without interfering, how to spot a pattern early enough to make it useful, and how to translate messy behavioral data into something a product team can actually build from.

  • Designing rubrics that make research re-runnable, not just readable
  • Segmenting early — averages hide the insight
  • Sitting with ambiguity long enough to see the real pattern
  • Translating behavioral data into product language stakeholders act on
The cross-functional research team behind the study
Image to drop in
blink-team.jpg
The team
A cross-functional group of researchers, designers, and product stakeholders — the people who turned 120+ sessions of raw observation into shared language the whole organization could use.

Got a product your users are
quietly struggling with?

Behavioral research turns vague friction into a roadmap.

Book a Discovery Call