As a UX Researcher and Design Consultant at J.D. Power, I analyzed large-scale automotive survey data for brands like Ford, BMW, and Volkswagen. I turned raw customer feedback into actionable design recommendations. For Ford, this contributed to a +28% increase in customer satisfaction.
Technology is reshaping how we experience cars. Full self-driving, AI, touchscreens controlling everything from HVAC to navigation — every interaction is digital. But automakers don’t always know what customers actually think about these experiences. That’s where J.D. Power comes in.
Touchscreens replace physical controls. Every tap, swipe, and voice command is a UX decision.
J.D. Power collects feedback from millions of vehicle owners across every brand and model.
Automakers have the data but not the insight. Someone needs to translate numbers into design direction.
My role was that bridge. I took large-scale survey data, identified where vehicles fell short, evaluated prototypes hands-on, and delivered visual evidence that automakers could act on.
I followed a structured research and consulting process — from raw data to delivered recommendations.
I started by identifying where a vehicle excels or falls short against industry benchmarks. Using J.D. Power’s proprietary survey data and probability analysis tools, I could pinpoint exactly which categories — infotainment, driving assistance, interior comfort — scored below the competitive set.
Crosstab analysis: green = above benchmark, red = below. Each row is a UX category.
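The logic behind that crosstab can be sketched in a few lines. This is an illustrative example only — the category names and scores are invented, not actual J.D. Power data — but it shows the core move: compute each category's gap to the segment benchmark and surface the negative gaps first.

```python
# Hypothetical sketch of the benchmark comparison; all names and scores are invented.
vehicle = {"Infotainment": 71.5, "Driving Assistance": 84.2, "Interior Comfort": 79.8}
benchmark = {"Infotainment": 78.0, "Driving Assistance": 81.5, "Interior Comfort": 80.1}

# Gap to benchmark: negative gaps are the "red" cells that warrant investigation
gaps = {cat: round(vehicle[cat] - benchmark[cat], 1) for cat in vehicle}
below = sorted((gap, cat) for cat, gap in gaps.items() if gap < 0)
# → [(-6.5, 'Infotainment'), (-0.3, 'Interior Comfort')]
```

The sort order matters: leading with the largest shortfall is what turns a wall of scores into a prioritized problem list.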
Numbers tell you where the problem is. Verbatims tell you why. I analyzed thousands of customer comments to find patterns — the same frustrations appearing across different owners, different models, different years.
41 customer verbatims categorized into 6 themes — each theme mapped to a specific design recommendation
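A first pass at that categorization can be approximated with simple keyword matching before a human review refines the themes. The themes, keywords, and sample comment below are hypothetical, not the actual taxonomy used:

```python
# Hypothetical sketch of verbatim theming; themes and keywords are invented.
THEMES = {
    "voice_recognition": ["doesn't understand", "voice command", "misheard"],
    "screen_lag": ["slow", "lag", "freezes"],
}

def tag_verbatim(comment: str) -> list[str]:
    """Return every theme whose keywords appear in the comment."""
    text = comment.lower()
    return [theme for theme, keywords in THEMES.items()
            if any(kw in text for kw in keywords)]

tag_verbatim("The navigation doesn't understand me and the screen freezes")
# → ['voice_recognition', 'screen_lag']
```

Automated tagging only gets you to a draft; the value is in reading the verbatims within each bucket and mapping the theme to a concrete design recommendation.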
With data and verbatims in hand, I evaluated pre-production vehicles at testing facilities. I focused on the specific problem areas the data identified — testing the actual experience against what customers reported.
Hands-on evaluation at a testing facility — every annotation maps to a data-backed issue
I combined quantitative data, verbatim themes, and evaluation findings into visual reports. Competitive benchmarks, diagnostic breakdowns, and trend analysis — all designed to make the case undeniable for the client.
Competitive analysis: Range vs. Time, IQS diagnostics, and performance benchmarks across EV models
The final step was presenting findings to client teams — often directly to engineering and design leads. I delivered data-backed recommendations, demonstrated issues on prototypes, and provided wireframes for proposed improvements.
HMI wireframes: instrument cluster and infotainment redesign proposals
The deliverable: category-level diagnostic with matched verbatim quotes and competitive positioning
The insights I delivered to Ford were implemented in their next model year. The results:
In a pay-for-research model, these results validate the entire consulting engagement. The recommendations weren’t theoretical — they were specific enough to implement and measurable enough to track.
I was part of J.D. Power’s Auto Advisory team, working across multiple OEM engagements.
A bar chart showing -3.2% in infotainment satisfaction is easy to dismiss. But when you read 50 customers saying “the navigation doesn’t understand me” — that changes a room. I learned to always lead with the human voice, then back it up with the data.
Automakers don’t need more data — they need direction. The most impactful deliverables weren’t the ones with the most charts. They were the ones that said “here is the problem, here is why it matters, and here is what to change.” I stopped presenting findings and started presenting decisions.