DriverIQ Research & Redesign
The Challenge
Repair agent interest in the program
Our customer success team brought forward two high-level client concerns that were delaying the client from scaling their program built on our telematics software:
A lack of agents offering and enrolling users into the program
Enrolled users placing a high number of calls to the client’s service center due to confusion
With that information alone, I set out to understand the program's current-state process and design, as well as the jobs-to-be-done of the agents, the client, and end users, so that I could create and test hypotheses and give the client low-effort, high-impact recommendations.
PROJECT DETAILS
Role: Sole UX Strategist
Timeline: 2 Weeks
UX Budget: $300 for incentives
Tools:
UserInterviews.com
UserTesting.com
Figma
METHODS
Problem Identification
Research Planning & Execution
Client Management
Usability Testing
Prototyping
DELIVERABLES
Interview Syntheses
Prototypes
UX Recommendations
UI Design
OUTCOMES
Reduced call-center complaints
15,000 MAU increase
19% reduction in churn from program
Initial State & Discovery
Chaotic UI created confusion amongst users & agents
Understanding Users' Job-to-be-done: While I was not permitted to interview the client's end users at this time, I reviewed app store reviews on my own and requested survey results from the client. Both pointed to user confusion around their performance, as well as a disconnect between the score shown in the app and the discount they did or did not receive. It was clear that the user's ultimate job-to-be-done was to understand how much of a discount to expect on their policy premium based on their driving performance, and how to improve their odds of receiving a bigger discount.
Understanding Agents' Job-to-be-done: While I was not permitted to interview the client's agents at this time either, I requested whatever agent feedback the client had on hand. It highlighted the agents' own confusion with the program UI, which kept them from being able to assist their customers. Agents also noted that users were seeing high 'Two Week Scores' in the UI but then not receiving a discount on their policy.
Understanding Business Capabilities: In addition, I interviewed several subject matter experts within my own company to make sure I understood how the current-state design was populated, what additional data was available, and what the business goals were, along with our confidence in the data sources and business strategies. This immediately surfaced disconnects in the UI, most notably that the score shown was not related to the score that actually generated discounts, which was a direct source of user confusion.
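To make that disconnect concrete, here is a minimal, purely hypothetical sketch in TypeScript. The client's real scoring and discount models were not shared with me, so the factor names, weights, and thresholds below are assumptions; the sketch only illustrates the structural problem that the dashboard's 'Two Week Score' and the discount-eligible calculation were driven by different inputs, so a high displayed score did not guarantee a discount.

```typescript
// Hypothetical illustration only; not the client's actual scoring or discount logic.
interface DrivingSummary {
  milesDriven: number;
  hardBrakingEvents: number;
  phoneDistractionMinutes: number;
}

// What the dashboard displayed: a rolling score based on the last two weeks of behavior.
function twoWeekScore(recent: DrivingSummary): number {
  const penalty = recent.hardBrakingEvents * 2 + recent.phoneDistractionMinutes * 0.5;
  return Math.max(0, Math.round(100 - penalty));
}

// What actually determined the discount: a separate policy-period calculation
// with its own inputs and thresholds, unrelated to the displayed score.
function projectedDiscountPercent(policyPeriod: DrivingSummary[]): number {
  const totalMiles = policyPeriod.reduce((sum, d) => sum + d.milesDriven, 0);
  const totalBraking = policyPeriod.reduce((sum, d) => sum + d.hardBrakingEvents, 0);
  const brakingPer100Miles = totalMiles > 0 ? (totalBraking / totalMiles) * 100 : 0;
  if (brakingPer100Miles < 1) return 20;
  if (brakingPer100Miles < 3) return 10;
  return 0; // a user with a recent "Two Week Score" of 95 can still end up here
}
```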
Created Hypotheses: Upon inspecting the client-designed UI, I drafted hypotheses about the areas most likely to cause confusion for agents and users, based on design principles and heuristics. A clear hypothesis for testing was that the number of metrics and data points shown did not add up to a simple, easy-to-understand status of the user's program performance.
Key Working Hypotheses:
Users want to quickly and easily understand whether their driving performance will result in a discount on their policy, and if so, how much the discount will be
If at risk of not receiving a discount, users want to know how to improve their driving to earn a discount
It is unclear to users how much of a discount they will receive based on the “Two Week Score” shown
Users who see a high numeric “Two Week Score” will expect a discount
Understanding additional aspects of their performance, such as miles driven or month-over-month change, is not a primary job-to-be-done and is not helpful on the dashboard card
Test Preparation
Ensuring the right participants, scenarios, & tasks was crucial
Core Hypothesis: If users are able to intuitively and confidently understand their projected discount (their main job-to-be-done), we will see less user confusion and fewer complaint calls, along with increased agent confidence in offering the program
Approach:
Provide key tasks users will need to complete while participating in the program and monitor behavior between two groups:
Control group: Given a minimally clickable prototype of the client's current-state UI design
Test group: Given a minimally clickable prototype I designed to address the working hypotheses from initial discovery research
Participants:
As the client was unable to provide a list of their users for interviews, I turned to non-client users, taking time and care to screen for participants who matched the characteristics of likely program participants based on previous user persona research: for example, being the primary decision maker on their insurance policy, and being open to participating in a driving program without having experienced one yet, so as not to bias their results.
After completing two rounds of testing, I asked to run the test with a set of client agents, as they are a key party in both the user journey and the business challenge.
Goal
Confirm whether users are able to complete the most important tasks around their score:
How well they’re doing in earning a discount
What their projected discount might be
Why their projected discount is what it is
Observe
How long it takes users to complete tasks
What users look at when completing tasks
What users are thinking / feeling as they complete tasks
What gets in users' way when completing tasks
Scenario
You've signed up for a safe driving program through your auto insurer. They advise you that you have received a 10% discount for your current policy period just for signing up. You download your insurer's app called DriverGo and you give the app permission to see how you drive.
The app senses and tracks the following factors around your driving: Miles driven, Hard braking, Speed, Road type, Phone distraction
You drive as you normally would over the next week. You decide to open the app to review your driving summary.
Control Group Screens: Current Design
Test Group Screens: Test Design
Learnings
Current design misses the mark on user clarity, focus, & transparency
Control Group Learnings - TLDR:
Participants spend too much time/energy trying to piece together info, or searching for missing info, leading to confusion
Participants are unclear about what goes into the discount, why, and how the driving factors are weighted
Definitions of driving factors are not clear or accessible
Test Group Learnings - TLDR:
Less is more when it comes to participants comprehending and consuming information: removing mental load and clarifying important information in user-friendly phrasing helps. We need to focus users on the highest-priority user needs rather than distracting, secondary information
Participants want to understand more about how driving factors feed into their projected discount
Agent Group Learnings - TLDR:
New design direction is straightforward and intuitive, but would be more beneficial with additional detail around scoring calculations and factors
Agents/Employees will benefit from additional education and support related to DriverIQ
Solution & Impact
Focus on savings & provide simple statuses
Final Solution:
Taking the learnings and user feedback into consideration, I iterated on the prototype design to address the business concerns that initiated the work and added further improvements to meet user jobs-to-be-done.
Business Impact:
Future Considerations:
How important is it for users to understand their trends over time? How might we connect the dashboard discount factors with the ‘Trends’ page?
Do users want to understand how they compare to other drivers? Particularly if they drive in different places / times / etc.? Is it fair to compare?
Should features be moved off the dashboard to focus on key user needs? Consider removing the 'Leaderboards' comparison buckets.
How might we align trip detail cards with the dashboard discount approach?