Work / Capital One / 'Understanding user types and their priorities'

In this case study, I reflect on a user research project for Capital One, conducted to help communicate a new mobile product to its cardholders.

Capital One's mobile app (Capital One Wallet) packed 13 distinct features into one app to help cardholders manage their card usage. To best showcase and market the product, my team was tasked with understanding its user types so that product messaging could be targeted effectively.

What needs to be done?

Here's how I broke down the tasks.

Define objective

Determine approach

Design research

Collect data

Analyze and synthesize

Defining the objective

Identify cardholder types and their respective, perceived priorities to effectively communicate the value propositions of a new Capital One mobile product.

Determining the approach

We started with several rounds of stakeholder interviews and product walk-throughs with the Capital One team for a clear understanding of our challenge. We then pinned down a mixed methods research approach based on the key questions we needed to address, as well as assumptions that needed to be challenged.

Key questions to address

  • What are the demographic and behavioral traits of Capital One cardholders? (primary research, quantitative)

  • How receptive are cardholders to each of the 13 features found in Capital One Wallet? (attitudinal; primary research, quantitative, qualitative)

  • Are certain types of features perceived by users to be more valuable than others? (primary research, secondary research, organizational frameworking)

Assumptions to challenge

  • The 13 features can be further conceptually organized into categorical feature sets. (organizational frameworking, affinity diagramming)

  • Demographic and behavioral attributes can serve as the primary driving attributes for user types. (multiple regression analysis, 2x2 matrix)

  • Different user types will have different feature set priorities. (multiple regression analysis, affinity diagramming, persona development)

Research design

A 20-minute survey was designed and launched on the Qualtrics platform to capture both quantitative and qualitative data on the demographic, behavioral, and attitudinal traits of our survey panel. Two rounds of stakeholder feedback were incorporated, and a pre-test was conducted with 20 respondents to verify the survey design and to confirm that respondents interpreted the questionnaire language as intended.

Data collection with confidence

To ensure confidence in our results, we secured a sample size of 600 for 95% confidence with a ±4% margin of error, against a total population of 20M (est. Capital One cardholders). The panel was screened with the following conditions:

  • All respondents are cardholders between the ages of 18 and 65

  • 300 respondents with at least 1 Capital One credit card

  • 300 respondents with at least 1 credit card (non-Capital One, control)
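The sample size above can be checked with the standard formula for estimating a proportion. A minimal sketch, assuming the usual defaults (z = 1.96 for 95% confidence, p = 0.5 for maximum variance, plus a finite-population correction) rather than anything stated in the study:

```python
import math

def sample_size(population, z=1.96, margin=0.04, p=0.5):
    """Minimum sample size for estimating a proportion,
    with a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2       # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / population))  # correct for finite population

print(sample_size(20_000_000))  # → 601
```

Against a 20M population the correction is negligible, which is why the formula lands essentially on the 600 respondents the study secured.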

The analysis

We first set the groundwork with thematic organization.

Before finalizing the survey design, we conducted an affinity diagramming workshop to organize the 13 features into a higher level of order, or 'feature sets'. Three categories emerged: organization, convenience, and peace of mind. This exercise proved critical in shaping the exploration and the questions developed for our data collection.

Applying frameworks helped complete the picture.

We then set out to organize how users might perceive the value of these 3 feature sets - i.e., are any of these feature sets perceived to be inherently more valuable than the others? My team had a hunch, but wanted to support this exploration with some secondary research before finally validating or invalidating it through our data.

Two key concepts published by Nielsen Norman Group provided a helpful framework for how we could think about this question.

  • Expected utility: a function of perceived value vs. perceived cost

  • Estimated value: as defined by information-foraging theory

Applying the first concept, we labeled the 13 features based on the estimated perceived cost to the user (time and effort) of using each one.

I then assigned a value to each perceived-cost label (1 = high cost, 0 = medium cost, -1 = low cost) and found the average for each feature set category (i.e., the lower the score, the less effort or input required of the user).

  • 'Organization': +0.8

  • 'Convenience': 0

  • 'Peace of mind': -0.5

Finally, the scored feature sets were plotted on a binary spectrum to visualize the mental model of assumptions - this model would be confirmed or rejected through the analysis of the data collected in our study.
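The scoring step is simple enough to sketch. The per-feature labels and the 5/4/4 split below are illustrative assumptions chosen to reproduce the reported averages; the study's actual feature-to-label assignments are not public:

```python
# Hypothetical cost labels (1 = high, 0 = medium, -1 = low) per feature set;
# the split of the 13 features and the individual labels are illustrative only.
cost_labels = {
    "Organization":  [1, 1, 1, 1, 0],
    "Convenience":   [1, 0, 0, -1],
    "Peace of mind": [-1, -1, 0, 0],
}

# Average the labels within each feature set
scores = {name: sum(vals) / len(vals) for name, vals in cost_labels.items()}
print(scores)  # → {'Organization': 0.8, 'Convenience': 0.0, 'Peace of mind': -0.5}
```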

Priority drivers determined through regression analysis

With our data, we conducted a multiple-regression analysis to determine which attributes were most influential in a user's level of interest for each of the 13 features.

We found that two attributes, age (demographic) and frequency of credit card usage (behavioral), were the most significant predictors of how a user rated the 13 features. In fact, age was the most influential attribute for 7 features, while frequency of card usage was the most influential for the remaining 6.
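The per-feature regression can be sketched as follows. This uses synthetic data and only three standardized predictors; the real study ran on survey responses with its own attribute set, so everything here beyond the general technique is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 600  # matches the study's sample size

# Synthetic respondent attributes, standardized so coefficients are comparable
age = rng.normal(size=n)
usage_freq = rng.normal(size=n)
income = rng.normal(size=n)

# Synthetic interest rating for one feature, driven mostly by age
interest = 0.8 * age + 0.2 * usage_freq + 0.1 * income + rng.normal(scale=0.5, size=n)

# Multiple regression via least squares; with standardized predictors,
# the largest |coefficient| indicates the most influential attribute
X = np.column_stack([age, usage_freq, income])
coefs, *_ = np.linalg.lstsq(X, interest, rcond=None)

names = ["age", "usage_freq", "income"]
most_influential = names[int(np.argmax(np.abs(coefs)))]
print(most_influential)  # → age
```

Repeating this fit once per feature, and recording which attribute wins each time, yields the 7-vs-6 tally described above.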

Segmentation based on key drivers of feature priorities

These two attributes provided a framework for developing user segments along a 2x2 matrix: age crossed with frequency of card usage.
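A minimal sketch of that 2x2 segmentation, assuming a median-split rule; the cut points and segment names here are hypothetical, not from the study:

```python
# Assign a respondent to one of four segments by splitting on the two
# driver attributes; the thresholds below are illustrative assumptions.
def segment(age, usage_freq, age_cut=40, freq_cut=10):
    older = age >= age_cut
    frequent = usage_freq >= freq_cut  # e.g. card uses per week
    if older and frequent:
        return "older / frequent"
    if older:
        return "older / infrequent"
    if frequent:
        return "younger / frequent"
    return "younger / infrequent"

print(segment(age=28, usage_freq=15))  # → younger / frequent
```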

A clear pattern emerged from this exercise.

Results

High-level findings

Peace of mind

#1 prioritized feature set across all user types.

Convenience

Prioritized higher by infrequent card users, and de-prioritized by older users.

Organization

Prioritized higher by frequent card users, de-prioritized by younger users.

A detailed look

Outcomes

These findings were applied across iOS and Android marketplaces, as well as user onboarding and email communications.

The results of this 2-month study were communicated in a 60-page report to stakeholders and teams working closely with the Wallet product. Confident in our findings, Capital One then tasked my team with the copywriting and visual communication of the top 5 most resonant Wallet features, in the form of app descriptions and product screenshots. These went live in subsequent updates on the Apple App Store and Google Play Store.


My team requested and obtained internal metrics on Capital One Wallet's user acquisition performance for the 6 months prior to the application of our research; these served as benchmarks across the performance metrics below.

For the following 3 months, my team used Apple's and Google's proprietary marketplace measurement tools, along with App Annie, to monitor the change (∆) in:

  • Volume of app downloads

  • Average user ratings

  • Keyword rankings (by individual keyword)

While non-disclosure agreements with Capital One restrict the sharing of specific numbers, we recorded substantial improvements across all three metrics, and the project was considered a success overall.

Final thoughts

Thinking back, there was one huge missed opportunity.

To follow up on a very quant-heavy user segmentation exercise, it would've been quite valuable to bring the user types to life through additional qualitative research, specifically user interviews.

While our initial scoping determined there would not be enough time, it would have been entirely possible to conduct them quickly online, using qualitative insights tools like dscout or UserTribe.

I believe building these user personas would have made an impact beyond the Wallet team, serving as building blocks for the client's user-centric mobile initiatives going forward.