I wanted to call this post ‘Reducing Decision Fatigue’, but the reality is that most of the posts I’ve written here could have that title! 🙂 As I noted in my recent post about Design Principles, I think a core principle of IUI is to help people make smart decisions quickly.
One of the great papers at the 2017 AAAI (Association for the Advancement of Artificial Intelligence) Spring Symposium was ‘Communicating Machine Learned Choices to E-Commerce Users’. It was written by a team at eBay… and the basic premise was that you could use Machine Learning to guide people through a long list of products by grouping them on the attributes (new vs. used, seller rating, etc.) most relevant to the purchase decision for a given product… but doing so required making good design decisions.
> When a shopper researches a product on eBay’s marketplace, the number of options available often overwhelms their capacity to evaluate and confidently decide which item to purchase. To simplify the user experience and to maximize the value our marketplace delivers to shoppers, we have machine learned filters—each expressed as a set of attributes and values—that we hypothesize will help frame and advance their purchase journey. The introduction of filters to simplify and shortcut the customer decision journey presents challenges with the UX design. In this paper we share findings from the user experience research and how we integrated research findings into our models.
They started by analyzing historical transactions to identify the inherent value shoppers place on specific attributes, and classified those attributes as “global” or “local”. Global attributes are common across products (e.g. condition), while local attributes are specific to a subset of products (e.g. the OS version of an Android phone). Some local attributes actually replace global ones (e.g. ‘Rating’ for baseball cards replaces ‘Condition’).
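To make that split concrete, here’s a minimal sketch of how global attributes, local attributes, and the replacement rule might be represented. All names and the category/attribute data are my own illustration, not eBay’s implementation:

```python
# Hypothetical sketch of the global/local attribute split.
# Attribute and category names are illustrative, not eBay's.

GLOBAL_ATTRIBUTES = {"condition", "returns_accepted", "seller_rating"}

# Local attributes per category; a value names the global attribute
# it replaces (None if it replaces nothing).
LOCAL_ATTRIBUTES = {
    "android_phones": {"os_version": None},
    "baseball_cards": {"rating": "condition"},  # 'Rating' replaces 'Condition'
}

def effective_attributes(category: str) -> set:
    """Return the attribute set used to build filters for a category."""
    attrs = set(GLOBAL_ATTRIBUTES)
    for local, replaces in LOCAL_ATTRIBUTES.get(category, {}).items():
        if replaces:
            attrs.discard(replaces)  # local attribute supersedes the global one
        attrs.add(local)
    return attrs
```

So `effective_attributes("baseball_cards")` would include `rating` but not `condition`, mirroring the replacement example above.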
They then came up with something they called the ‘Relative Value’ of an attribute, which basically looked at the premium that shoppers paid for a product given the value of that attribute (e.g. a returnable item vs a non-returnable item).
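A crude way to picture that metric: compare the average price paid when the attribute is present against when it is absent, for otherwise comparable listings. This is my own simplification of the idea, not the paper’s actual formula:

```python
from statistics import mean

def relative_value(transactions, attribute):
    """Illustrative 'Relative Value' estimate: ratio of the mean price paid
    when an attribute is present to the mean price when it is absent.
    (A sketch of the idea, not the paper's actual computation.)"""
    with_attr = [t["price"] for t in transactions if t.get(attribute)]
    without = [t["price"] for t in transactions if not t.get(attribute)]
    if not with_attr or not without:
        return None  # not enough data on both sides to estimate a premium
    return mean(with_attr) / mean(without)
```

A value of, say, 1.1 for `returnable` would mean shoppers paid roughly a 10% premium for returnable items.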
In the above image, we see the higher price paid when an item is returnable.
They then went on to review Behavioral Signals, to determine which attributes were “Sticky” and which were “Impulsive” during a shopper’s decision-making process. Sticky attributes are ones where buyers stick to a specific value (or range) throughout their purchase journey significantly more often than random chance would dictate. Impulsive attributes are ones that correlate with impulsive transactions (a short view trail before purchase).
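One simple way to operationalize “stickiness” would be to measure how concentrated a shopper’s view trail is on a single attribute value, then compare that against a random baseline. The heuristic below is my own illustration, not the paper’s method:

```python
from collections import Counter

def stickiness(view_trails, attribute):
    """Illustrative stickiness score: for each view trail, the fraction of
    viewed items sharing the trail's most common value of `attribute`,
    averaged across trails. Scores well above a random-browsing baseline
    would suggest a 'sticky' attribute. (Hypothetical heuristic.)"""
    scores = []
    for trail in view_trails:
        values = [item.get(attribute) for item in trail]
        if not values:
            continue
        modal_count = Counter(values).most_common(1)[0][1]
        scores.append(modal_count / len(values))
    return sum(scores) / len(scores) if scores else 0.0
```

A shopper who only ever looks at ‘new’ items scores 1.0 on condition; a shopper bouncing between ‘new’ and ‘used’ scores much lower.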
Once they identified the attributes that really mattered, it then came time to figure out how to design the experience… and there were three parts that they covered:
- Filter Naming – how to communicate understandable and compelling filter titles
- Filter Overlap – how to communicate that filters are not mutually exclusive
- Filter Heterogeneity – how to communicate why eBay is displaying unrelated filter sets in close proximity
For the filter naming, each filter could include one or more attributes (global or local), and they were constrained by the need to give each filter a human-readable name. For example, for products where people prefer buying things that are new, want the flexibility of returns, and are wary of overseas shipping, they had a theme named ‘Hassle Free’.
Then the usability testing began, where they tested a variety of titles – from “emotive & engaging” to “simple & descriptive”. They discovered a few things:
- People overwhelmingly preferred simple titles.
- Item condition was the first reference frame most people locked onto.
- Longer titles, especially those with compound filters, were difficult to understand.
They landed on option B, the descriptive titles split over two lines.
One of the Design Principles that I recently wrote about was ‘Developing Trust’, so it was really cool to see the following:
> User study participants also expressed low confidence in our recommendations when the inventory covered using ML filters was smaller than that of search results. For example, when the value based filters are concentrated on one or two attributes, significant inventory may be left out. We addressed this concern by taking inventory coverage into consideration in our ML research.
They then go on to say…
> We also added navigation links for shoppers to explore the entire inventory beyond our recommendation, which has helped us gain users’ trust in our recommendations. These links to “see all inventory” also provide easy access to listings not highlighted by our filter-sort formula, in support of cases where a shopper’s ‘version of perfect’ went undetected by our analysis.
This is such a cool example of leveraging machine learning to help people make decisions.
What do you think?