IUI Example – AirBnB

One of the principles I covered in my recent post on IUI Design Principles was “Reduce Cognitive Load”.

Part of the inspiration for that principle was an article I read a few years ago about a pricing tool that AirBnB built for people listing their properties.

The problem, they had discovered, was that critical moment when a person is trying to decide what price to charge when listing their property:

In focus groups, we watched people go through the process of listing their properties on our site—and get stumped when they came to the price field. Many would take a look at what their neighbors were charging and pick a comparable price; this involved opening a lot of tabs in their browsers and figuring out which listings were similar to theirs[clip] …some people, unfortunately, just gave up

Dan Hill, Product Lead @ AirBnB

There is so much to love about that insight. At the risk of stating the totally obvious, if people don’t list their properties, the supply side of the market starts to erode, which isn’t good. 🙂

AirBnB isn’t the only company that has to deal with pricing, obviously! I’ve spent much of the last 20 years in the Consumer Goods / Retail domains. Think about the number of products on the shelf at your local grocery store: every one of those prices was set by a person.

The awesome thing is how they solved this, and it is a great example of a truly intelligent interface! They built a machine learning package called Aerosolve to provide intelligent pricing recommendations to people listing their properties.

They aren’t the only company that uses algorithms to set prices. Think about ride-sharing apps like Uber, Grab, or Go-Jek: prices change based on variables like distance, weather, and demand. But those are relatively simple, known quantities, and over time the companies accumulate a tremendous amount of historical data about them, e.g., how much more are people willing to pay when it’s raining? In other words, price elasticity.
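As a toy illustration of that kind of variable-driven pricing (this is not how Uber, Grab, or Go-Jek actually price rides, and every rate and multiplier below is made up), a naive surge-style fare might look something like this:

```python
def estimate_fare(distance_km: float, demand_ratio: float, raining: bool) -> float:
    """Toy surge-style fare estimate; every constant here is illustrative, not a real rate."""
    base_fare = 2.50                    # flag-fall
    per_km = 1.20                       # distance rate
    fare = base_fare + per_km * distance_km

    surge = max(1.0, demand_ratio)      # more requests per available driver -> higher price
    weather = 1.15 if raining else 1.0  # crude stand-in for "people pay more in the rain"

    return round(fare * surge * weather, 2)

print(estimate_fare(distance_km=7.5, demand_ratio=1.8, raining=True))  # about 23.8
```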

Setting prices for a property on a site like AirBnB is a little more complicated. In a technical paper that Dan Hill wrote (which is where I got the quote above), he covers both some of the history of Aerosolve as well as some of the challenges in building it.

In addition to all the normal things you’d expect to be factors, like number of rooms, WiFi, seasonality, and so on, there are some other interesting factors that come into play. It turns out that reviews play a large part in pricing: people are willing to pay more for a listing with good reviews (which seems obvious in retrospect).
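To make the idea concrete, here is a minimal sketch of feature-based price modeling, assuming scikit-learn and entirely made-up listing data. It is not Aerosolve, just an illustration of listing attributes (rooms, WiFi, reviews, seasonality) feeding a learned pricing model:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical listing data -- the columns mirror the kinds of factors mentioned above
listings = pd.DataFrame({
    "bedrooms":     [1, 2, 3, 1, 2],
    "has_wifi":     [1, 1, 0, 1, 1],
    "review_count": [4, 120, 35, 0, 60],
    "avg_rating":   [4.2, 4.9, 4.5, 0.0, 4.7],
    "month":        [1, 7, 7, 12, 3],          # crude seasonality signal
    "price":        [80, 210, 150, 60, 140],   # nightly price actually charged
})

features = ["bedrooms", "has_wifi", "review_count", "avg_rating", "month"]
model = GradientBoostingRegressor().fit(listings[features], listings["price"])

# Suggest a price for a new 2-bedroom listing with WiFi, no reviews yet, listed in July
new_listing = pd.DataFrame([[2, 1, 0, 0.0, 7]], columns=features)
print(model.predict(new_listing))
```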

But what about big events?

(Graphic showing the effect of SXSW on pricing. Source: AirBnB)

Consider SXSW, as depicted in the graphic above, or the World Cup. Those are obvious, big examples, but cities host events all the time that don’t get that kind of press. AirBnB has to account for those as well.

It’s interesting to note here that the origin story of AirBnB is that Brian Chesky came up with the idea for the site when he wanted to attend a design conference, realized that all the hotels in SF were sold out, and decided he could pay for his conference ticket by renting an air mattress in his apartment to someone who wanted to come to SF but couldn’t get a hotel room.

One of the other really interesting things is the way they had to deal with geographic boundaries for properties. An early version of their algorithm simply drew a circle around a property and considered anything within that radius a “similar listing”, but what they discovered was that this simplistic view had a serious flaw…

Imagine our apartment in Paris for a minute. If the listing is centrally located, say, right by the Pont Neuf just down from the Louvre and Jardin des Tuileries, then our expanding circle quickly begins to encompass very different neighborhoods on opposite sides of the river. In Paris, though both sides of the Seine are safe, people will pay quite different amounts to stay in locations just a hundred meters apart. In other cities there’s an even sharper divide. In London, for instance, prices in the desirable Greenwich area can be more than twice as much as those near the London docks right across the Thames.

We therefore got a cartographer to map the boundaries of every neighborhood in our top cities all over the world. This information created extremely accurate and relevant geospatial definitions we could use to accurately cluster listings around major geographical and structural features like rivers, highways, and transportation connections.

Dan Hill
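As a rough sketch of the difference between the two approaches, here is the radius-versus-boundary idea in code, using shapely with made-up coordinates and a hypothetical rectangular “neighborhood”; it is not AirBnB’s actual implementation:

```python
from shapely.geometry import Point, Polygon

# Hypothetical listing and comparables (coordinates are illustrative only)
listing = Point(2.341, 48.857)              # "our apartment" near Pont Neuf
candidates = {
    "same_bank_studio":  Point(2.343, 48.859),
    "across_river_flat": Point(2.339, 48.853),
}

# Naive approach: anything within a fixed radius counts as "similar"
radius = 0.01
similar_by_radius = {name for name, p in candidates.items()
                     if listing.distance(p) <= radius}

# Boundary approach: only listings inside the same mapped neighborhood polygon count
neighborhood = Polygon([(2.33, 48.855), (2.35, 48.855), (2.35, 48.862), (2.33, 48.862)])
similar_by_neighborhood = {name for name, p in candidates.items()
                           if neighborhood.contains(p)}

print(similar_by_radius)        # both listings fall inside the circle
print(similar_by_neighborhood)  # only the listing on the same side of the boundary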

What a great example of improving the experience people have using a tool by applying computational intelligence.

One of the things I’m going to cover in an upcoming post is how to discover opportunities for IUI.

In the meantime, I’d love to hear what you think about what AirBnB is doing! Thoughts?

IUI Design Principles

I’m a huge fan of principles, and usually try to define them for every project I work on.

As I’ve worked on “smart” applications, like recommendation engines and other intelligent apps, I’ve found a few principles that seem to recur and thought I’d share them. This is by no means an exhaustive list, nor will all of these be applicable to every project; however, here are a few that might be a useful starting point:

Raise people’s acumen

Acumen is roughly defined as the ability to make good decisions quickly. A principle like this works really well for things like analytical tools. As an example, if you’re designing a dashboard, think about the decisions someone would make with the data and figure out how you can enable them to make better decisions, faster. Another way I’ve written this principle is “Help people make smart decisions quickly”.

Reduce Cognitive Load

This is really just a rewording of Jakob Nielsen’s age-old heuristic “don’t make people remember information”, but I really like adding some dimension to it (“reduce”), suggesting that it’s measurable.

Support the transition from Creation / Authorship to Review / Approve

As machines get smarter, their ability to create or author content grows. Computers are writing articles, generating images, and creating music. Where we used to build tools for people to create things, the future will require us to think about interfaces for people reviewing & approving content created by machines.

Be transparent: the real job is developing trust

If a machine is going to do something or make a suggestion for a person, that person should be able to see how the output was chosen. Look for ways to provide transparency in the system that gives people trust & confidence. This one ties nicely to the one above, about supporting a transition from creating to approving. I’ll share some great examples of this in upcoming posts.

Strive to provide moments of delight

One of the areas I’m particularly interested in right now is Discovery: as the amount of content (music, movies, apps, etc.) has grown, how do people find things that they’ll really love? I think there are some great opportunities to leverage computational intelligence to delight people with recommendations.

What do you think? Any you’d like to add?

IUI Example – Intelligent Remote Control

Picture the remote control from your TV / Cable / Satellite provider.

Chances are, the image in your head is similar to the image most other people come up with. We all know what a remote control looks like. There are rows and rows of buttons, some bigger than others, some with alphanumeric characters, some with symbols, some round, and some rectangular. There is something for power, volume, changing the channels, and a whole host of functions that you probably use very infrequently. Remote controls haven’t changed much in years… they are what they are.

I really began thinking about remote controls back in 2011 after reading the really good book Simple & Usable by Giles Colborne.

In the book he outlines four strategies for simplicity, and he introduces them by describing an interview exercise he runs job applicants through: he asks candidates to “simplify” the remote control for a DVD player.

Back in 2011, most people probably still had a DVD player, and this exercise presents some tricky problems.

Typically, a DVD remote has about forty buttons, and many have more than fifty; as Giles suggests, that seems excessive for a device that is used to play and pause movies. When something is that complicated, there should be plenty of scope for simplifying it. But the task turns out to be harder than you’d imagine.

Before he reveals his simplification strategies, he suggests people go off and try it themselves, and offers a template to work from. He has even posted a couple of examples of the solutions people came up with.

Giles outlines four basic categories that all the solutions he’s seen fall into.

  • Remove – get rid of all the unnecessary controls until the device is stripped back to its essential functions
  • Organize – arrange the buttons into groups that make more sense
  • Hide – hide all but the most important buttons behind a hatch so that the less frequently used buttons don’t distract people
  • Displace – create a very simple remote control with a few basic features and control the rest via a menu on the TV screen, displacing the complexity of the remote control to the TV.

Some people, he says, do a little of each, but everyone picks a primary strategy. Each has strengths and weaknesses, and he says those four strategies work whether you’re looking at something large, like an entire website, or something small, like an individual page. He goes on to describe each of the four strategies in more detail, and says that a big part of success comes from choosing the right strategy for the problem at hand.

Here is where I’ll let those of you who are interested in learning more about those strategies go get the book.

For people who own an Apple TV, you’ll notice Apple really embraced the displace strategy. Their remote is really nice, and the same goes for Roku and other modern device makers: they offer a simple device that displaces most of the functionality to the screen.

Those are nice, but there is a company that thought there might be a better way to solve this problem, and part of their app includes some IUI.  

The company is named Peel, and they built a smart remote control. They didn’t follow any of the four strategies Giles outlined; they got rid of the physical remote altogether and put it on smartphones and tablets. They’re obviously not the only company to do that part: Logitech did the same thing, as did others, including some TV manufacturers and cable providers.

What makes the Peel remote so interesting is that they completely reimagined what a remote control’s interface could be. They brought the content down to the device, so it isn’t just a bunch of buttons with alphanumeric characters on it. They actually display imagery for the content, like poster art for a movie or channel logos for networks.

Although it’s a few years old now, a 2016 report from Nielsen indicated that the average consumer watches only about 19 channels, or roughly 10% of the channels available to them. From an intelligence standpoint, it wouldn’t take long for a system to learn the ~20 channels someone watches regularly and make those the primary channels displayed in the interface.
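A minimal sketch of that idea, assuming nothing more than a log of tune-in events (the channel names and counts below are invented):

```python
from collections import Counter

# Hypothetical tune-in log: one entry per channel-change event for a single household
tune_ins = ["ESPN", "HBO", "ESPN", "CNN", "ESPN", "HBO", "Food Network", "ESPN", "CNN"]

def primary_channels(events, top_n=20):
    """Rank channels by how often they were tuned into, keeping the top N."""
    counts = Counter(events)
    return [channel for channel, _ in counts.most_common(top_n)]

print(primary_channels(tune_ins, top_n=3))  # ['ESPN', 'HBO', 'CNN']
```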

But Peel goes even further, they add smart recommendations.

Instead of making people channel surf, they actually make recommendations of shows to watch. I’m not really sure of the efficacy of those recommendations, as I’m not much of a TV person (some news, some soccer, streamed movies, etc.). Regardless of how good the recommendations are today, it’s hard to deny that the UX of the Peel remote is pretty great, and recommendations can always improve. Don’t get me wrong, I’m not suggesting that recommendations are easy; I’m sure Netflix has spent a ton of money on this, including their $1M competition. But consider what recommendations mean in the context of a remote control…

Think about the behavioral information they have access to: what channels people tune into, how often they change channels, recurrence (the same channel at some repeating interval), etc.

As a simplistic example, they know that for the last few weeks a person has changed the channel to ESPN at roughly the same time on Monday night, so about 30 minutes before that time they could display a graphic for what is coming on when that person normally tunes in. It wouldn’t take long for a smart system to learn the seasonality of sports and stop suggesting it when that “program” was no longer on.
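Here is a minimal sketch of that kind of recurrence detection, assuming only a log of timestamps and channels (the viewing history and the three-week threshold are made up); a real system would also need to age out old habits to handle the seasonality point above:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical viewing history: (timestamp, channel) pairs for one household
history = [
    (datetime(2019, 3, 4, 19, 58), "ESPN"),   # Monday evenings, roughly 8pm
    (datetime(2019, 3, 11, 20, 2), "ESPN"),
    (datetime(2019, 3, 18, 19, 55), "ESPN"),
    (datetime(2019, 3, 13, 21, 0), "HBO"),    # a one-off on a Wednesday
]

def recurring_habits(events, min_weeks=3):
    """Find (weekday, hour, channel) slots that show up at least `min_weeks` times."""
    counts = defaultdict(int)
    for ts, channel in events:
        hour = ts.hour + (1 if ts.minute >= 30 else 0)  # round to the nearest hour
        counts[(ts.weekday(), hour, channel)] += 1
    return [slot for slot, n in counts.items() if n >= min_weeks]

print(recurring_habits(history))  # [(0, 20, 'ESPN')] -> Mondays around 8pm on ESPN
```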

Both of those are a bare minimum of intelligence, but I think they still qualify as an intelligent user interface.

What do you think?

What is IUI?

IUI is an acronym for ‘Intelligent User Interfaces’. This is not really anything new. As a concept, it’s been around for more than 20 years, but given all the recent advances in Machine Learning / A.I. it’s become increasingly possible to improve the experiences we design for people by applying some computational intelligence.

I think the name says it all, user interfaces that are intelligent, but I wanted to see if I could find a really good definition. One of the first results on Google comes to us from Wolfgang Wahlster, a Computer Science professor in Germany. He defines IUI as:

Intelligent user interfaces (IUIs) are human-machine interfaces that aim to improve the efficiency, effectiveness, and naturalness of human-machine interaction by representing, reasoning, and acting on models of the user, domain, task, discourse, and media (e.g., graphics, natural language, gesture).

Wolfgang Wahlster

I really like several parts of that:

  1. He starts by calling out human-machine interfaces. We’re building something for humans, for people. Love that he starts with people first!! That is the right orientation! I also like the word Human better than User.
  2. He identifies goals for IUI – to improve the efficiency, effectiveness… of interactions. Those seem like good goals to me! We’re trying to make life better for people.
  3. Finally, he calls out how to achieve it… reasoning, and acting on models of the user, domain, task, etc. We’re going to have the computer do some work for people. Cool!

One of the other top results from Google is the IUI page on Wikipedia. I was a little less impressed by this explanation of IUI, but it’s simple:

An intelligent user interface is a user interface that involves some aspect of artificial intelligence (AI or computational intelligence)…

Wikipedia

What made the Wikipedia explanation so interesting was that they went on to include an example of IUI, and the example they cited was… Clippy!

…there are many modern examples of IUIs, the most famous (or infamous) being the Microsoft Office Assistant, whose most recognizable agentive representation was called “Clippy”

Wikipedia

If you aren’t old enough to recall Clippy, I’ll put a link at the bottom of this post to a good article on it. Basically, it was a little animated paper clip that appeared when you used Microsoft Office applications and offered to help you do things.

Let me hit pause on Clippy for a second…

One of the reasons I wanted to get a good definition for IUI is that being able to define it enables us to determine if something is actually IUI or not.

If you look at how the Wikipedia article defines IUI, then yes, Clippy is an example of it. However, I’m not sure that it really made life more effective or efficient for people, as Professor Wahlster calls out. Regardless of how successful Clippy was, though, it was an interesting idea to try, and with all the amazing advances in Natural Language Processing we have today, a modern version of Clippy could be pretty powerful in helping people write more compelling & influential documents, as one example. There are plenty of companies in the Natural Language Generation business (like Narrative Science, Automated Insights, etc.), so we know machines are capable of writing like humans, and that capability is only going to get better over time.

Returning to our definition… we have a basic overview of IUI: it’s about making life easier / better for people by applying AI to user interfaces. We’ll go ahead and offer a definition of IUI as:

Improving the acumen, acuity, and productivity of people by applying computational intelligence to experience design.

There is a great quote by William James that I think of when the subject of AI comes up:

The more of the details of our daily life we can hand over to the effortless custody of automatism, the more our higher powers of mind will be set free for their own proper work.

William James

There was a bit more to that quote, and his point was really about creating habits to reduce our daily cognitive load… but I love the quote nonetheless and think it fits here pretty well. This is really one of the promises of IUI, to reduce cognitive load when people are using technology.

On the off chance that you’re new to IUI, there is actually an annual ACM conference on the topic of IUI, and it’s entering its 24th year!

The tagline for the conference is GREAT: “Where HCI meets A.I.”

OMG! That is great! Instead of stealing it, which I thought seriously about, I’m going to tweak it a bit… my tagline is: Improving HCI by Applying AI.

As with most ACM conferences, it’s fairly technical and academic, so I don’t recommend it for most people. If you aren’t able to make it, no worries, I’ll be providing some great coverage of the event here. If you are going, let me know, I’d love to meet you!

In addition to covering conferences like IUI, some other things I’ll be covering here include examples of IUI in the real world, ideas I have, design principles, IUI patterns, interviews, and some general articles of interest. Hope you find it useful!

If you have a question, a topic you want covered, or an example of IUI that you’d like to share, I’d love to hear about it!