IUI Example – Creative Inspiration, from Google

If you read any of “the year ahead” predictions for 2019, or even ones from the last few years, one thing you’ll undoubtedly come across is that Robots or AI will eventually take everyone’s job… maybe not today, or tomorrow, but eventually. I personally don’t buy this line of thinking, and think that Marc Andreessen had it ‘mostly’ right in a post he wrote way back in 2014: “This is probably a good time to say that I don’t believe that robots will eat all the jobs…”

One of the most interesting topics in modern times is the “robots eat all the jobs” thesis. It boils down to this: Computers can increasingly substitute for human labor, thus displacing jobs and creating unemployment. Your job, and every job, goes to a machine.

This sort of thinking is textbook Luddism, relying on a “lump-of-labor” fallacy – the idea that there is a fixed amount of work to be done…The counterargument to a finite supply of work comes from economist Milton Friedman — Human wants and needs are infinite, which means there is always more to do. I would argue that 200 years of recent history confirms Friedman’s point of view.

Marc Andreessen

Marc then goes on to rip apart the “Robots take all the jobs” eventuality with a number of really compelling arguments and thought experiments.

The post is incredible and I’m probably not smart enough to do it justice, but let me try to offer a quick summary of his arguments and then I’ll provide my take… and circle back to the creative inspiration title of this post!

The overly simplistic version of the points in his very long post is:

  1. For the Luddites to be right today (that robots / AI will take all the jobs), one has to believe that there won’t be any new wants or needs (which runs counter to human nature) and that people won’t contribute to those things being created.
  2. While it’s true that automation / technology displaces work (and that is something that must be addressed), the flip side is that the same automation / technology increases standards of living. The way that I’ve always thought about this is all the safety features in cars – blind-spot monitoring systems were only available in high-end vehicles a few years ago and now most cars have them. That is a life-saving feature, not just a cheaper big-screen TV. Technology enables both of these…
  3. He offers three suggestions to help people who are hurt by technological change: first, focus on increasing access to education. Second, let markets work. Third, create & sustain a vigorous social safety net… with those three things in place, humans will do what they’ve always done: create things to address and/or create new wants and needs.

Now, I love all three of those, and it would probably be enough to stop there… but Marc goes MUCH further…

  • The flip side of robots eating jobs is that this same technological advancement also democratizes manufacturing – it puts the power of production in everyone’s hands! I love this thinking!
  • Costs go down / things get cheaper… robots producing things means lower costs, which lead to falling prices, which stretches people’s purchasing power and raises people’s standard of living.

He wraps up the long post by restating his “this is a good time to state that I don’t think robots will eat all the jobs…” and offering the following… which I’ll take one by one…

First, robots and AI are not nearly as powerful and sophisticated as I think people are starting to fear. Really. With my venture capital and technologist hat on I wish they were, but they’re not. There are enormous gaps between what we want them to do, and what they can do.

Marc Andreessen

I think Marc is still right on the first one. I’m a goofy optimist. Always have been, always will be. There is a line that someone used on me once that I’ll never forget: “don’t mistake a clear vision for short distance” and while technically true given the subject we were discussing, one never knows where the next breakthrough is going to come from! Again, he was right in 2014, but probably less so every day.

One other point I’d like to make here is that they don’t have to be as powerful & sophisticated as people talk about in the general press. What I’m focused on are the specific examples of real progress. I think we are a long way from the “Singularity” (great video here), but trying to pinpoint its arrival isn’t really that exciting to me, not practically anyway.

The questions to really be asking are: “which jobs”, “when”, and “how”. My thought is that it won’t be some zero-sum game.

Second, even when robots and AI are far more powerful, there will still be many things that people can do that robots and AI can’t. For example: creativity, innovation, exploration, art, science, entertainment, and caring for others. We have no idea how to make machines do these.

Okay, this is one of the main reasons I wanted to write about this. As someone who works in a creative field, creativity is something that I’m incredibly interested in… so I was a bit floored when I started digging into Magenta…

A primary goal of the Magenta project is to demonstrate that machine learning can be used to enable and enhance the creative potential of all people.

There is a lot to Magenta, so I’ll focus on two parts that really caught my attention.

First is Sketch-RNN – given a source sketch, it will auto-generate additional sketches. Pretty rudimentary, but two things: first, it starts from a rudimentary source image; second, imagine what this could do over time.

Second is Music Transformer – given a starting sequence, it will generate music with long-term coherence to the original sample provided.

It doesn’t take much imagination to come up with ideas on where we’re going to start seeing uses for this type of innovation. If you’ve ever played with GarageBand on your iOS device (or the full-blown Logic Studio on a Mac, or similar DAW software), you can see that we’re witnessing a further advancement in the democratization of creating art & entertainment.

As designers, the sketching stuff should be of particular interest. One of the most important parts of my design process is the Diverge / Emerge / Converge diamond. I can see a future where tools like this will help us quickly explore more divergent ideas.

You should really go check out the samples on that page, they are truly incredible.

And check out some of the other demo apps.

I thought about ending here, but there are two more points to cover, so back to Marc…

Third, when automation is abundant and cheap, human experiences become rare and valuable. It flows from our nature as human beings. We see it all around us. The price of recorded music goes to zero, and the live music touring business explodes. The price of run-of-the-mill drip coffee drops, and the market for handmade gourmet coffee grows. You see this effect throughout luxury goods markets — handmade high-end clothes.

Marc Andreessen

I love this line of thinking. Imagine a future where we see labels in clothes that read “Made by Humans”, like the “Made in the USA” labels we see today.

Finally, his last point…

Fourth, just as most of us today have jobs that weren’t even invented 100 years ago, the same will be true 100 years from now.

Marc Andreessen

That final point is so true. Marc actually ends his post stating he is ‘way long’ on human creativity, as am I…

What do you think? Will technology take our creative jobs away? Change them? Thoughts?

What’s Next in Tech – Macro Drivers

A few years ago I had the good fortune of getting to build and help lead an Advanced Technology Lab for a large consumer goods company. During that time I created something called the ‘Future Opportunity Framework’. The goal of the framework was to give the company the ability to “peek around corners” when it came to the future of technology… to create a practical toolkit to drive prioritized investments and action.

There were four main sections of content:

  1. Macro Drivers – these were the big factors that were fundamental in helping to understand where things were headed. I’m going to share a handful of them in this post.
  2. Models of Change – more of a sidebar, actually. Just an outline of the 6 major types of change (e.g. Linear, Asymptotic, Cyclic, etc.).
  3. Technology Insights – this was the bulk of the content, a little more than two dozen insights that could be used as input into the ideation process.
  4. Framework Process – a simple What, So What, Now What process that could be used to leverage the content above to generate hypotheses, identify opportunities, and determine next steps.

I’ll be sharing the Models of Change and some of the Technology Insights in future posts, but wanted to get some of the macro drivers out in order to reference them later. None of these should really be much of a surprise; these were some of the big things going on in the world that helped explain some of the changes we were seeing.

Here are 6 of the original 9 macro drivers from early 2015…  again, these should all be fairly obvious…

Increasing Number of Software Libraries & Open Source Projects

Everything is now an API. From accepting payments (Stripe) to incorporating Machine Intelligence (Mahout, DeepLearning4J, etc.), developers can find software libraries & open source projects for just about anything. This isn’t just the surface stuff either; we’re seeing an explosion in the availability of software infrastructure & management code as well. A great example is when Netflix open sourced their Simian Army, an API to improve the availability & reliability of the Netflix service. Netflix is not alone in this; leading tech companies are all donating leading-edge software libraries to the open source community, including Google, Facebook, Twitter, and even Walmart Labs.

Decreasing Cost to Build & Launch

There are two main parts to this driver. First, as mentioned above, the very practical implication of the increasing availability of quality software libraries means that building a product has never been cheaper (or faster). Second, we’re seeing tremendous growth in cloud environments, like AWS (Amazon Web Services). Any developer in the world can now provision a load-balanced, highly available, fault tolerant, n-tier server environment with a few mouse clicks and only pay for the actual usage.

Shifting Needs & Sources of Funding

This is a potential disruption to Silicon Valley. Given 1 & 2 above, tech companies need less funding to get started. Startups no longer need to go to a VC and give away a boatload of equity while the company is pre-revenue or still at the seed stage; it’s now easy and cheap to get pretty far, whereas in the past building out even the basic concept of an idea required VC funding. The other major part of this is the tremendous growth of crowdfunding sites, like Kickstarter and Indiegogo: got an idea for a product? Post it on a crowdfunding site and let the community fund it from the start. Although not every project on crowdfunding sites gets funded, and not all the projects are worthwhile, we have seen some major success stories – like Pebble and Oculus – that provide evidence of the power of these platforms to rewrite the role of venture funding.

Increasing Interest & Investment in Machine Intelligence

It seems everyone is doing something with AI lately: Watson winning on Jeopardy, Facebook opening an AI Lab, Baidu opening an AI Lab (and hiring very senior talent away from Google), and Google investing in AI companies at a rapid pace to build out its capabilities. Every day, it seems, there is a new advancement made, software library released, or news story about it.

Maturing Hardware Possibilities & Capabilities

For years the only hardware on the market came from large, well-established players, because building hardware is hard and expensive. That is really starting to change, rapidly, for two main reasons: first, crowdfunding sites enable people to sell products before they’re even built, offsetting the cost component. Second, manufacturing capabilities are on the rise, from low-cost options in China to specialized firms that help bring a hardware vision to life by providing experience & expertise to novice hardware companies. One great example of this is the home automation company SmartThings, which recently sold to Samsung; they started on Kickstarter and raised the funds to build out their hardware platform.

Increasing Technology Adoption

This might seem incongruent with the rest of the list, as the other items focus on the nuts & bolts of tech, whereas this one takes more of a consumer lens. It matters because people are adopting tech to handle more and more parts of their lives. It started with the web; now we’re all carrying really powerful smartphones and relying on them more and more, and people have an increasing comfort using tech. This is a virtuous cycle that leads more people to start building tech products, because the audience is growing and it is becoming cheaper & easier to do so.

What do you think? Any that you’d disagree with?

IUI Example – eBay

I wanted to call this post ‘Reducing Decision Fatigue’, but the reality is that most of the posts I’ve written here could have that title! 🙂 As cited in my recent post about Design Principles, I think a core principle of IUI is to help people make smart decisions quickly.

One of the great papers at the 2017 AAAI (Association for the Advancement of Artificial Intelligence) Spring Symposium was ‘Communicating Machine Learned Choices to E-Commerce Users’. It was written by a bunch of folks at eBay… and the basic premise was that you could use machine learning to help guide people through a long list of products by grouping them based on the attributes (new vs. used, seller rating, etc.) most relevant to the purchase decision for a given product… but doing so required making good design decisions.

The abstract:

When a shopper researches a product on eBay’s marketplace, the number of options available often overwhelms their capacity to evaluate and confidently decide which item to purchase. To simplify the user experience and to maximize the value our marketplace delivers to shoppers, we have machine learned filters—each expressed as a set of attributes and values—that we hypothesize will help frame and advance their purchase journey. The introduction of filters to simplify and shortcut the customer decision journey presents challenges with the UX design. In this paper we share findings from the user experience research and how we integrated research findings into our models.

They started by analyzing historical transactions to identify the inherent value placed on specific attributes, which they classified as “global” or “local”. Global attributes are ones that are common across products (e.g., condition) and local attributes are ones that are specific to a subset of products (e.g., the OS version of an Android phone); some local attributes actually replace global ones (e.g., ‘Rating’ for baseball cards replaces ‘Condition’).

They then came up with something they called the ‘Relative Value’ of an attribute, which basically looked at the premium that shoppers paid for a product given the value of that attribute (e.g. a returnable item vs a non-returnable item).

In the above image, we see the higher price paid when an item is returnable.
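To make the ‘Relative Value’ idea concrete, here is a minimal sketch of computing the premium for one attribute value over another. This is my own illustration with made-up numbers, not the paper’s actual model:

```python
from statistics import mean

# Hypothetical sold-item records for one product: (price, attributes).
transactions = [
    (120.0, {"returns": "accepted"}),
    (115.0, {"returns": "accepted"}),
    (100.0, {"returns": "not accepted"}),
    (95.0,  {"returns": "not accepted"}),
]

def relative_value(transactions, attribute, value, baseline):
    """Average premium paid for `value` over `baseline` on one attribute."""
    def avg(v):
        return mean(p for p, attrs in transactions if attrs.get(attribute) == v)
    return avg(value) / avg(baseline) - 1.0

premium = relative_value(transactions, "returns", "accepted", "not accepted")
print(f"{premium:.1%}")  # → 20.5%: shoppers paid a premium for returnable items
```

With these toy numbers, returnable items averaged $117.50 against $97.50 for non-returnable ones, so the attribute carries roughly a 20% premium.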

They then went on to review behavioral signals to determine which attributes were “Sticky” and which were “Impulsive” during a shopper’s decision-making process. Sticky attributes are ones where buyers stick to a specific value (or range) through their purchase journey significantly more than random chance would dictate. Impulsive attributes are ones that correlated with impulsive transactions (a short view trail before purchase).
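As a rough illustration of stickiness – my own toy metric, not the paper’s – you can compare how often a shopper’s viewed items share a single attribute value against how often random browsing of the inventory would produce that value:

```python
from collections import Counter

def stickiness(journey_values, population_share):
    """
    Toy stickiness score: how much more often a shopper's viewed items
    share their most common value for an attribute than random browsing
    of the inventory would predict. Scores well above 1.0 suggest the
    shopper is "sticking" to that value.
    """
    counts = Counter(journey_values)
    top_share = counts.most_common(1)[0][1] / len(journey_values)
    return top_share / population_share

# A shopper who viewed 8 items, 7 of them "new", when only 40% of the
# inventory is "new", is sticking to that value at ~2.2x chance.
score = stickiness(["new"] * 7 + ["used"], population_share=0.40)
print(round(score, 2))  # → 2.19
```

The `population_share` parameter is an assumption of this sketch (the base rate of the value across inventory); the paper’s actual statistical test isn’t specified here.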

Once they identified the attributes that really mattered, it then came time to figure out how to design the experience… and there were three parts that they covered:

  • Filter Naming – how to communicate understandable and compelling filter titles
  • Filter Overlap – how to communicate that filters are not mutually exclusive
  • Filter Heterogeneity – how to communicate why eBay is displaying unrelated filter sets in close proximity

For the filter naming, each filter could include one or more attributes (global or local), and they were constrained by the need to identify each filter with a human-readable name. For example, for products where people prefer buying things that are new, want the flexibility of returns, and are wary of overseas shipping, they had a theme named ‘Hassle Free’.

Then the usability testing began, where they tested a variety of titles – from “emotive & engaging” to “simple & descriptive”. They discovered a few things:

  1. People overwhelmingly preferred simple titles.
  2. Item condition was the first reference frame most people locked into.
  3. People found longer titles, especially those with compound filters, difficult to understand.

They landed on option B, the descriptive titles split over two lines.

One of the Design Principles that I recently wrote about was ‘Developing Trust’, so it was really cool to see the following:

User study participants also expressed low confidence in our recommendations when the inventory covered using ML filters was smaller than that of search results. For example, when the value based filters are concentrated on one or two attributes, significant inventory may be left out. We addressed this concern by taking inventory coverage into consideration in our ML research.

They then go on to say…

We also added navigation links for shoppers to explore the entire inventory beyond our recommendation, which has helped us gain users trust in our recommendations. These links to “see all inventory” also provide easy access to listings not highlighted by our filter-sort formula, in support of cases where a shopper’s ‘version of perfect’ went undetected by our analysis.

This is such a cool example of leveraging machine learning to help people make decisions.

What do you think?

IUI Example – Google Flights

Google just rolled out a new feature to Google Flights that is pretty cool: they are now predicting whether your flight is going to be delayed. As I’ve mentioned previously, I love to travel and really dig any innovation that will improve my travel experience.

As I read about this, I couldn’t help but recall a presentation I saw by Aparna Chennapragada, VP of Product Google, during an O’Reilly AI conference.

All systems imperfect — there will be a precision / recall tradeoff in almost any system that you rely on. But what you want to pay attention to, as a practitioner, is the cost of getting it wrong. Let me give you an example. Let’s say that you’re building a search system and you return a slightly less relevant article in a search result… it’s not the end of the world. But then let’s say that you build a local search product, where you inform the person searching that, yes, Home Depot is open, you should go now. Then, the person gets in the car, goes to Home Depot, and it’s closed, and they say “What the Hell?”. The cost of doing that, the cost of getting that wrong is higher.

She then gives the example of when they were building Google Assistant…

When we were working on the Google Assistant, and we say, hey, your flight is on time, you can leave right now and it takes 45 minutes to get to the airport and then you go to the airport and you miss the damn flight and can’t speak at the conference, then the “What the Hell” is much higher.

There are a number of reasons a flight can be delayed or cancelled:

  • Mechanical Issue with the plane
  • Weather (at both the departure as well as the destination airport)
  • Late inbound aircraft
  • Crew
  • Etc.

What Google seems to be doing is simply tracking the inbound aircraft by gate number: if a flight to, say, New York is supposed to depart at 8:21 PM and the incoming flight to that gate is delayed, there is a great chance the New York flight will be delayed too. I’m sure they are doing more than that; they probably have tons of historical data and some good algorithms that take things a little further.
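A toy version of that inbound-aircraft heuristic might look like the following. This is purely my guess at the shape of the logic, not Google’s actual system, and the 45-minute turnaround time is an assumed number:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Flight:
    scheduled_departure: datetime
    inbound_arrival_estimate: datetime  # when the aircraft reaches the gate
    turnaround: timedelta = timedelta(minutes=45)  # assumed minimum turnaround

def predicted_delay(flight: Flight) -> timedelta:
    """Estimate departure delay from the inbound aircraft's arrival alone."""
    earliest_departure = flight.inbound_arrival_estimate + flight.turnaround
    delay = earliest_departure - flight.scheduled_departure
    return max(delay, timedelta(0))  # an early inbound aircraft means no delay

# A flight scheduled for 8:21 PM whose aircraft won't arrive until 8:00 PM
# can't leave before 8:45 PM under a 45-minute turnaround: ~24 minutes late.
nyc = Flight(
    scheduled_departure=datetime(2019, 1, 10, 20, 21),
    inbound_arrival_estimate=datetime(2019, 1, 10, 20, 0),
)
print(predicted_delay(nyc))  # → 0:24:00
```

The real system would presumably blend signals like this with historical delay data, weather, and crew constraints before showing anything to a traveler.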

As a side note, each one of these is well known, and airlines have operational departments to deal with issues as they arise. I even read a great book a few years ago – ‘A New Approach to Disruption Management For Airline Operations Control’ – that went into detail about a proposed multi-agent, intelligent system to improve operations. I also talked about a Smart Airport system in a recent post.

The big takeaway here is that when you’re building things like this, it’s really critical to understand the costs of being wrong and what it means to the person using it!

What do you think?

Defining the Intelligent part of Intelligent User Interfaces

Since I’m talking about “AI” here, and took the time to define IUI, I figured it was probably worth some time offering my thoughts on AI…

Let me start off with a disclaimer: I’m not a machine learning engineer. I’ve never written a line of production AI code… I say production code on purpose, because the truth is, I have written some code that I consider to be “smart”, but more as a hobby. I actually began my career back in 1998 as a software engineer and took an interest in AI quite some time ago. I moved from software development to design about 5 years later and haven’t written production software since about 2003.

I first started digging into AI back in the early 2000s, and the book that served as my introduction was ‘Constructing Intelligent Agents using Java’. The principles I learned from reading it, and from playing around with some of the sample applications, became the foundation of my understanding of AI. I’ve built a few little things along the way, mostly hobby projects – a little spam detection app and a news reading app that grouped similar stories (so I didn’t have to see 10 different versions of the same “Apple announces new iPhone” story).

As someone who conceives and designs products for a living, I think it’s really helpful to have a solid understanding of what is possible with technology, so I’ve made sure to stay up to date with what is happening in the field as much as possible.

The way that I think about it is, there are really two kinds of “AI”, sometimes referred to as Narrow AI and Strong AI.

Narrow AI applications are optimized for a single problem or domain. As an example, there is a company that makes a system called Smart Airport; the domain is airport logistics – this is narrow AI aimed at optimizing the flow of an airport (planes, people, etc.). Just think about the domino effect of one delayed plane, a bad snowstorm, or mechanical issues with a plane. A smart system can run through all the scenarios for recovering from that much faster than humans can. Narrow AI can be incredibly smart; however, if you tried to use a system tuned to the operational efficiency of an airport to run the logistics of a different domain, say a sports venue, it would probably fail miserably.

Strong AI, on the other hand, is what most people think of when they talk about AI: “human-like intelligence”. As an example, we weren’t born with the knowledge of how to use the Internet, but we learned how to use it. That was not “programmed in”. Nor were we born with the knowledge of how to design things; we learned that. Strong AI, or “General Intelligence”, is best characterized by its ability to learn to operate in any domain.

Recently there has been a lot of talk about Deep Learning, and the way that I like to think about this is that it falls somewhere between the two.

DeepMind, in case you haven’t heard of them, is an artificial intelligence company that Google acquired. They developed a Deep Learning algorithm that actually learned how to play – and win – video games. It started with some of the classic arcade-style video games many of us grew up with, like Pac-Man. Okay, you’re probably thinking, so? Who cares? Well, what made it so astonishing is that they didn’t teach the system the rules of the games; they just let it play until it learned the rules on its own. One of the games it learned to play was Boxing, and not only did it learn how to win, it learned how to optimize winning: it found, on its own, that you could pin the opponent in a corner and run up the score. At the time Google acquired them, it had mastered about two-thirds of the 35 or so games it was learning to play.

Fast forward to 2015, and DeepMind accomplished what most people thought was an impossible task for artificial intelligence: it beat a human champion at Go. In case you’re not familiar with it, Go is an ancient Chinese game in which you place stones on a 19-by-19 board and capture your opponent’s stones by surrounding them. The rules are very simple, but they give rise to a complex, subtle game.

There are a number of articles online that describe why this accomplishment is such a big deal, but the simple explanation is that unlike chess, which computers conquered largely through brute-force search, every single move in Go gives rise to vastly more possible responses. If the average number of possible responses to a move in chess is about 35, in Go the “branching factor” is about 250. To give you a sense of what that means: if you want to think two moves ahead in chess, there are about 1,225 moves to consider (35 x 35). In Go, it is about 62,500 (250 x 250), and three moves ahead would be 15,625,000 (250 x 250 x 250).
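That arithmetic can be sketched in a few lines of Python (the branching factors are the commonly cited approximations, not exact values):

```python
# Rough comparison of lookahead sizes in chess vs. Go, using the
# approximate average branching factors cited above.
CHESS_BRANCHING = 35
GO_BRANCHING = 250

def positions_to_consider(branching_factor: int, moves_ahead: int) -> int:
    """Number of move sequences when looking `moves_ahead` moves ahead."""
    return branching_factor ** moves_ahead

print(positions_to_consider(CHESS_BRANCHING, 2))  # → 1225
print(positions_to_consider(GO_BRANCHING, 2))     # → 62500
print(positions_to_consider(GO_BRANCHING, 3))     # → 15625000
```

The exponential growth is the whole story: a few moves deeper and brute-force enumeration in Go is hopeless, which is why AlphaGo needed something beyond search.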

Winning at Go requires a sort of intuition about the board, something a champion develops. That kind of intuition is something brute-force algorithms just can’t replicate.

Based on what I wrote above, you’d think that DeepMind was Strong AI, because it learned on its own and made decisions to achieve its goals. However, the test of whether Deep Learning is really Strong AI is whether it could function in other domains. Could the same Deep Learning algorithm that learned to play & win those video games also run a supply chain? I’m not sure; my suspicion is that it couldn’t, therefore my guess is that it falls somewhere between Narrow & Strong (but honestly, I’m not really smart enough to know for sure).

Just to be complete here, there is actually another category, called Super Intelligence, which is really a natural evolution of General Intelligence: if AGI can learn, then why can’t it learn to improve itself? This is what all the commotion has been about over the last couple of years, with people like Elon Musk warning about our impending AI doom.

Back in 2014, I had the good fortune of attending ICRA, the International Conference on Robotics & Automation. One of the workshops I attended was the Workshop on General Intelligence for Humanoid Robots. The guy who organized that is a pioneer in the field of General Intelligence, Ben Goertzel. During his presentation he said something that has really stuck with me:

…so, AI includes, conceptually, making systems that are intelligent like C3P0, HAL9000 or a thousand times smarter than any human being. The field of AI also includes “Expert Systems“ for say, medical diagnosis, that just goes through a hand coded list of rules or say a neural net control system for a self-driving car, which is highly specialized for that type of car. AI is a very big umbrella, so it’s not particularly clear where AI leaves off and algorithms begin, is there really a big difference between all the algorithms in an AI textbook and the algorithms in an algorithms textbook? Drawing borders between disciplines is not the most interesting thing, the world cuts across all the disciplinary boundaries anyway.

Ben Goertzel

That notion – that it’s all just algorithms – has always been something I’ve kept in mind when talking about “AI”.

At the end of the day, for the purposes of this blog, I’m going to consider something intelligent as long as it loosely conforms to the definition I laid out in my first post here (What is IUI?):

Improving the acumen, acuity, and productivity of people by applying computational intelligence to experience design.

What do you think?

IUI Example – Kayak

This post is incredibly personal: I love to travel. Just look at my profile on Twitter…

Anyway… 🙂

One of the biggest questions someone has when they’re looking to book a flight is whether the price will go up or down, in other words, should they buy now or wait?

Kayak offers people an answer to this question with a little indicator “OUR ADVICE”.

PURCHASE ADVICE ON KAYAK

If you read my recent post on IUI Design Principles, the very first one was “Raise People’s Acumen”:

Acumen is roughly defined as the ability to make good decisions, quickly. Where a principle like this works really well are for things like analytical tools. As an example, if you’re designing a dashboard, think about the decisions that someone would make with the data and figure out how you can enable them to make better decisions, faster. Another way that I’ve written this principle is “Help people make smart decisions quickly”.

This is so perfect. They are answering that critical question of whether or not to buy now.

But they don’t stop there. They also follow one of my other main principles when building Intelligent User Interfaces: “Be transparent, the real job is developing trust.”

If a machine is going to do something or make a suggestion for a person, they should have the ability to see how that output was chosen. Look for ways to provide some transparency in the system that gives people trust & confidence.

They put a little ‘i’ icon that people can click to see some detail behind the advice.

This is so brilliant.

I’m not sure the explanation is quite as robust as it could be, however…

Let me explain…

This incredible little innovation didn’t originate at Kayak. The company that invented it was actually called Farecast, which Microsoft acquired in 2008… and, shockingly, they don’t offer this when you search for flights on their site.

One of the reasons that I know that the explanation could be better is because I know some of the history of Farecast.

Farecast was founded by Oren Etzioni, a computer science professor at the University of Washington. He came up with the idea back in 2002 when, on a flight, he learned that the people sitting next to him had paid much less for their tickets simply by waiting until a later date to buy them. So he had a student try to forecast whether particular airline fares would increase or decrease as the travel date approached. With just a little bit of data, the student was able to make pretty accurate predictions on whether someone should buy or wait.

From there, Etzioni built Farecast. It was just like other online airfare search sites (OTAs), with one major addition: an arrow that simply pointed up or down, indicating which direction fares were headed.
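To give a flavor of that up/down arrow, here is a toy buy-or-wait signal based on a simple fare trend. This is only an illustration of the idea; Farecast’s real model was far richer, as described below:

```python
from statistics import mean

def purchase_advice(daily_fares, window=7):
    """
    Toy buy/wait signal: compare the recent average fare against the
    longer-run average. Rising fares -> BUY now; falling -> WAIT.
    """
    recent = mean(daily_fares[-window:])
    overall = mean(daily_fares)
    return "BUY" if recent > overall else "WAIT"

# Fares dipped, then started climbing: recent average exceeds the
# longer-run average, so the arrow points up -- buy now.
fares = [310, 305, 300, 298, 295, 299, 305, 312, 318, 325]
print(purchase_advice(fares))  # → BUY
```

The `window` parameter and the moving-average comparison are assumptions of this sketch; the real product weighed many more signals than recent price direction.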

The company, which was originally named Hamlet and had the motto “to buy or not to buy”, was built using 50 billion prices that it bought from ITA Software (which was acquired by Google in 2010). ITA is a company that sells price information to airlines, websites, and travel agents, and has information for most of the major carriers. When Farecast bought the data from ITA, it didn’t have prices for JetBlue or Southwest, but it could indirectly predict fares for those carriers based on fares from the carriers it did have pricing data for.

Farecast based its predictions on 115 indicators that were reweighted every day for every market. They paid attention not just to historical pricing patterns, but also to a number of other factors that shifted the demand or supply of tickets – things like the price of fuel, the weather, and non-recurring events like sports championships… anyone buy their tickets for Qatar 2022 yet? 🙂

As mentioned at the start of this post, I love travel… and there are some other travel examples I’ll be sharing in upcoming posts (including one from Google Flights).

What do you think? Can you think of any other examples of leveraging IUI for travel?

The 5 Reasons Design Deliverables Still Matter

Picture of a design space from a project I worked on

I’m going to go a bit off topic here, but still very much design related…

One of my favorite articles of this past year was the wonderful (manifesto?) “The only thing that matters” by Josh Clark. I found it via the UX Design Weekly newsletter from Kenny Chen.

The basic idea he argued for was that highly polished design artifacts (wireframes, user journeys, etc.) aren’t as valuable as they may seem. The only thing that matters is the deliverable itself: the product. Those artifacts still have a place, he says, but the place to spend your time is on the product itself, not on creating a bunch of pretty design documents.

I found myself nodding my head in agreement continuously as I was reading the article. I think he is spot on in so many ways.

There is much to love about the sentiment here. The artifacts are a means, not an end. We get paid for the product! We get paid for shipping something! SO true! The goal is to make the right thing, not produce a bunch of stuff that no one will ever look at again…

As much as I agree with Josh and nearly everything he outlined, I still think there are a number of reasons why some highly polished design deliverables still matter. This isn’t a rebuttal to what he outlined; it’s more to advocate for an “AND”… to say that there are indeed some reasons for producing some of those highly polished design deliverables…

1.) Synthesis

Experience Map I built for a project I worked on

I still recall the very first Experience Map that I created. I’m not sure any one of my stakeholders spent more than a few moments looking at it; however, creating it helped me synthesize my understanding of the problem domain. One could argue that you don’t need a highly polished artifact for this, and that’s true. However, the “picture” enabled me to quickly zoom into an area I wanted to focus on. If I was trying to understand the “plan” part of the experience, I knew exactly where to focus my attention.

As I was writing this, the thing that kept running through my head was:

Creative design seems more to be a matter of developing and refining together both the formulation of a problem and ideas for a solution, with constant iteration of analysis, synthesis and evaluation processes between the two notional design ‘spaces’ – problem space and solution space. In creative design, the designer is seeking to generate a matching problem-solution pair, through a ‘co-evolution’ of the problem and the solution.

Kees Dorst & Nigel Cross, “Creativity in the design process: co-evolution of problem–solution”

Or, to simplify that, how does one ensure that they’re “making the right thing”?

The following picture illustrates what I consider to be a very basic design process.

My basic design process

What I want to call out here is the Discovery, Synthesis, Problem Framing Loop. The reason that I call that out so explicitly is that I think we often get pushed into design too quickly, and I’m a big fan of making sure we’re solving the right problem. There is a line I heard once that I really love: we teach people how to solve problems, but not how to find the right problem to be solved. I love that. Synthesis, for me, is where that work begins.

2.) Thoroughness

Taking the time to create these helps ensure that I don’t miss anything. If I’m creating a Persona, as an example, the template causes me to spend time considering each of the sections, and ensures I don’t miss anything. Yeah, I get that this doesn’t mean the output has to be polished, but why not spend the extra few minutes? I’m sure I’m not the only one that has a template for personas, and the delta between creating one in a spreadsheet and creating one that looks nice is about 15 minutes… well worth the time if you ask me… which leads me to the next one…

3.) Evidence

We did the work, we explored the problem space, and here is the proof. Chances are there are some people on the team whose contribution would be invisible without these “highly polished artifacts”. If you’re selling services to a client, or trying to convince your boss you need to hire another design researcher, why would anyone believe those roles are necessary without something to show for them? I get that I’m being a bit hyperbolic here, and at the end of the day the thing that matters is the working software (or whatever it is). However, unless the organization you’re working for has a high degree of design maturity, I think some of these highly polished artifacts are extremely valuable as evidence of the work that was done.

Dan Brown has written some of the best stuff out there on design deliverables… including a really great book, Communicating Design.

4.) Validation

They say that a picture is worth a thousand words. Along those lines, showing someone a nice journey map, or handing them a quick, interactive prototype, instead of making them read through a spreadsheet or Word doc, is a better experience and helps validate direction. There is an old saying that I really love here:

  • Tell me, and I may hear you.
  • Show me, and I may see it.
  • Involve me, and I’ll completely understand.

Giving someone a prototype and letting them click through can yield some great insights. I’m sure we can all recall some design idea we’ve fallen in love with that didn’t quite hit the mark once people saw it. For me, I want to learn about these types of things as cheaply as possible.

I’m really a huge fan of what Josh proposes in terms of writing code early and favoring working software over prototypes. Throwing designs over the wall (if we’re still doing that) creates a lot of back & forth between design and engineering, and working in code early cuts that down considerably.

Interestingly, I recently read ‘Creative Selection: Inside Apple’s Design Process’, and one of the main ideas of the book was that the most important thing you can do is demo your stuff: let people see it and try it, early & often! There is no substitute for feedback!

5.) Portfolio

As funny as this may sound, I think these design deliverables are an important part of a design portfolio. It’s great to see the finished product, but showing the process someone went through to come up with the idea and the deliverables that were generated are, to me, just as interesting… they are a way to get an understanding of how someone thinks, how they approach a problem.

Again, I really love and agree with the ideas Josh proposed, and think we need to move much closer to a just-in-time, collaborative design process! I recently did this on a project, where we brought in a UI Engineer to start building the designs as they were emerging, and it worked out beautifully! Instead of handing off wireframes, we handed off working front-end code, and our usability testing was much more effective because we could code in some conditional logic…

What do you think? Have you changed your design process to be more collaborative? I’d love to hear your thoughts!