‘Overchoice’ and UX design: a Q&A with cognitive scientist Rachel Watson-Jones

User experience researcher explains “choice overload” and how to guard against it.

By Nicole Reineke, Distinguished Engineer, Dell Technologies

In our tech-fueled lives, it’s common to feel a sense of paralysis when presented with too many choices. (Personally, I face this every time I search for something new to watch on an online streaming service.)

In the fast-moving world of enterprise technology, business leaders and IT decision-makers often find themselves overwhelmed by the options on the table. Overthinking a decision, or “analysis paralysis,” can stall progress and, further down the line, leave an organization shut out of the digital economy.

To explore how the human brain responds to an overload of information, I sat down with Rachel Watson-Jones, Ph.D., a cognitive scientist and user experience researcher at Dell Technologies.

Q. There’s a popular notion, backed up by scientific studies, that too many options can lead to not making any decisions at all. What’s your experience with this phenomenon?

A. Humans can hold only so much information in working memory at any given time, and making important decisions over and over can be very fatiguing. At the same time, there’s data to back up the idea that consumers crave the freedom of choice. Some within the scientific community believe the “overchoice” phenomenon follows an inverted-U curve: when people have too little choice, they experience very low levels of satisfaction, but when the number of choices is too high, they’re similarly dissatisfied.
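The inverted-U idea can be sketched as a toy model. This is purely illustrative (the quadratic shape and the “sweet spot” of five options are assumptions for the sketch, not figures from the research):

```python
# Toy model of the inverted-U "overchoice" curve: satisfaction rises as
# options increase from zero, peaks near a hypothetical sweet spot, then
# falls again as choices pile up. The quadratic form and the sweet spot
# of 5 are illustrative assumptions, not empirical parameters.
def satisfaction(n_choices, sweet_spot=5):
    """Return a unitless satisfaction score that peaks at `sweet_spot`."""
    score = 1.0 - ((n_choices - sweet_spot) / sweet_spot) ** 2
    return max(0.0, score)  # clamp so the score never goes negative

for n in (1, 3, 5, 8, 12):
    print(f"{n:>2} choices -> satisfaction {satisfaction(n):.2f}")
```

Running the loop shows the characteristic shape: low satisfaction with one option, a peak at the assumed sweet spot, and a decline back toward zero as the list grows.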

There’s an oft-cited study from 1956 theorizing that people can only process around seven items of information at a time. Beyond this “magic number,” consumers must resort to coping strategies to reach decisions, which can lead to confusion and unhappiness. A different study, from 2010, suggested that the “ideal” number is even lower (three to five options), particularly for young adults.

Although there’s a great deal of dispute about this specific hypothesis/number, more recent research supports the existence of “overchoice” and its effects on human psychology. The concept has informed a lot of modern theories on UX design and how to hit that sweet spot.

Q. In what specific ways have you seen “overchoice” create problems within the context of enterprise technology usage or adoption?

A. If we zero in on data, it can be incredibly difficult for any decision-maker to understand how they’re supposed to control, manage, and access information with so much choice.

For example, maybe you’re trying to figure out what labels to put on the information that’s residing on your laptop. You’re looking to create a new document, and you need to assign it a security label. If you’ve got 17 choices, and you’re not a cybersecurity expert, you’re taking a risk. That can be massively stressful for a user, not to mention dangerous for the business.

Going one level up to infrastructure—say, a storage array or hyperconverged infrastructure—having too many configuration options can be overwhelming. The same goes for online flows, even outside of configuring complex infrastructure. If a flow has too many steps and too many options within those steps, the abandonment rate tends to rise sharply.

With the APEX console, we spent a lot of time on an online configuration flow that presents users with the options most likely to match their business needs, while still providing real choice on the elements we know are important, such as the number of terabytes users need to support their workloads. Really, it boils down to making the effort to understand customer needs and serving them accordingly, rather than putting the onus on customers to decipher what is and isn’t relevant.

Q. How about on the individual user level more generally—how have you seen the “paradox of choice” manifest here?

A. Let’s say a company is trying to collect feedback, and there’s a dropdown list of 100 options that a user can choose from. We often see people picking within the first five. They focus on what their working memory can handle at the time and ignore the other options because there are simply too many of them. The downstream impacts of that can be huge, especially when a company is using that data to make decisions about their products or services.
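One common mitigation for the position bias described above is to randomize option order per respondent, so that the tendency to pick from the first few visible items is spread evenly across the whole list rather than inflating the same five options. A minimal sketch (the function name and seeding scheme are hypothetical, not from any particular survey tool):

```python
# Sketch: shuffle a long survey option list per respondent so that
# "pick from the first five" position bias doesn't concentrate on the
# same options in the aggregate data. Seeding per respondent keeps each
# person's ordering stable across page reloads.
import random

def randomized_options(options, respondent_seed=None):
    """Return a shuffled copy of `options`; the canonical list is untouched."""
    rng = random.Random(respondent_seed)
    shuffled = list(options)  # copy so the original order is preserved
    rng.shuffle(shuffled)
    return shuffled

options = [f"Option {i}" for i in range(1, 101)]
per_respondent = randomized_options(options, respondent_seed=42)
print(per_respondent[:5])  # the five options this respondent sees first
```

Randomization doesn’t shrink the list, of course; the deeper fix the interview points to is offering fewer, better-curated options in the first place.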

Q. What are some of the repercussions of inaction caused by too many choices?

A. On an enterprise level, it can lead to companies falling behind in their digital transformation. There can also be cybersecurity implications, as with the example of data labeling. The individual implications are crucial, too. If people can’t consume and understand something, they won’t adopt it. There’s a risk of creating a technological knowledge divide if people are too overwhelmed to make a decision, learn the technology, and move forward.

Q. How do you study this phenomenon or guard against it? For instance, what’s your process for creating informed UX studies?

A. Say we’re looking at configuring a new laptop or product… First, I’ll generate research questions: What do we really need to know, and what data do we need to address those questions? From there, I’ll pick the method, or methods, most appropriate for getting that kind of data. Often, in the online space, we start with a heuristic evaluation of the current experience. I look at the standard patterns that should be present in particular types of online interactions, then develop hypotheses about where users might struggle or which areas could be improved.

A lot of the time, I start with qualitative, or generative, methods, such as in-depth user interviews. I’ll develop a discussion guide aimed at answering some of those questions and providing data to test some of the hypotheses. Then I’ll recruit users, take them through the flow, and see where they struggle. I’ll have them think out loud and talk through what they’re experiencing. From there, I’ll synthesize and package that up to provide recommendations.

It’s so important that these processes exist, and for companies to have rigorous research practices in place. Otherwise, what they’re putting out there may be creating the data/choice conundrums you mention.

Q. How does this kind of research help companies improve transparency? And how does it tie back to user control?

A. Being highly transparent is incredibly important for creating trustworthiness—you want to be able to show users the information that is collected about them. But you don’t necessarily want to present that information as a choice. Making information available is different from adding choices to the process. Sometimes those things get mixed up: companies say, “Oh, we don’t want to create too many choices; therefore, we won’t show people information.” I firmly believe those two things must be separated. Ethical decision-making always requires transparency, but it can be achieved without burdening users with overchoice.

Ultimately, customers want choice and flexibility but without infinite options, right? We need to provide the most compelling sets of features and functionalities that our users need based on insights and findings from the research.