I’m taking a class on “Web Usability”, and our first assigned reading is The Design of Everyday Things by Don Norman. This is a very readable tour through design principles that can help us create devices and systems that are easy and even enjoyable to use.
The book is peppered with interesting examples of bizarre or cryptic designs. Norman seems particularly fond of talking about light switches (such a simple device, and yet so many are hard to use!). As a pilot, I also enjoyed the frequent examples he cited from the world of aviation, where a bad interface can mean the difference between life and death. However, he also says that his personal rule is to avoid criticizing unless he has a solution to offer. Now there’s a high bar!
Norman identifies “discoverability” (can you figure out what actions are possible?) and “understanding” (do you know what the controls/displays mean?) as key components of good design. He also emphasizes the importance of a user having a good “conceptual model” of the device – even if that model is inaccurate in a technical sense. A successful model is one that allows the user to operate the device successfully.
I also found his discussion of the balance between “knowledge in the head” (memory and learned skills) versus “knowledge in the world” (objects, signs, instructions) to be thought-provoking. It makes sense to try to strike a good balance between how much advance training/prep the user needs versus how much they’ll have to read/learn/absorb while using the device. Going too far in either direction makes things harder to operate.
One of the biggest takeaways for me was his encouragement to remove the concept of “error” from an interface. He points out that when we don’t understand something another human says, we don’t say “You made a speaking error.” Instead, we interact and try to figure out what meaning was intended. Similarly, devices (and computer programs) could shift from “error” feedback to help or guidance that aids the user in expressing their intent in the form the system needs. He suggests that we think of a user action as an approximation of what is desired, and help the user refine it. Great idea!
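As a small illustration of this idea (my own sketch, not from the book), here is a toy command interpreter in Python. The command list and messages are made up; the point is that unrecognized input produces a suggestion toward the user's likely intent rather than a bare "error":

```python
from difflib import get_close_matches

COMMANDS = ["open", "close", "save", "quit"]

def interpret(user_input: str) -> str:
    """Treat input as an approximation of intent, not as right/wrong."""
    token = user_input.strip().lower()
    if token in COMMANDS:
        return f"ok: {token}"
    # Instead of "error: unknown command", guide the user toward what
    # they probably meant.
    guesses = get_close_matches(token, COMMANDS, n=2, cutoff=0.6)
    if guesses:
        return "did you mean: " + " or ".join(guesses) + "?"
    # No close match: show what is possible (knowledge in the world).
    return "available commands: " + ", ".join(COMMANDS)
```

So `interpret("sav")` nudges the user toward `save` instead of rejecting the input outright.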
Chapter 5 is devoted to an analysis of errors: different types, different causes, and different remedies. I like the suggestion to treat errors as learning opportunities (for the user and for the designer); we can brainstorm ways that the error could be entirely precluded in the future. I will be on the lookout for ways to apply this in my ongoing flight training.
Some quotes I enjoyed or found insightful:
- “Machines require us to be precise and accurate, things we are not very good at.”
- “We have to accept human behavior the way it is, not the way we wish it to be.”
- “We use logic and reason after the fact, to justify our decisions to ourselves (to our conscious minds) and to others.”
- “How can the designer put knowledge in the device itself?”
- “Expert [users] minimize the need for conscious reasoning.”