(Papers are linked to their PDF downloads, if available.)
Usability evaluation considered harmful (some of the time) [abstract]
Authors: Saul Greenberg (University of Calgary) and Bill Buxton (Microsoft Research)
Abstract: Current practice in Human Computer Interaction as encouraged by educational institutes, academic review processes, and institutions with usability groups advocates usability evaluation as a critical part of every design process. This is for good reason: usability evaluation has a significant role to play when conditions warrant it. Yet evaluation can be ineffective and even harmful if naively done ‘by rule’ rather than ‘by thought’. If done during early stage design, it can mute creative ideas that do not conform to current interface norms. If done to test radical innovations, the many interface issues that would likely arise from an immature technology can quash what could have been an inspired vision. If done to validate an academic prototype, it may incorrectly suggest a design’s scientific worthiness rather than offer a meaningful critique of how it would be adopted and used in everyday practice. If done without regard to how cultures adopt technology over time, then today’s reluctant reactions by users will forestall tomorrow’s eager acceptance. The choice of evaluation methodology – if any – must arise from and be appropriate for the actual problem or research question under consideration.
Defending design decisions with usability evidence: a case study
Authors: Erin Friess (Carnegie Mellon University)
Abstract: This case study takes a close look at what novice designers discursively use as evidence to support design decisions. User-centered design has suggested that all design decisions should be made with the concern for the user at the forefront, and, ideally, this concern should be represented by findings discovered within user-centered research. However, the data from a 12-month longitudinal study suggest that although these novice designers are well versed in user-centered design theory, in practice they routinely do not use user-centered research findings to defend their design decisions. Instead, these novice designers use less definitive and more designer-centered forms of evidence. This move away from the user, though perhaps unintentional, suggests that design pedagogy may need to be re-evaluated to ensure that novice designers continue to adhere to the implications of user-centered research throughout the design process.
Using participants’ real data in usability testing: lessons learned [abstract]
Authors: Todd Zazelenchuk, Kari Sortland, Alex Genov, Sara Sazegari and Mark Keavney (Intuit, Inc.)
Abstract: In usability testing, we place great importance on authentic tasks, real users, and the appropriate fidelity of prototypes, considering them carefully in our efforts to simulate people’s real-life interactions with our products. We often place less importance on the data with which we ask participants to interact. Commonly, test data are fabricated, created for participants to imagine as their own. But relating to artificial data can be difficult for participants, and this difficulty can affect their behavior and ultimately call our research results into question. Incorporating users’ real data into your usability test requires additional time and effort, along with certain considerations, but it can lead to richer and more valid usability results.
Revisiting usability’s three key principles [abstract]
Authors: Gilbert Cockton (School of Computing and Technology)
Abstract: The foundations of much HCI research and practice were elaborated over 20 years ago as three key principles by Gould and Lewis: early focus on users and tasks; empirical measurement; and iterative design. Close reading of this seminal paper and subsequent versions indicates that these principles evolved, and that success in establishing them within software development involved a heady mix of power and destiny. As HCI’s fourth decade approaches, we re-examine the origins and status of Gould and Lewis’ principles, and argue that it is time to move on, not least because the role of the principles in reported case studies is unconvincing. Few, if any, examples of successful application of the first or second principles are offered, and examples of the third tell us little about the nature of successful iteration. More credible, better grounded and more appropriate principles are needed. We need not so much to start again, but to start for the first time, and argue from first principles for apt principles for designing.