Design research is tormented by rigor anxiety. There is a sense that design research is a bastard child of the social sciences; that what we do is a sloppy approximation of what anthropologists, behavioral scientists, and social or cognitive psychologists do correctly. We feel that maybe, because we must work so rapidly or with such limited resources, we do things in a slightly wrong way. Or maybe our design educations omitted some technical know-how or esoteric theoretical knowledge that real social scientists have.
This anxiety is compounded by a phobia of bias and a fetishization of anti-bias techniques purported to neutralize or counterbalance it. Scientific techniques are understood to be our best hope against our subjective biases, which distort objectivity by selectively noticing, ignoring and twisting what we think we perceive to play nice with our own preconceptions, preferences and cognitive predispositions.
I’d like to challenge this view — or at least the social-scientistic remedy part — and to point out that differences in purpose, funding and form of output make academic social science and design research as different as they are alike, and that each genre of research achieves rigor in its own way.
I’m already getting bored with this post, so I’ll make it really damn quick.
Most academic social science work is done with the goal of contributing new knowledge to the field. The work's ultimate form of output is a paper published in some kind of juried academic journal. The key success indicator is how many other social scientists find the paper valuable enough to cite it in their own published papers. But for any of this to happen, the knowledge must be defensible, if not unassailable. Otherwise, the paper is less likely to be selected and published. If it is published, it will meet even more challenges, as other academics test it and attempt to discredit it with their own critiques or research. It is a high-stakes game, and the game is played in single shots. An academic must be rigorous to make the work stand, and also to show that the work deserves attention, so it makes a lot of sense to take time and do everything possible to remove doubts, uncertainties and soft spots vulnerable to attack.
Let’s call this kind of rigor “single-shot rigor”.
Designers, on the other hand, are often forced to show results days into their research. Stakeholders are impatient to see progress and evidence that the work will produce value. Others in the organization clamor to get something useful as early as possible. Directional truth is sometimes very useful, especially when an organization's general direction is in question. For design researchers, usefulness matters most, and unassailability is valuable only to the degree that the research will actually be assailed at a particular point in its lifecycle. But that point is not single-shot. There will be more points, not only in the research, but also in whatever work the research informs. Everything is, as designers say, "wet clay", moldable and adaptable, open to further learning as it is applied. (To extend the clay analogy, academic social sciences fire their work in the kiln of publication, and if it is pressed too far, it shatters.) Design's clay hardens only when the final product is released, and even then only for some kinds of products (like material goods); many stay pliable after release (like software and services).
But also, because design research is fast and relatively cheap, it can be done iteratively, with each cycle informing a different stage of the design's development, each building on and stress-testing the previous iterations. This means that any misunderstanding or oversight in one cycle of research will be discovered in a future cycle. For instance, if the need of a user or customer is misinterpreted during foundational research (research used to help teams understand the people and contexts where a design intervention might be useful), then when those insights are applied in the design of generative research (research used to produce innovative concepts for design interventions) or evaluative research (research used to assess the usefulness and desirability of design interventions), the omissions and misconceptions will be brought to light. Through successive cycles of learning and application, each cycle slightly less open-ended and more formally exact than the last, the research gets more and more complete, specific and certain.
So, let’s call design research’s rigor “iterative rigor”.
Given infinite time and resources, perhaps single-shot rigor could have a certain kind of value, but that time and those resources might be more wisely spent on additional iterations. More importantly, in the early stages of design research, where research is used to inspire intuitive leaps into unknown possibilities, premature rigor can work against innovation by closing off the intuitive hunches, reckless speculation and informed imagination that make innovation possible. Here, trading possible opportunities for certainty is unwise.
Excellent post. It sounds like an interesting variation on waterfall vs. agile. Following BDUF, you could call it BRUF (Big Rigor Up Front) vs. iterative rigor.
Or to be more Peircean, it’s the perennial tension between uberty and security.