All posts by anomalogue

Explaining away

Worldviews include within them accounts of the alien worldviews held by others. These accounts sometimes also include reasons why those alien worldviews are invalid and do not require consideration or understanding.

Such invalidating accounts protect one’s worldview from the consequences of understanding rival worldviews and experiencing their validity. It is as if worldviews have a life of their own, which they preserve as biological organisms do: protecting their outer skin, taking in only what they can digest and incorporate, and repelling everything else.

*

Monism, the belief that there is a singular and ultimate truth to be found, inclines people to assume that if something appears self-evidently true, then whatever appears to conflict with it is necessarily false. This is the casual and mostly unconscious tendency of people who have never experienced a shift in worldviews. But those who have experienced a single shift are the fiercest adherents of monism, because they’ve experienced this shift as a conversion from a world of illusion to one of overwhelming truth, which is taken as a discovery of the true world. This discovery is not experienced as the acquisition of new facts about the world, but as a transfiguration of the world itself. The experience is so deep and so dramatic (and pleasant) that it often fails to occur to the convert that the process could occur again, re-transfiguring the transfigured, so the convert fails to look for clues that this is the case. If it does occur to the convert, another conversion is likely: from monism to pluralism.

Pluralism lives on practical terms with the properties of worldviews — the fact that they have “horizons” of intelligibility (which can be characterized as the set of questions the worldview knows how to ask), that they project specific patterns of relevance and irrelevance onto phenomena and fact, that the perspective by which the worldview sees always appears absolute and self-evidently right, and most importantly that worldviews naturally and perhaps inevitably generate misunderstandings which can only be detected with effort.

Consequently, a pluralist always harbors a certain amount of suspicion even toward pluralism, which inclines a pluralist to respect even monistic views, and to attempt to learn from them. But again, pluralism is practical, which means it lives on terms with reality as it experiences it, with the understanding that the surprise of transcendence is a permanent possibility, and that there is no way to predict when such events will occur and what will result from them. Pluralism, unlike skepticism, doesn’t throw up its hands, saying “what can I know?” It doesn’t think of learning as a means to the end of final knowledge. (Arendt identified orientation toward means-and-ends as belonging to the middle stratum of active life, which she called “work”, whose primary activity is the fabrication of artifacts. The stratum above work is “action”, the realm of politics which both presupposes and preserves pluralistic conditions. See Arendt quote below.) What matters, rather, is the desire for particular kinds of knowledge, which signals the next intellectual development, both for individuals and groups of people.

*

Arendt, from The Human Condition:

With the term vita activa, I propose to designate three fundamental human activities: labor, work, and action. They are fundamental because each corresponds to one of the basic conditions under which life on earth has been given to man.

Labor is the activity which corresponds to the biological process of the human body, whose spontaneous growth, metabolism, and eventual decay are bound to the vital necessities produced and fed into the life process by labor. The human condition of labor is life itself.

Work is the activity which corresponds to the unnaturalness of human existence, which is not imbedded in, and whose mortality is not compensated by, the species’ ever-recurring life cycle. Work provides an “artificial” world of things, distinctly different from all natural surroundings. Within its borders each individual life is housed, while this world itself is meant to outlast and transcend them all. The human condition of work is worldliness.

Action, the only activity that goes on directly between men without the intermediary of things or matter, corresponds to the human condition of plurality, to the fact that men, not Man, live on the earth and inhabit the world. While all aspects of the human condition are somehow related to politics, this plurality is specifically the condition — not only the conditio sine qua non, but the conditio per quam — of all political life. … Action would be an unnecessary luxury, a capricious interference with general laws of behavior, if men were endlessly reproducible repetitions of the same model, whose nature or essence was the same for all and as predictable as the nature or essence of any other thing. Plurality is the condition of human action because we are all the same, that is, human, in such a way that nobody is ever the same as anyone else who ever lived, lives, or will live.

All three activities and their corresponding conditions are intimately connected with the most general condition of human existence: birth and death, natality and mortality. Labor assures not only individual survival, but the life of the species. Work and its product, the human artifact, bestow a measure of permanence and durability upon the futility of mortal life and the fleeting character of human time. Action, in so far as it engages in founding and preserving political bodies, creates the condition for remembrance, that is, for history. Labor and work, as well as action, are also rooted in natality in so far as they have the task to provide and preserve the world for, to foresee and reckon with, the constant influx of newcomers who are born into the world as strangers. However, of the three, action has the closest connection with the human condition of natality; the new beginning inherent in birth can make itself felt in the world only because the newcomer possesses the capacity of beginning something anew, that is, of acting. In this sense of initiative, an element of action, and therefore of natality, is inherent in all human activities. Moreover, since action is the political activity par excellence, natality, and not mortality, may be the central category of political, as distinguished from metaphysical, thought.

Generative versus informative (again)

Re-summarization of an old idea:

Both pre-artifact and post-artifact research turn up two kinds of findings: facts and insights.

Facts are observational data: the reporting of attributes and behaviors, from the perspective of the observer, without anthropological thickness. Interpretation is left to the intuition of the observer.

Insights, on the other hand, uncover the perspective of those being researched, and yield new modes of interpretation, new fields of relevance and significance: new ways to understand what otherwise appears self-evident.

Most UX professionals call all pre-artifact research “generative,” but the generative value of pre-artifact research lies in the insights it turns up, not the facts it gathers. The facts are valuable, and do guide the design, but the facts themselves do not inspire innovation. The factual dimension of pre-artifact research is better characterized as “informative” research. And likewise, post-artifact research, known as “evaluative,” tends to focus on the suitability of the artifact — a sort of QA process. But post-artifact research is also a test of a team’s understanding of its users. Every tested artifact is a hypothesis: “If I understand you, this design will make sense to you, be valuable to you, speak to what you care about, and resonate with you.”

Both are needed, but due to the ontic (objective, thingly) orientation of the average non-philosophical mind, which predominates in most business settings, only the factual is recognized. “Insight” tends to be dismissed as “bullshit,” “fluff,” and the like, or it is reduced to synonymity with mere fact.

And for this very reason a lot of what has been passed off as “generative” research has in fact been nothing but informative research, which is experienced by designers as “dry”, and which has done nothing to inspire innovation.

This does much to explain the current trend of dismissing generative research as passé. Nonetheless, to anyone experienced in doing or consuming real generative research, this whole meme is nothing more than the opining of the ignorant to one another. People never show their blindness quite as starkly as when they parade their cynicism and tell “the Emperor wears no clothes” stories.

World-alienation

One last passage from Hannah Arendt’s Between Past and Future:

The modern age, with its growing world-alienation, has led to a situation where man, wherever he goes, encounters only himself. All the processes of the earth and the universe have revealed themselves either as man-made or as potentially man-made. These processes, after having devoured, as it were, the solid objectivity of the given, ended by rendering meaningless the one over-all process which originally was conceived in order to give meaning to them, and to act, so to speak, as the eternal time-space into which they could all flow and thus be rid of their mutual conflicts and exclusiveness. This is what happened to our concept of history, as it happened to our concept of nature. In the situation of radical world-alienation, neither history nor nature is at all conceivable. This twofold loss of the world — the loss of nature and the loss of human artifice in the widest sense, which would include all history — has left behind it a society of men who, without a common world which would at once relate and separate them, either live in desperate lonely separation or are pressed together into a mass. For a mass-society is nothing more than that kind of organized living which automatically establishes itself among human beings who are still related to one another but have lost the world once common to all of them.

Reality-creation community

Another passage from Hannah Arendt’s Between Past and Future:

In my studies of totalitarianism I tried to show that the totalitarian phenomenon, with its striking anti-utilitarian traits and its strange disregard for factuality, is based in the last analysis on the conviction that everything is possible — and not just permitted, morally or otherwise, as was the case with early nihilism. The totalitarian systems tend to demonstrate that action can be based on any hypothesis and that, in the course of consistently guided action, the particular hypothesis will become true, will become actual, factual reality. The assumption which underlies consistent action can be as mad as it pleases; it will always end in producing facts which are then “objectively” true. What was originally nothing but a hypothesis, to be proved or disproved by actual facts, will in the course of consistent action always turn into a fact, never to be disproved. In other words, the axiom from which the deduction is started does not need to be, as traditional metaphysics and logic supposed, a self-evident truth; it does not have to tally at all with the facts as given in the objective world at the moment the action starts; the process of action, if it is consistent, will proceed to create a world in which the assumption becomes axiomatic and self-evident.

Arendt is clearly someone who would have been a member of the reality-based community.

Meaning, means and ends

From Hannah Arendt’s Between Past and Future:

Marx’s notion of “making history” had an influence far beyond the circle of convinced Marxists or determined revolutionaries. … For Vico, as later for Hegel, the importance of the concept of history was primarily theoretical. It never occurred to either of them to apply this concept directly by using it as a principle of action. Truth they conceived of as being revealed to the contemplative, backward-directed glance of the historian, who, by being able to see the process as a whole, is in a position to overlook the “narrow aims” of acting men, concentrating instead on the “higher aims” that realize themselves behind their backs (Vico). Marx, on the other hand, combined this notion of history with the teleological political philosophies of the earlier stages of the modern age, so that in his thought the “higher aims” — which according to the philosophers of history revealed themselves only to the backward glance of the historian and philosopher — could become intended aims of political action. …the age-old identification of action with making and fabricating was supplemented and perfected, as it were, through identifying the contemplative gaze of the historian with the contemplation of the model (the eidos or “shape” from which Plato had derived his “ideas”) that guides the craftsmen and precedes all making. And the danger of these combinations did not lie in making immanent what was formerly transcendent, as is often alleged, as though Marx attempted to establish on earth a paradise formerly located in the hereafter. 
The danger of transforming the unknown and unknowable “higher aims” into planned and willed intentions was that meaning and meaningfulness were transformed into ends — which is what happened when Marx took the Hegelian meaning of all history — the progressive unfolding and actualization of the idea of Freedom — to be an end of human action, and when he furthermore, in accordance with tradition, viewed this ultimate “end” as the end-product of a manufacturing process. But neither freedom nor any other meaning can ever be the product of a human activity in the sense in which the table is clearly the end-product of the carpenter’s activity.

The growing meaninglessness of the modern world is perhaps nowhere more clearly foreshadowed than in this identification of meaning and end. Meaning, which can never be the aim of action and yet, inevitably, will rise out of human deeds after the action itself has come to an end, was now pursued with the same machinery of intentions and of organized means as were the particular direct aims of concrete action — with the result that it was as though meaning itself had departed from the world of men and men were left with nothing but an unending chain of purposes in whose progress the meaningfulness of all past achievements was constantly canceled out by future goals and intentions. It is as though men were stricken suddenly blind to fundamental distinctions such as the distinction between meaning and end, between the general and the particular, or, grammatically speaking, the distinction between “for the sake of…” and “in order to…” (as though the carpenter, for instance, forgot that only his particular acts in making a table are performed in the mode of “in order to,” but that his whole life as a carpenter is ruled by something quite different, namely an encompassing notion “for the sake of” which he became a carpenter in the first place). And the moment such distinctions are forgotten and meanings are degraded into ends, it follows that ends themselves are no longer safe because the distinction between means and ends is no longer understood, so that finally all ends turn and are degraded into means.

In this version of deriving politics from history, or rather, political conscience from historical consciousness — by no means restricted to Marx in particular, or even to pragmatism in general — we can easily detect the age-old attempt to escape from the frustrations and fragility of human action by construing it in the image of making.

It seems obvious to me that most people — or at least most people one is likely to encounter in a corporate environment — think exclusively in terms of fabrication.

Tweaking our way to greatness

Mere competence cannot surpass mediocrity, no matter how perfectly it achieves its goals.

This is because mediocrity conceives of excellence in negative terms: as an absence of flaws.

Excellence, however, is a positive matter, and it consists in the presence of something valuable.

*

The frank display of flaws can be a way to flaunt excellence.

The excellent, even when deeply problematic or grossly distorted, is always preferable to those things about which nothing either good or bad can be said.

*

Many romantic relationships persist unhappily for the sole reason that neither party has produced a flaw sufficiently terrible to justify ending it.

Thwarted fault-finding produces even deeper contempt than successful fault-finding.

*

Mere competence results from seeing only the commonplace, commonsense questions.

The questions are barely even noticed. Usually they are simply taken to be self-evident — implied by reality itself.

All effort is put into re-answering the questions a little better than last time. With each recitation, the answer is tweaked, refined, polished, paraphrased, flavored or garnished a little differently — but the answer is substantially the same, which is why it finds easy recognition.

*

Innovation doesn’t come from inventing better answers; it comes from discovering better questions.

Few people seem to know how to discover new questions, and this has much to do with the aversion most people have to the conditions necessary for finding them. People go about things in ways that actively prevent new questions from arising. Everything presupposes the validity of the old questions, and reinforces re-asking and expert re-telling.

We don’t actually love the old questions and we’re not really that enamored with the answers we produce. We only like the predictability of it all.

But is it that we hate new questions? Actually, no. As a matter of fact, once a new question is posed clearly, people love it. The essence of inspiration is feeling the existence of a new question.

What people really hate is the space between the old and new question — the space called “perplexity,” that condition where we are deeply bothered and disoriented by something we can’t really point to or explain. We cannot even orient ourselves enough to ask a question.

This is the space Wittgenstein claimed for philosophy: “A philosophical problem has the form: ‘I don’t know my way about.’”

*

How do we enter perplexity? By conversing with others and allowing them to teach us how their understanding differs from our own. What they teach us is how to ask different questions than we’d ordinarily think to ask. But before we can hear the questions they are asking — usually tacitly asking — we must quiet our own questions. (Interrogations are only good for getting answers out of people.)

How do we avoid perplexity? By not allowing the other to speak. Instead we observe their behaviors, look for patterns, impose different conditions and look for changes. We may feel puzzled by the behaviors we see, but we can answer this puzzlement by trying out one answer after another until one turns out good enough, like a child trying to hit upon the correct multiple choice answer to a math problem without really understanding the material.

*

It appears that generative research has gone out of style. There’s a widespread belief that assembling a Frankenstein of best-practice parts and then using analytics to detect and correct all the flaws will somehow produce the same results, but more cheaply and reliably — and less harrowingly.

But, here’s the question: Can anyone produce even one example where tweaking transformed something boring into something compelling?

And then consider how many times you’ve watched something compelling tweaked to mediocrity.

Behavior tweaking

In general, people’s interest in one another is practical and behavioral. The minimum knowledge required to elicit desired behaviors and to prevent undesired behaviors from occurring is about all people want.

If we feel we have to understand a person’s experiences to accomplish this, we will make the effort, but otherwise, we will avoid these kinds of questions, because understanding experiences requires a kind of involvement in the other’s perspective resembling immersion in literature, where one’s own worldview is temporarily suspended and replaced with another. And sometimes we don’t come back, fully. Something of the literary world stays within our own, and we see things differently. An understander stands a good chance of being permanently and sometimes profoundly changed by such modes of understanding.

What most people prefer is the kind of relationship scientists have toward matter. The behaviors of objects are observed in various conditions from a distance, and the knowledge is factual: when this happens, this follows. The matter doesn’t explain itself to the observer: the observer does all the explaining. Whatever intentional “thickness” is added to the behaviors is taken from the observer’s own stock of motives. This kind of objective knowledge doesn’t change us or how we see the world; it changes only our opinions about the things we observe.

For a brief moment, the business world felt it needed to understand other people as speaking subjects as opposed to behaving objects. And for a brief moment it appeared that business itself could be changed through the experience of this very new kind of understanding. But now analytics has developed to such a degree that businesses can return back to their comfort zone of objectivity, and tweak human behaviors through tweaking designs, until they elicit the desired behaviors.

Parental authority

Parental authority stands on two conditions: 1) the parent’s actual possession of superior knowledge of the child’s needs, and 2) the parent’s intention to apply that knowledge to benefit the child.

Parents sometimes use coercion outside of parental authority, often for the sake of the smooth operation of the household. This in itself is not illegitimate. The problems start when coercion is confused with authority. The primary perpetrators of this are those who actually do not know the difference, and therefore lack authority.

Why qualitative research?

Quantitative research methods (as valuable as they are) can never replace interviews and ethnographic research. Despite what many UXers think, the essential difference between ethnographic research and other forms of qualitative research is not merely that it observes behavior in context, but rather, as Spradley notes in The Ethnographic Interview, that in ethnographic research the person being researched plays a role in the research quite different from that of other methods: the role of informant (as opposed to subject, respondent, actor, etc.). An informant doesn’t merely provide answers to set questions or exhibit observable behavior. An informant teaches the researcher, and helps establish the questions the researcher ought to attempt to understand — questions the researcher might never have otherwise thought to ask. An informant is far more empowered to surprise, to reframe the research, and to change the way the researcher thinks. In ethnographic research the researcher is far less distanced and intellectually insulated from the “object” of study, and is exposed to a very real risk of transformative insight.

This attitude toward human understanding goes beyond method, and even beyond theory. It implies an ethical stance, because it touches on the question of what a human being is, what constitutes understanding of a human being, and finally — how ought human beings regard one another and relate to one another.

*

The passage that triggered this outburst, from Hannah Arendt’s The Human Condition:

Action and speech are so closely related because the primordial and specifically human act must at the same time contain the answer to the question asked of every newcomer: “Who are you?” This disclosure of who somebody is, is implicit in both his words and his deeds; yet obviously the affinity between speech and revelation is much closer than that between action and revelation, {This is the reason why Plato says that lexis (“speech”) adheres more closely to truth than praxis.} just as the affinity between action and beginning is closer than that between speech and beginning, although many, and even most acts, are performed in the manner of speech. Without the accompaniment of speech, at any rate, action would not only lose its revelatory character, but, and by the same token, it would lose its subject, as it were; not acting men but performing robots would achieve what, humanly speaking, would remain incomprehensible. Speechless action would no longer be action because there would no longer be an actor, and the actor, the doer of deeds, is possible only if he is at the same time the speaker of words. The action he begins is humanly disclosed by the word, and though his deed can be perceived in its brute physical appearance without verbal accompaniment, it becomes relevant only through the spoken word in which he identifies himself as the actor, announcing what he does, has done, and intends to do.

*

The dream of quantitative research rendering qualitative research obsolete might be one more instance of an age-old fantasy: a world of people who are seen and not heard, who obey our predictions and commands, to whom we can dictate terms. Such beings cannot remind us of the difference between reality itself and one’s own conceptions of it — and they leave the mind in peace to be “its own place, and in itself can make a Heaven of Hell.” Hell is not other people, per se. It is speaking people showing us what we’d rather not know, which can strip us of what we knew but can no longer believe.

*

(Maybe we lack faith in our capacity to recover from loss of faith?)

Useless (or worse)

When chaos is experienced, a failure of reason has already occurred. In chaos we encounter realities our reason is not equipped to order and make sense of. This is the experience of perplexity, where we relive the horror of birth.

The only people in the world perverse enough to find meaning in such meaninglessness are philosophers. Wittgenstein said it best: “A philosophical problem has the form: I don’t know my way about.”

*

We prefer to believe the world is discovered bit by accumulated bit in a vacuum of space and knowledge. We want to believe in a world that is created ex nihilo. What we have is established, and what isn’t is nothing.

We hate to believe in a world that is articulated from chaos, because we hate the consequence: the order we have lent to the world which has made it familiar and predictable could suddenly recede and shock us with raw alienness.

*

It is this possibility — that the world can be revealed as strange — that makes people hate their neighbor. It is the neighbor, with his strange views, peculiar habits, and outlandish tastes, who jointly holds the potential to defamiliarize the world. The potential, though, is only actualized voluntarily by ourselves. Each person holds the power either to open the door to the neighbor, or to bar it. If the neighbor is invited in, if his views are seriously entertained, the two gathered in such a spirit of hospitality and truth are in a position to recognize that reality and our idea of reality are not identical. In some deeply disturbing and inexpressible way, reality transcends idea. Without the disruption of the neighbor, idea eclipses what is beyond idea, and becomes idol.

But the door can be barred. We are free to abide in the mind. “The mind is its own place, and in itself can make a Heaven of Hell.” By withholding the status of “neighbor” from all but the like-minded — those who ditto our opinions, who agree with us that the details of reality that appear to contradict our views (or more subtly the exclusive validity of our views) are irrelevant (if not outright deceptions), who share our antipathy toward our non-neighbors and agree with us that entertaining their ideas is fruitless at best (and possibly corrupting) — we find willing partners in reducing the world to pure idea. The impurity rejected is that of reality, which transcends mere idea.

*

We stabilize our sense of reality through a variety of intertwined methods. One is to observe and describe the world to ourselves. Another is to reliably anticipate or predict events, or, even better, to influence or control them. But perhaps the most important method for creating a solid sense of reality is to find agreement with others. This last method can compensate for the absence of the others.

*

[Solipsism] “is rare in individuals — but in groups, parties, nations, and ages it is the rule.”

*

When a group agrees with itself that whatever appears to be an anomaly is mere noise, or error, or deception, or irrelevance, it is able to avoid (or at least postpone) confrontation with anomalies, which are the sparks of chaos, the pinholes in our knowledge. Anomalies remind us how much more there is to things than we possess as individuals, or as members of a particular group.

It is easier to love the reality we have made for ourselves — our own sense of truth — than it is to love reality. Reality challenges us, makes claims on us, changes us. If we think of ourselves as discrete, unchanging, self-consistent beings, reality threatens our mortality. If we think of ourselves as connected, evolving, expanding creatures, reality offers us perpetual natality.

*

We hate the possibility of the situation that requires the aid of philosophy, so we deny that possibility and we deny the use of philosophy. Philosophy is a waste of time at best, and most likely corrupting.

But perhaps there’s some validity to the suspicion. Just as generals thrive on outbreaks of war, and doctors on outbreaks of disease, philosophers thrive on outbreaks of disillusionment.

The slipperiest slope

The slippery slope argument is the slipperiest slope. In fact, it is the slipperiness itself, a universal lubricant that creates a friction-free abstract world where the slightest tilt automatically dumps whatever sits on it into an abyss of catastrophic consequences. The “friction” it removes is that of human judgment and responsibility — our ability to decide to change course.

Supra-individual mind

Every thought thinkable by an individual mind has already been thought. Future thoughts will come from people who know how to think collaboratively beyond their own individual capacity as responsible participants in a supra-individual mind.

This idea should not be mistaken for common “collectivism”. It is the very opposite of the mob mentality, where each individual is reduced to what all human beings have in common, becoming roughly identical, and behaving according to animal tribal instinct. Supra-individual thinking makes use of intellectual differences as well as commonalities. It is also different from hierarchical team thinking, where one mind understands the problem completely and then enlists the help of others to manage and execute. Supra-individual thinking means more than one person is required to participate if an idea is to be fully understood, so no one person has the “vision” in its entirety. Supra-individual thinking is also different from the kind of thinking that comes from (relatively) homogeneous groups, where once an idea is conceived by one member of the group, all are instantly and effortlessly able to grasp the idea, because arriving at the idea was simply a matter of quickness or luck. Supra-individual thinking arrives at agreements — not agreements in which each person holds an identical conception and opinion, but agreements in which each person holds conceptions and opinions compatible with the others’ in guiding collaborative action. And finally, supra-individual thinking is not a division of labor among experts in different disciplines. The coherence is not mere systematization of separate black-box parts, but organic, conceptual coherence. Supra-individual thinking is unified intuitively and tacit-practically as well as rationally.

In collaborative thought, the group somehow comes to know something coherently, which is only later completely understood by some or all of the group, but in the meantime is effectively applied to real-world problems.

*

Supra-individual mind is similar to common sense, in the meaning of “the sense of reality arising from the five senses perceiving together”. It’s the blind men and the elephant story, except with temperamental/psychological differences substituted for circumstantial ones.

*

Supra-individual mind is the concrete actualization of pluralism. It begins with tolerance and skepticism, but then moves far beyond them.

Geertz on irony

Geertz, from his essay “Thinking as a Moral Act”:

“Irony rests, of course, on a perception of the way in which reality derides merely human views of it, reduces grand attitudes and large hopes to self-mockery. The common forms of it are familiar enough. In dramatic irony, deflation results from the contrast between what the character perceives the situation to be and what the audience knows it to be; in historical irony, from the inconsistency between the intentions of sovereign personages and the natural outcomes of actions proceeding from those intentions. Literary irony rests on a momentary conspiracy of author and reader against the stupidities and self-deceptions of the everyday world; Socratic, or pedagogical, irony rests on intellectual dissembling in order to parody intellectual pretension.”

It seems to me that systems thinking — at least thinking about systems in which the thinker is a participant — might require a certain degree of irony. Our experience of being caught up in a system is one thing, but what is required to adjust or change the system is another — and the connection is rarely obvious. That experience is an intrinsic part of the workings of many systems, particularly management systems.

Limits of the explicit

Explicit forms of understanding and communication (explicit truth) can represent only some aspects of reality. In conflicts between rationalism and irrationalism, enlightenment and romantic ideals, suits and creatives, what is at stake is the leftover reality — its nature, its unity and/or multiplicity, how/whether truth can be established/shared, and how it relates to those realms of reality that can be known and spoken of explicitly.

My own hunch is that the non-explicit aspects of reality are precisely those that matter most to us, and the near-universal requirement that things be known and spoken of in an explicit mode acts as a filter that systematically excludes the non-explicit from consideration in most collective endeavors.

I also think the non-explicit aspects of reality are precisely those that most need to be agreed upon and shared, but this agreement and sharing is different from agreement on fact or sharing a belief in the validity of an argument.

Conserving, simplifying, forgetting

When a person calls himself a “conservative” what precisely is it that is conserved? Is it ideas? Do conservatives wish to keep valued ideas intact and pure?

Or is it a wish to conserve our limited store of moral energy? Despite what we would like to believe, we cannot just will this energy into existence, because will itself is constituted of this energy.

And even if energy were unlimited, time is indisputably limited. If we expend most of our energy and time sifting through a near-infinite number of details, then wrestling to organize the mess into something clear and cohesive, wouldn’t the result of this effort be so complicated and unwieldy that our efforts would be hopelessly encumbered (not to mention pleasureless)?

It seems our choice lies somewhere on a continuum between “analysis paralysis” in the face of innumerable disorganized facts on one hand, and decisive, energetic action based on simplification verging on willful ignorance on the other. To put it in Yeats’ words, “The best lack all conviction, while the worst / Are full of passionate intensity.” I think this tendency grows more and more exaggerated as the old fundamental thought-structures of a culture begin to give out under the pressures of new social conditions, and new underdeveloped and overcomplicated ones vie (lamely) to replace them.

*

Does change resulting from consideration of new and multiple perspectives necessarily mean adding to and complicating our idea-world, making it increasingly unlivable? Probably at first. But thinking deeply can also have a simplifying effect, though this simplification itself takes time and energy, and requires modes of thinking many people find even more uncomfortable than dealing with baroquely-rehacked, elaborately epicycled and recycled concepts.

Perhaps it is not over-simplification that makes ideologies so damaging to the world — since, after all, all thinking and all abstraction involves selective forgetting and remembering (what we call discerning relevance and discovering generalities) — but rather that the simplifications take into account only what one group or another considers relevant.

Shibbolethargy

Shibbolethargy: A form of intellectual laziness which uses the tools of thought (ideas, concepts, arguments and symbols) to create an appearance of rigorous thought, when in fact the true aim is to signal one’s membership in some particular tribe (and consequently unconditional opposition to other tribes).

At the root of shibbolethargy is the desire to evaluate ideas and actions ad hominem rather than on their own merits, while appearing to rely on principle and reason.

The attitude a shibbolethargic critic strikes is this: when confronted by an uncomfortable, semi-/un-comprehended idea, the most efficient means to evaluate it is to trace it back to the root, to see from what ground the idea has grown (rather than take the opposite course — which requires more trust, time and work — to judge the tree by its fruits). The root of the idea is the believer. If the believer is found to be a victim/perpetrator of some pernicious, delusional ideology, then by extension the idea is contaminated, and all efforts to understand the idea will at best be unfruitful and at worst can result in ideological contamination.

In the end, though many words may be used, many elaborate arguments memorized and recited, and many stories told, both anecdotal and historical, no thinking has been done and no new understanding has been found. The old understanding is defended and preserved, not through understanding and responding to other ideas, but through proving (solely to the satisfaction of the defender) that understanding and responding to other ideas is unnecessary, and probably dangerous to boot. In other words, one proves oneself unwilling to see why one ought to think something one has not already thought.

Decision-making scenarios

Scenario 1 (thesis)

A: “Maybe this will work…”

B: “Before we commit the effort, can you explain how it will work, assuming it might, keeping in mind we have limited time and money?”

A: “I think so. Give me a day.”

B: “We don’t have a day to spare on something this speculative. Let’s come up with something a little more baked.”

… and [eventually, inevitably]

B: “So, what are the best practices?”

Scenario 2 (antithesis)

A: “I have a hunch this will work. Let’s go with it.”

B: “Can you explain how it will work?”

A: “Trust my professional judgment. My talent, training, experience, [role, title, awards, track record, accomplishments, etc.] distinguish my hunches.”

Scenario 3 (synthesis)

A: “I have a hunch this might work. Hang on.” … “Whoa. It did work. Look at that.”

B: “How in the world did that work?”

A: “I don’t know. Let’s try to figure out why.”

Shhhhhhh

Here’s what I learned from the Pragmatists (mostly via Richard J. Bernstein, who has probably had a deeper and more practical impact on how I think, work and live than any other author I’ve read): An awful lot of what we do is done under the guidance of tacit know-how.

After we complete an action we are sometimes able to go back and account for what we did, describing the why, how and what of it — and sometimes our descriptions are even accurate. But to assume — as we nearly always do — that this sort of self-account is in some way identical to what brought these actions about, or even what guided them after they began, is an intellectual habit that only occasionally leads us to understanding. Many such self-accounts are only better-informed explanations of one’s own observed behavior, not reports on the actual intellectual process that produced it.

To explain this essential thoughtlessness in terms of “unconscious thoughts” that guide our behavior as conscious ones supposedly do in lucid action is to use a superstitious shim-concept to maintain the mental/physical cause-and-effect framework in the face of contrary evidence. I do believe in unconscious ideas that guide our thoughts and actions (in fact, I’m attempting to expose one right here), but I do not think they take the form of undetected opinions or theories. Rather, they take the form of intellectual habits. They’re moves we just make with our minds… tacitly. Often we can find an “assumption” consequent to this habitual move and treat that assumption as causing it, but this is itself an example of the habit at work. It is not the assumption that there is a cause that makes us look for the cause; it is our habitual way of approaching such problems that makes us look for an undetected opinion at the root of our behaviors. We don’t know what else to do. It’s all we know how to do.

*

I’m not saying all or even most behavior is tacit, but I do believe much of it is, and particularly when we are having positive experiences. We generally enjoy behaving instinctually, intuitively and habitually.

*

Problems arise mainly when one instinct or intuition or habit interferes with the movements of another. It is at these times we must look into what we are doing and see what is unchangeable, what is variable and what our options are in reconciling the whole tacit mess. The intellectual habit of mental-cause-physical-effect thinking is an example of such a situation. Behind a zillion little hassles that theoretically aren’t so big — no bigger than a mosquito buzzing about your ears — is the assumption that we can just insert verbal interruptions into our stream of mental instructions that govern our daily doings without harming these doings. As I’ve said before, I do think some temperaments operate this way (for instance, temperaments common among administrators and project managers), but for other temperaments such assumptions are at best wrong, and at worst lead to practices that interfere with their effectiveness.

Software design and business processes guided by this habit of thought tend to be sufficient for verbal thinkers accustomed to issuing themselves instructions and executing them, but clunky, graceless and obtrusive to those who need to immerse themselves in activity.

*

It is possible that the popular “think-aloud” technique in design research is nothing more than a methodology founded on a leading question: “What were you thinking?” A better question would be: “Were you thinking?”

*

The upshot of all this: we need to learn how the various forms of tacit know-how work, how to research them, how to represent them in a way that does not instantly falsify them, and how to respond to them. And to add one more potentially controversial item to the list: how to distinguish consequential and valuable findings documentation from mere thud-fodder, which does nothing to improve experiences but only reinforces the psychological delusions of our times. If research can shed this inheritance of its academic legacy (the notion that the proper output of research is necessarily a publication, rather than a direct adjustment of action), research can take a leaner, less obtrusively linear role in the design process.