All posts by anomalogue

Redescription versus renarration

Often, when I try to talk about codesigning philosophies, people compare (or even equate) what I’m doing with the kind of thing therapists, new age counselors and life coaches do when they help people “tell themselves different stories”. Byron Katie is a notable example of this kind of “renarration” therapist.

This is a fair comparison. What I am doing is similar in some important ways. Both my approach and renarration are the basic concept of social constructionism applied to individual life. It is inspiring stuff. The promise of change, freedom and self-determination in social constructionism was one of the most intoxicating active ingredients in postmodern thought. It liberates us from having “hard truths” forced on us by convention or by aggressive, clever, fact-armed polemicists, who think that stripping a person of legitimate objections to their arguments is the same thing as persuasion. We’ve learned that any truth, even a self-evident one, can, in principle, be interrogated and deconstructed to smithereens, clearing space for other truths. The process of deconstruction exposes the fact that these truths were constructed — by someone. So, why not construct our own custom truths? Renarration is individualized social constructionism, packaged for popular consumption.

None of this is wrong, even in popularized, vulgarized form. But I think it is not right enough.

It is not right enough in the way that the claim that deposing Saddam Hussein would bring freedom to Iraq was not right enough. Removing what is bad, unstable or oppressive is a necessary condition of replacing it with something better, but it is not sufficient. As some of us learned from the hard lesson of Iraq, something better must be constructed.

Further, the replacement must be constructed well and broadly accepted by the populace, or it will be factionalized, unstable and non-functional. It may even collapse and be reconquered by the old power. We cannot just go in, clear ground, and replace what was bad with something we imagine should be great.

Likewise, removing beliefs that feel bad does not ensure that anything will replace them, or that the old bad beliefs won’t come back and reassert themselves. We have to make something that persuades us through and through — mind, heart and body.

As designers say, “We must make the right thing, and make the thing right.”

Here are the key differences between the approach where we tell ourselves different stories (“renarration”) and the kind of philosophy codesign I’ve been experimenting with (“collaborative redescription”).

I’ll argue/explain them more fully someday, but for now I’m just going to make a list, so I can finish this up and go ride my bike.

  1. Collaborative redescription is not therapy. It is not meant to heal psychological wounds or build a sense of empowerment, or anything like that. It is a method for designing philosophies that work well for the purposes of the person using them. “A point of view is worth 80 IQ points”, so there’s a chunk of self-help benefit to doing it, but it’s less a “being my authentic self” benefit than the kind of benefit you get from acquiring and using better tools for a job.
  2. Collaborative redescription goes beyond beliefs. It is about changing conceptions that produce beliefs. But changing conceptions also changes how we perceive, how we respond, what and how we value, how we feel, how we experience life, where we detect patterns or analogies. Our inner dialogue or narration or our personal doctrine is just a tiny part of this, and it must be an integral part of our life experience, not words we recite in an effort to shout over unwanted perceptions, feelings, etc. When conceptions change, they holistically change the entire field of experience. They ripple through our being, reconfiguring — transfiguring, in fact — our existence in radically surprising ways. Descriptions give us a handy way of seeing conceptions, and redescriptions are an effective way to experiment with new conceptions, but the redescriptions are a means, not an end. Ideally, the redescriptions end up being superfluous, and can be discarded and forgotten. The conceptions remain and work wordlessly behind the scenes.
  3. Collaborative redescription takes adoption seriously. We cannot directly control our beliefs, and decide what we will believe or what we will disbelieve. All we can do is try on alternate ways of thinking to see if they produce persuasive beliefs, and investigate unwanted beliefs until they break down. The beliefs are believed or not, and trying to act otherwise is courting intellectual dishonesty, delusion and bullshit.
  4. Collaborative redescription is not centered on the self — not how one thinks of oneself, nor one’s own history, nor one’s relationships, nor one’s history of relationships. It is likely to affect these things, but it focuses on whatever indicates problems with how one conceptualizes, not on emotional needs or distress.
  5. Collaborative redescription is not self-discovery or self-empowerment. The insights aren’t necessarily supposed to come from within, or from any particular source. Origin does not matter. The main thing that matters is what you find persuasive — or unpersuasive. Both parties in the collaboration are doing their best to come up with something new that might work. It is about developing a personal philosophy that actually works. It has to be adopted and used, and it should work well.

I haven’t even gotten into methods, so there may be more practical differences than the ones I’ve listed, but this should suffice to establish that collaborative redescription is not just a flavor of renarration.

But if you’ve actually read Byron Katie or Marianne Williamson or anyone who urges people to tell different stories, and what I’m saying seems off-the-mark, please let me know. I know them mostly second-hand, because theirs is a genre I don’t enjoy reading.

Casting about, meandering toward my book

Right now I am in a painful flitting-about, casting-about intellectual mode.

I was reading Garfinkel’s classic Studies in Ethnomethodology, which I am excited to say I was able to understand clearly this time. However, once I picked up the logic of the method, the technical details of his studies strained and eventually broke my patience. At least now I can add sociology to my “academic disaster averted” file, along with architecture, computer science, HCI and philosophy, as graduate degrees I would have never made it through.

I was originally reading Garfinkel because I started feeling the importance of indexicality in my own project of trying to redescribe philosophy not as a search for truth but as a process of conceptual adaptation of who we are to the conditions we find ourselves in — a process that is perhaps most fruitfully conceived as design and best approached with design practices. A central piece of this project is accounting for how we perceive elements (people, objects, locations, words, symbols) in our environment and spontaneously intuit their significance within their context. Ethnomethodology provides a sociological lens for seeing how this meaning-making/-conveyance happens in particular social settings, and offers a vision for how this happens in general.

My interest is focused on conceptions, which I define as “mind moves” of various kinds, the intellectual equivalent of learning a dance or a tennis swing, which, once acquired, immediately become extensions of our mind, intercepting our sense data and assigning it relevance, all without any explicit intention or verbalization. In fact, I think conceptions direct our verbalizations by exactly the same means that they direct our use of tools.

I see perception, intuition of whole-and-part, interaction and communication as guided by conceptions, any of which might be changed, and which, when changed, can alter the meaning and experience of everything — that is, transfigure it. I want to outline a philosophy of intentional, responsible transfiguration of the world around us, as we inhabit it, understand it, interact with it, and shape it: what I’m calling enworldment. I see it as a sober variety of existentialism, with the adolescent recklessness, self-absorption and melodrama that dog existentialism matured out of it, tempered by a cultivated sensitivity and respect for transcendence.

My main text now is Susanne Langer’s Philosophy in a New Key, which I am rereading for the first time in ten years. I recall the impression that her thinking was pretty close to my own, and affirmed many ideas that I’d acquired elsewhere, perhaps influenced by her (for instance, Geertz, whose quotes from her book inspired me to read her), but that the big novel takeaway for me was her insight that non-discursive, language-defiant forms of knowledge can be embedded or performed in art and religion. This, too, is an attempt to reckon with conceptions, which Langer conceptualizes in terms of symbols.

But this time through, at this time in world history, I’m attuned to the presence of one of her influences, Ernst Cassirer. I know him best as a central figure in a book I bought years ago and never read, A Parting of the Ways. I’ve been poking around trying to get a sense of him, and he seems like a good hero for a person like me in times like these. In his time, the twilight of the Weimar Republic, he was perceived as a hopelessly idealistic and out-of-touch liberal. At that moment, the world was dividing into extreme ideological factions who agreed on nothing except one thing: the irrelevance of liberalism. Liberal democracy was regarded by all advanced intellects as a played-out failure, and all those who remained loyal to it were backwards. The future belonged to either Marxism or Fascism, and the only remaining question was which was destined to be on the right side of history.

I picked up A Parting of the Ways and sampled it to see if I ought to read it, and this passage jumped out at me:

Heidegger’s interpretation of Kant aimed to show that the Critique of Pure Reason does not present a theory of knowledge and, in particular, that it does not present a theory of mathematical natural scientific knowledge. The real contribution of the Critique is rather to work out, for the first time, the problem of the laying of the ground for metaphysics — to articulate, that is, the conditions of the possibility of metaphysics. On this reading, Kant argues (in remarkable agreement with the main argument of Being and Time) that metaphysics can only be grounded in a prior analysis of the nature of finite human reason. As finite, the human intellect (unlike the divine intellect) is necessarily dependent on sensible intuition. Moreover, and here is where the true radicalism of Heidegger’s interpretation emerges, Kant’s introduction of the so-called transcendental schematism of the understanding has the effect of dissolving both sensibility and the intellect (the understanding) in a “common root,” namely, the transcendental imagination, whose ultimate basis (again in remarkable agreement with the argument of Being and Time) is temporality. And this implies, finally, that the traditional basis of Western metaphysics in logos, Geist, or reason is definitively destroyed.

In the ensuing disputation Cassirer begins by announcing his agreement with Heidegger concerning the fundamental importance of the transcendental imagination — interpreted, however, in accordance with Cassirer’s own philosophy of symbolic forms, as pointing to the fact that the (finite) human being is to be defined as the “symbolic animal.” But Cassirer strongly objects to the idea that we as “symbolic animals” are thereby limited to the “arational” sphere of finitude. For Kant himself has shown how the finite human creature can nevertheless break free from finitude into the realm of objectively valid, necessary and eternal truths both in moral experience and in mathematical natural science. On this basis, Cassirer asks Heidegger whether he really wants to renounce such objectivity and to maintain instead that all truth is relative to Dasein (the concrete finite human being). Heidegger, for his part, acknowledges the importance of this question, but he continues to reject the idea of any “breakthrough” into an essentially nonfinite realm. On the contrary, philosophy’s true mission — and our true freedom — consists precisely in renouncing such traditional illusions and holding fast to our essential finitude (our “hard fate”).

This put me on the edge of my chair. But I had questions about some of Kant’s terminology. What exactly is a “sensible intuition”? That led me to a paper by Marcus Willaschek, “The Sensibility of Human Intuition: Kant’s Causal Condition on Accounts of Representation”, and this slab of clarity, which I feel sure will allow me to make better use of Kantian language.

(SU1) Human beings can come to entertain mental representations in one of two ways: either (a) as a result of an object’s causal impact on our minds (an affection of our “Gemüt”) or (b) as a result of some “spontaneous” activity of “uniting” various representations into a new one (cf. A 68, B 93).

(SU2) The capacity to come to represent something as a result of (SU1a) is a kind of “receptivity” that Kant calls “ sensibility” (A 19, B 33).

(SU3) The capacity to come to represent something as a result of (SU1b) is a kind of “spontaneity” called “understanding” (A 19, B 33).

(SU4) There are two basic kinds of “objective” representations (i.e. representations that purport to represent objects other than a subjective state of mind), namely intuitions and concepts (A 19, B 33; cf. A 320, B 377).

(SU5) Intuitions are singular representations (that is, representations of particulars as such); through intuitions our minds do not refer to objects by means of general marks and therefore refer immediately (A 19, B 33).

(SU6) Concepts are general representations (that is, they represent objects only indirectly insofar as they exhibit “marks” potentially shared by other objects) (A 19, B 33).

(SU7) All intuitions in humans are sensible (A 51, B 75, cf. A 68, B 93); that is, they arise from affections of our “sensibility” (A 19, B 33). Thus, human intuitions essentially involve a moment of passivity; through them, objects are “given” to us (A 19, B 33, cf. A 68, B 93).

(SU8) All concepts are intellectual; that is, with respect to concepts, our minds are spontaneously active. Through them, objects are actively thought by us by uniting various representations of them under a common one (A 19, B 33, cf. A 68, B 93).

(SU9) Human cognition requires both intuitions and concepts (A 51, B 75). (Very roughly, concepts provide cognition with a content that can be true or false and stand in rational relations; intuition provides the link to reality or, as Kant puts it in the Critique of Judgment, to “objects” corresponding to our concepts; cf. 5:401.)

Sensible intuitions are the stuff of indexicality, which is, in turn, the stuff of understanding — and all of these are constrained by conceivability — our repertoire of conceptions. I think Kant’s famous table was meant as an exhaustive inventory of possible conceptions, but my taste inclines me to treat the table as a beginning of an expanding set with no determinate limits.

So now I’m curious about Willaschek. I see he has a new book out, which looks interesting and useful: Kant on the Sources of Metaphysics: The Dialectic of Pure Reason. I’ve downloaded a copy to read, and I can already tell I’m going to need this in my library.

Anyway, anyone who has made it this far can see why I am perpetually out of both time and money.

I hope this also sheds a little more light on what I am hoping to get at in my Philosophy of Design of Philosophy book project. What I am after, in reading Langer, Cassirer and others, including maybe Kant, is to offload the burden of arguing a theory of conceptualization and instead to build upon a platform of existing theory: to advocate approaching philosophy as a design medium, to develop an outline for how it is done, and to describe first-person what can be expected from practicing philosophical enworldment this way, because it is truly weirder than hell to go through and demands explanation.

iPad undo needs a redo

Back when the iPhone was released, shaking to undo was a pretty cool interaction.

True — it lacked cues to help users discover how to undo, and no alternative method for undo was available in some key apps (such as Mail), so this was never a perfect design, but it was usually ok.

But where the original iPhone had a 3.5″ screen, weighed 4.8 ounces, and had a generous bezel, my 12.9″ iPad weighs 2.4 pounds in a protective case and has an edge-to-edge screen, which leaves no space on the front for gripping it, and this changes the shake-to-undo experience.

Undoing an action now requires a user to hold the tablet around the outer edges, while carefully avoiding the buttons. This is made harder because the edge-to-edge screen gives no indication of orientation. Depending on which side is up, the buttons could be located at any of the four corners, usually exactly where one of my hands is when I try to shake it. Half the time the device turns off before I can get the undo to happen. Once I get my grip exactly right, I heave the iPad back and forth with a two-arm movement until the sensors register a shake. Sometimes there is a delay of several seconds before the undo activates, so this can take a while. But sometimes it just doesn’t ever work, for unknown reasons.

There are other ways to undo actions, but most of them work only for typing, and they are available only in some apps.

It is sad to watch Apple degrade this way, version after version. Where the Macintosh UI matured and became more systematic over time, iOS has been declining for the last 10 years. Yet Apple seems to be leaning toward making the Mac more iOS-like rather than the reverse.

Things do not look good. It feels like UI design and digital experiences in general have fallen back into the hands of technologists, and consumer expectations of digital design have fallen to lows not seen since the early 90s, with no Apple to serve as a shining counter-example to point the way out of it.

Normal and abnormal ethics

Where a community is homogeneous, everyone in the community shares a common worldview and “speaks the same language”. In such communities ideas tend to be readily understood by all members and proposals are commensurable enough that they can be compared and debated without preliminary work to establish a baseline understanding.

Where a community is heterogeneous, however, multiple worldviews overlap and, at points, clash in incommensurability.

These regions of incommensurability have been studied and described, most famously by Thomas Kuhn. His accounts were objective and behavioral, viewed from an outside perspective: A crisis occurs in a community. The parties in the conflict understand the world according to different paradigms, and not only think differently, but also perceive phenomena differently and talk differently. They talk past each other, and communication breaks down. What Kuhn called “normal science” can no longer be counted on to resolve the crisis. Much is at stake (reputation, resources, interests), so the conflict intensifies, and as things get personal, people misbehave. But eventually, one of the paradigms prevails and there is a revolution. A paradigm shift has occurred.

Kuhn’s insights were themselves revolutionary. In his domain of interest, the history of science, they caused many people to re-understand science in a less linear way. Progress is less straightforward than we thought, and this, for many, weakened the mid-century’s popular faith in science as a guarantee of permanent, steady social progress. And it also loosened the grip of scientific positivism (the conceit that scientific knowledge is ultimate knowledge, and that other ways of knowing only approximate the knowledge of science). This makes Kuhn’s insights highly abusable, and these abuses probably account for most of Kuhn’s popularity. (It certainly accounts for Kuhn’s popularity with me.) Vice always outsells virtue in the marketplace of ideas.

One of the finest abusers of Kuhn’s theories was another hero of mine, Richard Rorty. Rorty expanded Kuhn’s framework to interactions outside the scientific community, to the broader academic community, especially those in the community who engage in philosophical discourse. Most notably, Rorty observed that philosophers, too, had crises, and in crises engaged in “abnormal discourse”, a mode of discussion quite different from the “normal discourse” to which most of us are accustomed.

A place I’d like to explore further, which I think needs to be more fully developed, is the experience of crisis — especially everyday crisis — where ordinary people find themselves encountering incommensurable worldviews and must learn to navigate them. What is it like to participate in such a crisis? What is it to experience the crisis firsthand from within it, which means to be a partisan on one side of the struggle — one side which only seems to comprehend both sides?

Most importantly, what are the ethics of situations where the only possible discourse is abnormal discourse? We are quite used to normal ethics, where right and wrong are mostly settled, and the core issue is the resolve to do the right thing. But do any of us really know how to navigate abnormal ethics? Don’t most of us “double down” on our principles in these situations? Don’t most of us feel that our moral resolve is being put to the test, and that now is the time to stand on principle? We must learn to ask: Is doubling down the right response in an abnormal ethical situation?

And what are the challenges to skillful navigation of abnormal ethical situations? As a design researcher and strategist whose entire working life is spent in abnormal discourse, I have accumulated an above-average stock of primary experience of this phenomenon. These experiences range from suspiciously intense disagreement, to apprehension at other people’s conceptions, to full disorientation and perplexity, where problematic situations cannot even be framed as problems, questions cannot be asked, and participants in the situation are gripped in existential anxiety.

Abnormal ethical situations induce existential crisis.

I believe very few of us are equipped to recognize such situations, to conceptualize them, or to speak about them, much less navigate them skillfully. So we suffer not only the perplexity itself, but perplexity about what is happening to us.

In my own experience, the simple ability to diagnose the terrible feelings as perplexity reduces the pain considerably, and makes the suffering bearable. Being able to agree with others in the group that the group is in perplexity reduces the suffering enough that the remaining pain becomes a bearable — even interesting — discomfort, almost a stimulant.

Finally, knowing what can be gained by traversing perplexity — both radical innovation and deeper personal relationships — provides a genuine this-worldly reward for good-faith struggle through everyday crises.

This is all stuff I’ve been obsessed with and written about extensively over the last couple of decades.

Some new ideas I am having:

  1. We should redescribe today’s breakdown in public discourse in terms of “abnormal ethics”.
  2. We should understand that one consequence of this strange social change where “we all live on campus now” is that abnormal discourse has escaped the lab of academia and is now rampant everywhere — which means paradoxically that abnormal discourse is now normal.
  3. We should recognize that skill in navigating abnormal ethics might be a new moral frontier, the next milestone of human progress. By this view, to be ethical in our new social conditions requires “leveling up” and becoming good at both normal and abnormal ethics, and knowing how to mode-switch appropriately. Similarly, we might need to make an ethical meta-leap with the electrum rule (my combo golden and silver rule) and ask ourselves not only “is this fair?”, but also “is my standard of fairness imposed fairly?” “Is this just?” must also be meta-interrogated with the question “Is my justice just, and is it imposed justly?” And we must look for prejudices in where we see prejudice (or where we see good or permissible prejudices against some identities and where we see unacceptable, oppressive prejudices) and in the logic by which we justify prejudices.

The biggest prejudice of all we will need to overcome is our conception of empathy — the notion that understanding people is primarily a matter of feeling, caring, valuing. These things are important, but they are part of something bigger and more influential: the basic set of conceptions we use to make sense of the world even before we emotionally respond to it. We need to activate not only our hearts, but also our minds, and probably our hands and feet, and all our senses, if we want to form better understandings of one another and the world we share.

The good news is that we may have been accidentally preparing the last couple of generations for this by teaching them design thinking. Design practice truly does equip people for abnormal ethics — if they have the wisdom to use their design thinking for thinking about politics, and to set aside the political indoctrinations they also received.

Seems like there’s something here to work with.

Conceptions and concepts

I’m noticing that I am using the terms “concept” and “conception” very differently from how Langer uses them.

In my way of thinking and talking about them, conception is a conceiving move — and what is conceived through the conception is a concept. When a student is presented with a new kind of math problem and does not “get it”, in my terminology what the student lacks is a particular conception that makes it possible to get the concept.

From my perspective, Langer looks at it inside-out. For her, concepts are the essential structure of meaning (which I agree with), but she then makes conceptions the messier particulars that form around the structure and flesh it out as a particular object. She first makes the distinction in a footnote: “I have called the terms of our thinking conceptions, not concepts. Concepts are abstract forms embodied in conceptions; their bare presentation may be approximated by so-called ‘abstract thought,’ but in ordinary mental life they no more figure as naked factors than skeletons are seen walking the street. Concepts, like decent living skeletons, are always embodied — sometimes rather too much.”

A few pages later, while discussing how abstracted or stylized images can be recognized as representing a real object, she elaborates.

That which all adequate conceptions of an object must have in common, is the concept of the object. The same concept is embodied in a multitude of conceptions. It is a form that appears in all versions of thought or imagery that can connote the object in question, a form clothed in different integuments of sensation for every different mind. Probably no two people see anything just alike. Their sense organs differ, their attention and imagery and feelings differ so that they cannot be supposed to have identical impressions. But if their respective conceptions of a thing (or event, or person, etc.) embody the same concept, they will understand each other.

A concept is all that a symbol really conveys. But just as quickly as the concept is symbolized to us, our own imagination dresses it up in a private, personal conception, which we can distinguish from the communicable public concept only by a process of abstraction.

I’m still pretty early in the book, and I haven’t read it in 10 years, and last time I read it I was less sensitized to this distinction, so I am not sure whether I will adopt her terms or stick to mine. But I do have two questions forming in my mind:

  1. Will she discuss acquisition of capacities for using new concepts and the experiences we have struggling to understand concepts we are not yet equipped to conceive? Or, failing to recognize the failure and simply misunderstanding? Or the experience of suddenly acquiring a new concept and having the uncanny experience of instantaneously re-understanding the world as a whole (which I connect with conversion and transfiguration)? These are crucial questions for me, and the word “conception” does a lot of work in my way of accounting for these experiences.

  2. Is the formal concept hidden within the messy flesh of objects a vestige of Platonic form? It seems that making concepts the outcome of a conceiving operation rather than a recognition of pre-existent underlying structure might be a cleaner break with Platonism (as well as correlationism), and make Langer’s thinking play nicer with Pragmatism.

“Nothing Gold Can Stay”

I heard about a Frost poem last night, “Nothing Gold Can Stay”:

Nature’s first green is gold,
Her hardest hue to hold.
Her early leaf’s a flower;
But only so an hour.
Then leaf subsides to leaf.
So Eden sank to grief,
So dawn goes down to day.
Nothing gold can stay.

This poem chimes with something I’d written earlier in the day, another comment on the emergence of a conception, too new for language.

“Everything is at its best just before it figures out what it is.”

A good use for leading questions

In the field of design research, leading questions are generally considered undesirable and are carefully avoided, especially in foundational research (where the design team is trying to develop a basic understanding of some domain of actors, their goals, behaviors, attitudes, contexts, etc.), and in generative research (where the team looks for opportunities to introduce improvements to the actors’ situations).

Researchers try instead to ask open-ended questions, which tend to have the form of a request — for a story, a description, an opinion, a lesson, etc.

Leading questions, and even more neutrally-stated closed-ended questions, tend to direct the research participant toward responses anticipated by the researcher.

The greatest value of foundational and generative interviews, however, is often in the unanticipated responses — ones that never would have occurred to the researcher or to anyone else on the design team, had the research not been conducted.

These responses are surprising precisely because they are answers to surprising questions — questions that nobody knew to ask, but which the open-ended question coaxed into the open. These questions point to different conceptions our participants use to make sense of their worlds, conceptions that give things distinctly different patterns of significance and relevance, and which make the difference between designs that hit home and designs that seem off-the-mark.

When designers think with the conceptions used by their users — or, even better, with revelatory improvements to their conceptions — the design concepts make immediate and profound sense to the users. Ideally, users experience it as: “They get me!”

But if closed-ended questions are asked (questions that imply a limited set of answers), or worse, if leading questions are asked (questions that point to a narrow range of desired answers), the participants will use their social competence to answer the question as asked, according to its implied conceptions. The researchers get their answers — but those answers conceal something more valuable: the key to how the participant thinks, their own conceptions, which is the entrance to their worldview.

I am currently rereading Susanne Langer for the first time in 11 years, and I am realizing just how much I gained from reading her:

The “technique,” or treatment, of a problem begins with its first expression as a question. The way a question is asked limits and disposes the ways in which any answer to it — right or wrong — may be given. … Such implicit “ways” are not avowed by the average man, but simply followed. He is not conscious of assuming any basic principles. They are what a German would call his “Weltanschauung” [worldview], his attitude of mind, rather than specific articles of faith. They constitute his outlook; they are deeper than facts he may note or propositions he may moot… But, though they are not stated, they find expression in the forms of his questions. A question is really an ambiguous proposition; the answer is its determination. There can be only a certain number of alternatives that will complete its sense. In this way the intellectual treatment of any datum, any experience, any subject, is determined by the nature of our questions, and only carried out in the answers.

But what is really inspiring this line of thought is something subtler I am picking up in Langer’s words, a fascinating use of “leading questions”. The first example: “The concepts that preoccupied them had no application in those realms, and therefore did not give rise to new, interesting, leading questions about social or moral affairs.” The second example really drives it home:

The rise of technology is the best possible proof that the basic concepts of physical science, which have ruled our thinking for nearly two centuries, are essentially sound. They have begotten knowledge, practice, and systematic understanding; no wonder they have given us a very confident and definite Weltanschauung. They have delivered all physical nature into our hands. But strangely enough, the so-called “mental sciences” have gained very little from the great adventure. One attempt after another has failed to apply the concept of causality to logic and aesthetics, or even sociology and psychology. Causes and effects could be found, of course, and could be correlated, tabulated, and studied; but even in psychology, where the study of stimulus and reaction has been carried to elaborate lengths, no true science has resulted. No prospects of really great achievement have opened before us in the laboratory. If we follow the methods of natural science our psychology tends to run into physiology, histology, and genetics; we move further and further away from those problems which we ought to be approaching. That signifies that the generative idea which gave rise to physics and chemistry and all their progeny — technology, medicine, biology — does not contain any vivifying concept for the humanistic sciences. The physicist’s scheme, so faithfully emulated by generations of psychologists, epistemologists, and aestheticians, is probably blocking their progress, defeating possible insights by its prejudicial force. The scheme is not false — it is perfectly reasonable — but it is bootless for the study of mental phenomena. It does not engender leading questions and excite a constructive imagination, as it does in physical researches. Instead of a method, it inspires a militant methodology.

Leading questions are ones that contain a conception — a generative idea! The reason we ask open-ended questions in design research, and avoid our own closed-ended or leading questions, is to make it possible to uncover better leading questions — ones that are either familiarly relevant or inspiringly novel to our users!

The goal of design research is to upgrade our leading questions. These new leading questions are then posed as opportunity formulas (such as “how might we?” questions) or as design briefs that conceptualize a problem in a way that generates solutions compatible with how the research participants conceive their worlds. As John Dewey put it, “A problem well put is a problem half solved.”

I enjoy calling what I do “precision inspiration”. Precision inspiration replaces stale, lifeless, irrelevant questions with novel, living, relevant questions, which activate new and better conceptions capable of imagining and producing novel, relevant, exciting solutions.

Strong fallibilism

Fallibilism, as originally formulated by Peirce, claims that “people cannot attain absolute certainty concerning questions of fact.” Wikipedia sums it up with three claims: “1) No beliefs can be conclusively justified. 2) Knowledge does not require certainty. 3) Almost no basic (that is, non-inferred) beliefs are certain or conclusively justified.” In sum, “fallibilism is an admission that, because empirical knowledge can be revised by further observation, any of the things we take as empirical knowledge might turn out to be false. However, fallibilists typically accept that many beliefs can be considered certain beyond reasonable doubt and therefore acted upon, allowing us to live functional and meaningful lives.”

Fallibilism supports practical action without certainty of truth.

A strong fallibilism, however, would support progress with certainty of falsehood.

It would see truth as essentially partial (shot through with tradeoffs), as tentative (useful only to some finite degree) and fragile (breakable through scrutiny).

The strongest fallibilism, however, sees falsehood in truth as a guarantee of freedom. Any oppressive truth can, at least in principle, be interrogated, weakened, overthrown and replaced, though not with a truth of one’s choice, but rather with whatever successor truth ascends. That truth, however, is also vulnerable.

Truths are secure only when they govern with our assent.

With this in mind, however, we should not frivolously overthrow truths solely because they can be overthrown. Of course they can be overthrown. But what we love we protect, maintain and honor.

Exnihilist manifesto

What is inconceivable to you is nothing to you.

Conception of the formerly inconceivable makes existence immerge (sic!) from nothingness.

*

A witness of ex nihilo creation no longer trusts nothingness: infinity’s backglow betrays it.

Any apparent nothingness might be a blind-spot concealing a novel everythingness.

Witnessing ex nihilo creation transforms nihilists into exnihilists.

Art

Art announces that a new way to exist as a person is possible. It is a beacon from the center of a novel enworldment. It is a peculiar quality of art that it promises something unsaid and unsayable that cannot be doubted.

What is fruitfulness?

Nick was asked by an old colleague to provide a simple, universally applicable definition of fruitfulness.

Earlier, I would have pointed to Thomas Kuhn’s paper on theory choice, where fruitfulness, along with accuracy, consistency, scope and simplicity, was a characteristic that might make a theory more attractive to a scientist, depending (scandalously!) on that scientist’s taste in theories. About fruitfulness Kuhn said “a theory should be fruitful of new research findings: it should, that is, disclose new phenomena or previously unnoted relationships among those already known.” In a footnote he added “The last criterion, fruitfulness, deserves more emphasis than it has yet received. A scientist choosing between two theories ordinarily knows that his decision will have a bearing on his subsequent research career. Of course he is especially attracted by a theory that promises the concrete successes for which scientists are ordinarily rewarded.” He could have added the point that a fruitful theory is likely to win attention from other scientists seeking fertile ground for their own work, and consequently to generate citations, the currency of academia.

But Nick uses the term “fruitful” in a distinctly different and more interesting sense. His usage goes beyond simply showing new phenomena or connections among known phenomena, or even pointing to new areas to research. What he means is close to what I’ve talked about in terms of conceiving what was, prior to the conception, inconceivable — a conception which frees insoluble problems to solve themselves.

Nick, however, is less interested in the production of novel solutions than he is in the discovery of novel problems. Of course, each novel problem has the potential to yield novel solutions. But, also, inside novel solutions are the seeds of unforeseen novel problems. Fruitful production produces products that contain the seeds of future production. (No wonder we call fruit “produce”.) It is like Hegel turned inside-out: instead of new ideas containing the seeds of their own destruction, converging on one Absolute, they contain the seeds of invention, diverging toward an infinite Plurality. Where Hegel sees decaying fruit, Nick sees another generation of saplings born to effloresce.

Since Nick gave the word “fruitful” this new tilt, Susan and I have both adopted it, and it has become a household term in our odd household.

Of course, I had to go on and name a philosophy whose aim is fruitfulness “fructivism” — a word with unavoidable phonetic associations with other reproductive language, which polite souls see as a drawback, but for me seals the deal.

So, given this conception of fruitfulness, how can we define fruitfulness simply, universally (meaning not only for philosophers or design innovators) and accessibly?

Yesterday, Nick and I collaborated on this problem. We worked iteratively, starting with the essential elements — conceptions, reconceptions, unforeseeability/surprise, novelty, an inexhaustible, perpetual process of production, creativity, generativity — and we spiraled in on a two-word definition.

Spiral 1: The generation of new practical possibilities through reconception of a problem space.

Spiral 2: Creativity born of conception of what was inconceivable.

Spiral 3: Reconceptive creativity. Creative reconception.

Spiral 4: Generative reconception.

*

The more I think about fruitfulness and fructivism the more I realize that its significance exceeds its definition. At its unsayable core is a taste — a taste for inexhaustible possibility, for non-determination, for radical unpredictability, for freedom.

I feel certain the last two generations of Americans have been deprived of this feeling and are starving for it without even knowing it. A taste of it will go up like a spark in a granary.

Susanne Langer on questions and conceptions

Susanne Langer calls conceptions “generative ideas”:

The limits of thought are not so much set from outside, by the fullness or poverty of experiences that meet the mind, as from within, by the power of conception, the wealth of formulative notions with which the mind meets experiences. Most new discoveries are suddenly-seen things that were always there. A new idea is a light that illuminates presences which simply had no form for us before the light fell on them. We turn the light here, there, and everywhere, and the limits of thought recede before it. A new science, a new art, or a young and vigorous system of philosophy, is generated by such a basic innovation. Such ideas as identity of matter and change of form, or as value, validity, virtue, or as outer world and inner consciousness, are not theories; they are the terms in which theories are conceived; they give rise to specific questions, and are articulated only in the form of these questions. Therefore one may call them generative ideas in the history of thought.

We recognize conceptions (or concepts or generative ideas — Langer appears to use these terms interchangeably) less by the answers they give, than by the questions they know how to ask, and these are deeply and emotionally bound up with our sense of reality:

The “technique,” or treatment, of a problem begins with its first expression as a question. The way a question is asked limits and disposes the ways in which any answer to it — right or wrong — may be given. If we are asked: “Who made the world?” we may answer: “God made it,” “Chance made it,” “Love and hate made it,” or what you will. We may be right or we may be wrong. But if we reply: “Nobody made it,” we will be accused of trying to be cryptic, smart, or “unsympathetic.” For in this last instance, we have only seemingly given an answer; in reality we have rejected the question. The questioner feels called upon to repeat his problem. “Then how did the world become as it is?” If now we answer: “It has not ‘become’ at all,” he will be really disturbed. This “answer” clearly repudiates the very framework of his thinking, the orientation of his mind, the basic assumptions he has always entertained as common-sense notions about things in general. Everything has become what it is; everything has a cause; every change must be to some end; the world is a thing, and must have been made by some agency, out of some original stuff, for some reason. These are natural ways of thinking. Such implicit “ways” are not avowed by the average man, but simply followed. He is not conscious of assuming any basic principles. They are what a German would call his “Weltanschauung,” his attitude of mind, rather than specific articles of faith. They constitute his outlook; they are deeper than facts he may note or propositions he may moot.

But, though they are not stated, they find expression in the forms of his questions. A question is really an ambiguous proposition; the answer is its determination. There can be only a certain number of alternatives that will complete its sense. In this way the intellectual treatment of any datum, any experience, any subject, is determined by the nature of our questions, and only carried out in the answers.

In philosophy this disposition of problems is the most important thing that a school, a movement, or an age contributes. This is the “genius” of a great philosophy; in its light, systems arise and rule and die. Therefore a philosophy is characterized more by the formulation of its problems than by its solution of them. Its answers establish an edifice of facts; but its questions make the frame in which its picture of facts is plotted. They make more than the frame; they give the angle of perspective, the palette, the style in which the picture is drawn — everything except the subject. In our questions lie our principles of analysis, and our answers may express whatever those principles are able to yield. 

There is a passage in Whitehead’s Science and the Modern World, setting forth this predetermination of thought, which is at once its scaffolding and its limit. “When you are criticizing the philosophy of an epoch,” Professor Whitehead says, “do not chiefly direct your attention to those intellectual positions which its exponents feel it necessary explicitly to defend. There will be some fundamental assumptions which adherents of all the variant systems within the epoch unconsciously presuppose. Such assumptions appear so obvious that people do not know what they are assuming because no other way of putting things has ever occurred to them. With these assumptions a certain limited number of types of philosophic systems are possible, and this group of systems constitutes the philosophy of the epoch.”

Some years ago, Professor C. D. Burns published an excellent little article called “The Sense of the Horizon,” in which he made a somewhat wider application of the same principle; for here he pointed out that every civilization has its limits of knowledge — of perceptions, reactions, feelings, and ideas. To quote his own words, “The experience of any moment has its horizon. Today’s experience, which is not tomorrow’s, has in it some hints and implications which are tomorrow on the horizon of today. Each man’s experience may be added to by the experience of other men, who are living in his day or have lived before; and so a common world of experience, larger than that of his own observation, can be lived in by each man. But however wide it may be, that common world also has its horizon; and on that horizon new experience is always appearing….” . . .

The formulation of experience which is contained within the intellectual horizon of an age and a society is determined, I believe, not so much by events and desires, as by the basic concepts at people’s disposal for analyzing and describing their adventures to their own understanding. Of course, such concepts arise as they are needed, to deal with political or domestic experience; but the same experiences could be seen in many different lights, so the light in which they do appear depends on the genius of a people as well as on the demands of the external occasion. 

This material is highly relevant to my own project of designing conception-systems.

  1. Conceptions are generative ideas and should not be confused with the content they generate.
  2. Conceptions generate worldviews, and are fundamental to how we see the world and ourselves situated within the world.
  3. Refusal to participate in the conceptions of a worldview (“rejecting its questions”) is offensive to its members. Putting it in ethnomethodological language, it is a form of ethnomethodic breach, and creates the same kind of confusion, discomfort and alienation typical of such breaches.

Later in the book, Langer also discusses the phenomenon of perplexity — of lacking all conception to filter and organize chaos — and of the apprehension perplexity induces in a person.

[Man] can adapt himself somehow to anything his imagination can cope with; but he cannot deal with Chaos. Because his characteristic function and highest asset is conception, his greatest fright is to meet what he cannot construe — the “uncanny,” as it is popularly called. It need not be a new object; we do meet new things, and “understand” them promptly, if tentatively, by the nearest analogy, when our minds are functioning freely; but under mental stress even perfectly familiar things may become suddenly disorganized and give us the horrors. Therefore our most important assets are always the symbols of our general orientation in nature, on the earth, in society, and in what we are doing: the symbols of our Weltanschauung [world view] and Lebensanschauung [life view]. Consequently, in a primitive society, a daily ritual is incorporated in common activities, in eating, washing, fire-making, etc., as well as in pure ceremonial; because the need of reasserting the tribal morale and recognizing its cosmic conditions is constantly felt. In Christian Europe the Church brought men daily (in some orders even hourly) to their knees, to enact if not to contemplate their assent to the ultimate concepts.

This brings me to a theory I’ve been developing about conspiracy theories. By this, I don’t mean the theories about conspiracies normal people may form. I mean those theories that function as central symbols to a certain conspiracy-oriented worldview. Conspiracy theorists are notorious for their compulsive need to discuss their theories, cornering people with no knowledge or interest in the material, forcing them to endure lectures or engage in one-sided debates. Why the drive to foist unwanted conversations on others? I believe these “conversations” function as rituals, and barely as personal interactions.

Interactive turn and its metaphysics

Have I mentioned my belief that our worlds are constructed primarily of interactions? It was Bruno Latour who made this real to me about ten years ago, and this was my last really big philosophical breakthrough. I suppose I could call it my “interactive turn”.

Latour’s descriptions of the conduct of science, and of everything, in terms of networks of interacting human and nonhuman actors changed how I understood both subjectivity and objectivity, and finally broke down my ability to keep those two categories discrete.

We are constantly interacting with our environments in myriad ways — physically, socially, linguistically, reflectively — reactively, deliberately, creatively, imaginatively, prospectively, habitually, absently, selectively. What we make of what is going on, that is, how we conceive it, has everything to do with how we respond to it, and how it responds back challenges us to make sense of it.

We respond to “the same” reality as related to us by trusted sources, as passed off to us as rumor from sketchy sources, as experienced firsthand as a participant in a real-life situation, as conveyed to us by a member of our own community following the methods of the community, as taught to us during decades of education, as reported to us by journalists of varying integrity and ideological agendas, and as recalled by our own memories formed at different stages of our lives — and our response assumes some common phenomenological intentional object, some metaphysical reality, some commonsensical state of affairs on the other side of our interactions. But this is constructed out of interactions with innumerable mediators — people, things, thoughts, words, intuitions — who are included within or ignored out of the situation as we conceive it.

We lose track of the specific interactions that have amounted to our most habitual conceptions — our syneses (our takings-together taken-together) — which shape our categories of things, our expected cause-and-effect sequences in time, our social behaviors and how they will be embraced, tolerated or punished.

Science is one variety of these interactions, but one we tend to privilege and to habitually project behind the world as our most common metaphysics. But once I learned to see scientific activities, scientific reporting, scientific explaining and scientific believing as social behaviors useful for helping us interact with nonhuman actors with greater effectiveness, I was somehow relieved of the need to rely on the metaphysical image science projects. I can believe in the effectiveness of the interactions and remain loyal to the social order established by science to do its work, without feeling obligated to use a scientifically explicable reality as the binding agent that keeps all my other beliefs hanging together. I see many good reasons not to!

Guenon on Chinese coins

From Rene Guenon’s The Great Triad:

It is also said that Heaven, which envelops and embraces all things, presents a ‘ventral’ — that is to say inward — face to the Cosmos, while the Earth that supports all things presents a ‘dorsal’ or outward face. This can be easily grasped by simply looking at the diagram below, in which Heaven and Earth are respectively represented by a concentric circle and a square.

It will be observed that this diagram reproduces the shape of Chinese coins, which also happens originally to have been the shape of certain ritual tablets. The part that the characters are inscribed on — that is, the solid area between the circular outline and the square empty space in the centre — clearly corresponds to the Cosmos comprising the ‘ten thousand beings’. The fact that this area is bounded by two voids is a symbolic expression of the fact that what is not between Heaven and Earth is for that very reason not a part of manifestation.

Everso

I was sent an image of an everting sphere.

Notice how the sphere becomes a shell-like torus midway through the eversion.

Note that we human beings can view reality from an inner first-person perspective and an outer third-person perspective, and experience at once a metaphysical behind and a metaphysical beyond.

Recall that the Chinese coin was understood to be the negative space of Tao: the inner square, yin; the outer infinity, yang — but it is obvious these two are one and the same from everywhere beyond the coin.

In the creation myth this everting sphere just spawned, human being (human existence) exists wherever the infinite sphere whose center is everywhere and whose periphery is nowhere forms a torus at mid-eversion, creating a unique everything, a soul, a person.


I wonder if I could make a book on images of eversions and the torus. I would make a chapbook, a second signature, to Geometric Meditations, and it would be called Everso.

Here’s the material I have so far, starting, of course, with a dedication to the gorging torus, which I am now wondering is more complicated than I thought only days ago.



Ouroboros,
Gorging torus,
Rolled up like an egg
Before us.


Definition of evert:

I have needed the word “evert” many times, but had to resort to flipping, reversing, inverting, turning… inside-out.

Evert – verb [with obj.]

Turn (a structure or organ) outward or inside out.

DERIVATIVES

eversible – adjective.
eversion – noun

ORIGIN mid 16th cent. (in the sense ‘upset, overthrow’): from Latin evertere, from e- (variant of ex-) ‘out’ + vertere ‘to turn.’

*

Now I can say things like:

  • Everything in the world is the world everted.
  • A comedy is an everted tragedy. A tragedy is an everted comedy.
  • A pearl is an everted oyster shell. An oyster coats the ocean with mother-of-pearl. Outside the shell is ocean, inside the pearl is ocean. Between inner-shell and outer-pearl is slimy oyster-flesh, which ceaselessly coats everything it isn’t with mother-of-pearl. It is as if the flesh cannot stand anything that does not have a smooth, continuous and lustrous surface. We could call the flesh’s Other — that which requires coating — “father-of-pearl”.
  • Imagine Pandora’s box as a pearl everting to an all-ensconcing shell as Pandora opened it, and Eden as an all-ensconcing shell everted to a pearl upon Adam’s eviction.
  • An object is an everted subject.



In the end:

In the end,
the trees will grow like snakes,
splitting and sloughing bark,
bending in coils of green heartwood;
and the snakes will grow like trees,
stuffing skin under skin,
and in their turgid leather casings,
they will lie about on the ground
like broken branches.


Shells and Pearls (a collection of previous pearl posts):

An oyster’s flesh is delicate. It cannot tolerate anything harsh, abrasive or threatening. So it coats everything around it with a lustrously smooth surface of nacre.

The harshest, most abrasive and threatening thing for an oyster is the ocean. The oyster coats the entire ocean with a mother-of-pearl inner shell. And anything from the outside that gets inside the shell is also coated, until it becomes a pearl.

Outside the shell is ocean; inside the pearl is ocean.

A pearl, then, can be seen as an everted oyster shell; a shell’s inner lining, an everted pearl. That which requires coating can be called father-of-pearl.

*

Minds secrete knowing like mother-of-pearl, coating reality with lustrous likeness.

*

Nacre

You are absurd. You defy comprehension.

That is, you defy my way of understanding. I cannot continue to understand my world as I understand it and also understand you.

That is, you do not fit inside my soul.

I am faced with the most fundamental moral choice: Do I break open my soul? or do I bury you in mother-of-pearl?

*

Father-of-Pearl

(A meditation on Levinas’s use of the term “exception” in Otherwise Than Being.)

We make category mistakes when attempting to understand metaphysics, conceiving what must be exceived.

Positive metaphysics are objectionable, in the most etymologically literal way, when they try to conceptualize what can only be exceptualized, to objectify that to which we are subject, to comprehend what comprehends — in order to achieve certainty about what is radically surprising.

In my own religious life, this category mistake is made tacitly at the practical and moral level, and then, consequently, explicitly and consciously. Just as the retinas of our eyes see things upside-down, our mind’s eye sees things inside-out. We naturally confuse insidedness and outsidedness. By this view, human nature is less perverse than it is everse.

*

Imagine, with as much topological precision as you can muster, expulsion from Eden as belonging-at-home flipped inside-out.

That galut in the pit of your gut: everted Eden?

*

A garden is an everted fruit, and a fruit, an everted garden.

The nacre inner lining of a shell is an everted pearl, and a pearl, an everted nacre lining.

The exception is the everted conception, and the conception, the everted exception.

*

Nacre

Pearls are inside-out oyster shells. Or are oyster shells inside-out pearls?

The oyster coats its world with layers of iridescent calcium. With the same substance it protects itself from the dangers concaving in from the outside and the irritants convexing it from the inside.

*

Iridescent Irritation

Some random notes on the inner topology of oysters…

*

A pearl is an inside-out oyster shell.

*

An oyster coats the ocean with mother-of-pearl.

Outside the shell is ocean, inside the pearl is ocean.

Between inner-shell and outer-pearl is slimy oyster-flesh, ceaselessly coating everything it isn’t with mother-of-pearl.

It is as if the flesh cannot stand anything that does not have a smooth, continuous and lustrous surface. We could call the flesh’s Other — that which requires coating — “father-of-pearl”.

*

Every pearl is an iridescent tomb with an irritant sealed inside. We love the luster of the outer coat, but inside is what was once known as filth.

*

We could also think of the oyster shell as the fortress walls and the pearl as a prison cell.

*

We make pearls of what is Other, then love what we’ve made of the Other, which is ourselves.

*

We love our misunderstandings. We never cut into what we love with critique. Inside is just a grain or a fragment, of interest only to other grains and fragments.

*

Sometimes an alien bit of beyond gets inside one’s horizon, but it can always be explained.

*

Imagine Pandora’s box as a pearl turned outside-in upon its being opened, and Eden as an oyster’s interior turned inside-out into a pearl with Adam’s eviction.

The politics of personal lives

Another excerpt from Sebastian Haffner’s Defying Hitler explains why a first-person account of what it was like to be an ordinary person during a moment in history is valuable, and perhaps more valuable than the usual third-person epic historical survey:

What is history, and where does it take place?

If you read ordinary history books — which, it is often overlooked, contain only the scheme of events, not the events themselves — you get the impression that no more than a few dozen people are involved, who happen to be “at the helm of the ship of state” and whose deeds and decisions form what is called history. According to this view, the history of the present decade is a kind of chess game among Hitler, Mussolini, Chiang Kai-shek, Roosevelt, Chamberlain, Daladier, and a number of other men whose names are on everybody’s lips. We anonymous others seem at best to be the objects of history, pawns in the chess game, who may be pushed forward or left standing, sacrificed or captured, but whose lives, for what they are worth, take place in a totally different world, unrelated to what is happening on the chessboard, of which they are quite unaware.

It may seem a paradox, but it is nonetheless the simple truth, to say that on the contrary, the decisive historical events take place among us, the anonymous masses. The most powerful dictators, ministers, and generals are powerless against the simultaneous mass decisions taken individually and almost unconsciously by the population at large. It is characteristic of these decisions that they do not manifest themselves as mass movements or demonstrations. Mass assemblies are quite incapable of independent action. Decisions that influence the course of history arise out of the individual experiences of thousands or millions of individuals.

This is not an airy, abstract historical construction, but indisputably real and tangible. For instance, what was it that caused Germany to lose the Great War in 1918 and the Allies to win it? An advance in the leadership skills of Foch and Haig, or a decline in Ludendorff’s? Not at all. It was the fact that the “German soldier,” that is, the majority of an anonymous mass of 10 million individuals, was no longer willing, as he had been until then, to risk his life in any attack, or to hold his position to the last man. Where did this change of attitude take place? Certainly not in large, mutinous assemblies of German soldiers, but unnoticed and unchecked in each individual soldier’s breast. Most of them would probably not have been able to describe this complicated and historically important internal process and would merely have used the single expletive “Shit!” If you had interviewed the more articulate soldiers, you would have found a whole skein of random, private (and probably uninteresting and unimportant) reasons, feelings, and experiences, a combination of letters from home, relations with the sergeant, opinions about the quality of the food, and thoughts on the prospects and meaning of the war and (since every German is something of a philosopher) about the meaning and value of life. It is not my purpose here to analyze the inner process that brought the Great War to an end, but it would be interesting for those who wish to reconstruct this event, or others like it, to do so.

Mine is a different topic, although the mental processes it involves are similar. They may perhaps be of greater consequence, interest, and importance: namely, the psychological developments, reactions, and changes that took place simultaneously in the mass of the German population, which made Hitler’s Third Reich possible and today form its unseen basis.

There is an unsolved riddle in the history of the creation of the Third Reich. I think it is much more interesting than the question of who set fire to the Reichstag. It is the question “What became of the Germans?” Even on March 5, 1933, a majority of them voted against Hitler. What happened to that majority? Did they die? Did they disappear from the face of the earth? Did they become Nazis at this late stage? How was it possible that there was not the slightest visible reaction from them?

Most of my readers will have met one or more Germans in the past, and most of them will have looked on their German acquaintances as normal, friendly, civilized people like anyone else — apart from the usual national idiosyncrasies. When they hear the speeches coming from Germany today (and become aware of the foulness of the deeds emanating from there), most of these people will think of their acquaintances and be aghast. They will ask, “What’s wrong with them? Don’t they see what’s happening to them — and what is happening in their name? Do they approve of it? What kind of people are they? What are we to think of them?”

Indeed, behind these questions there are some very peculiar, very revealing mental processes and experiences, whose historical significance cannot yet be fully gauged. These are what I want to write about. You cannot come to grips with them if you do not track them down to the place where they happen: the private lives, emotions, and thoughts of individual Germans. They happen there all the more since, having cleared the sphere of politics of all opposition, the conquering, ravenous state has moved into formerly private spaces in order to clear these, too, of any resistance or recalcitrance and to subjugate the individual. There, in private, the fight is taking place in Germany. You will search for it in vain in the political landscape, even with the most powerful telescope. Today the political struggle is expressed by the choice of what a person eats and drinks, whom he loves, what he does in his spare time, whose company he seeks, whether he smiles or frowns, what he reads, what pictures he hangs on his walls. It is here that the battles of the next world war are being decided in advance. That may sound grotesque, but it is the truth.

That is why I think that by telling my seemingly private, insignificant story I am writing real history, perhaps even the history of the future. It actually makes me happy that in my own person I do not have a particularly important, outstanding subject to describe. If I were more important I would be less typical. That is also why I hope my intimate chronicle will find favor in the eyes of the serious reader, who has no time to waste, and reads a book for the information it contains and its usefulness.

Some descriptions of what drove Haffner to leave Germany:

The world I had lived in dissolved and disappeared. Every day another piece vanished quietly, without ado. Every day one looked around and something else had gone and left no trace. I have never since had such a strange experience. It was as if the ground on which one stood was continually trickling away from under one’s feet, or rather as if the air one breathed was steadily, inexorably being sucked away.

What was happening openly and clearly in public was almost the least of it. Yes, political parties disappeared or were dissolved; first those of the left, then also those of the right; I had not been a member of any of them. The men who had been the focus of attention, whose books one had read, whose speeches we had discussed, disappeared into exile or the concentration camps; occasionally one heard that one or another had “committed suicide while being arrested” or been “shot while attempting to escape.” At some point in the summer the newspapers carried a list of thirty or forty names of famous scientists or writers; they had been proscribed, declared to be traitors to the people and deprived of their citizenship.

More unnerving was the disappearance of a number of quite harmless people, who had in one way or another been part of daily life. The radio announcer whose voice one had heard every day, who had almost become an old acquaintance, had been sent to a concentration camp, and woe betide you if you mentioned his name. The familiar actors and actresses who had been a feature of our lives disappeared from one day to the next. Charming Miss Carola Neher was suddenly a traitor to the people; brilliant young Hans Otto, who had been the rising star of the previous season, lay crumpled in the yard of an SS barracks — yes, Hans Otto, whose name had been on everyone’s lips, who had been talked about at every soiree, had been hailed as the “new Matkowski” that the German stage had so long been waiting for. He had “thrown himself out of a fourth-floor window in a moment when the guards had been distracted,” they said. A famous cartoonist, whose harmless drawings had brought laughter to the whole of Berlin every week, committed suicide, as did the master of ceremonies of a well-known cabaret. Others just vanished. One did not know whether they were dead, incarcerated, or had gone abroad — they were just missing.

The symbolic burning of the books in April had been an affair of the press, but the disappearance of books from the bookshops and libraries was uncanny. Contemporary German literature, whatever its merits, had simply been erased. Books of the last season that one had not bought by April became unobtainable. A few authors, tolerated for some unknown reason, remained like individual ninepins in the wreckage. Otherwise you could get only the classics — and a dreadful, embarrassingly bad literature of blood and soil, which suddenly sprang up. Readers — always a minority in Germany, and as they were daily told, an unimportant one at that — were deprived of their world overnight. Further, since they had quickly learned that those who were robbed might also be punished, they felt intimidated and pushed their copies of Heinrich Mann and Feuchtwanger into the back rows of their bookshelves; and if they dared to talk about the newest Joseph Roth or Jakob Wassermann they put their heads together and whispered like conspirators.

Many journals and newspapers disappeared from the kiosks — but what happened to those that continued in circulation was much more disturbing. You could not quite recognize them anymore. In a way a newspaper is like an old acquaintance: you instinctively know how it will react to certain events, what it will say about them and how it will express its views. If it suddenly says the opposite of what it said yesterday, denies its own past, distorting its features, you cannot avoid feeling that you are in a madhouse. That happened. Old-established democratic broadsheets such as the Berliner Tageblatt or the Vossische Zeitung changed into Nazi organs from one day to the next. In their customary, measured, educated style they said exactly the same things that were spewed out by the Angriff or the Völkischer Beobachter, newspapers that had always supported the Nazis. Later, one became accustomed to this and picked up occasional hints by reading between the lines of the articles on the arts pages. The political pages always kept strictly to the party line.

To some extent, the editorial staff had been replaced; but frequently this straightforward explanation was not accurate. For instance, there was an intellectual journal called Die Tat (Action), whose content lived up to its name. In the final years before 1933 it had been widely read. It was edited by a group of intelligent, radical young people. With a certain elegance they indulged in the long historical view of the changing times. It was, of course, far too distinguished, cultured, and profound to support any particular political party — least of all the Nazis. As late as February its editorials brushed them off as an obviously ephemeral phenomenon. Its editor in chief had gone too far. He lost his job and only just managed to save his neck (today he is allowed to write light novels). The rest of the editorial staff remained in post, but as a matter of course became Nazis without the least detriment to their elegant style and historical perspective — they had always been Nazis, naturally; indeed better, more genuinely and more profoundly so than the Nazis themselves. It was wonderful to behold: the paper had the same typography, the same name — but without batting an eyelid it had become a thoroughgoing, smart Nazi organ. Was it a sudden conversion or just cynicism? Or had Messrs. Fried, Eschmann, Wirsing, etc. always been Nazis at heart? Probably they did not know themselves. Anyway, I soon abandoned the question. I was nauseated and wearied, and contented myself with taking leave of one more newspaper.

All the same, the temptation to seal oneself off was a sufficiently important aspect of the period for me to devote some space to it. It has its part to play in the psychopathological process that has unfolded in the cases of millions of Germans since 1933. After all, to a normal onlooker most Germans today exhibit the symptoms of lunacy or at the very least severe hysteria. If you want to understand how this came about, you have to take the trouble to place yourself in the peculiar position in which non-Nazi Germans — and that was still the majority — found themselves in 1933, and try to understand the bizarre, perverse conflicts they faced.

The plight of non-Nazi Germans in the summer of 1933 was certainly one of the most difficult a person can find himself in: a condition in which one is hopelessly, utterly overwhelmed, accompanied by the shock of having been caught completely off balance. We were in the Nazis’ hands for good or ill. All lines of defense had fallen, any collective resistance had become impossible. Individual resistance was only a form of suicide. We were pursued into the farthest corners of our private lives; in all areas of life there was rout, panic, and flight. No one could tell where it would end. At the same time we were called upon, not to surrender, but to renege. Just a little pact with the devil — and you were no longer one of the captured quarry. Instead you were one of the victorious hunters.

That was the simplest and crudest temptation. Many succumbed to it. Later they often found that the price to be paid was higher than they had thought and that they were no match for the real Nazis. There are many thousands of them today in Germany, Nazis with a bad conscience. People who wear their Nazi badges like Macbeth wore his royal robes, who, in for a penny, in for a pound, now find their consciences shouldering one burden after another, who search in vain for a way out, drink and take sleeping pills, no longer dare to think, and do not know whether they should rather pray for the end of the Nazi era — their own era! — or dread it. When that end comes they will certainly not admit to having been the culprits. In the meantime, however, they are the nightmare of the world. It is impossible to assess what these people might still be capable of in their moral and psychological derangement. Their history has yet to be written.