As I’ve gone through my first week at ITP, I keep returning to Slavoj Zizek’s critique of the philosophy of Donald Rumsfeld. Rumsfeld stated that, in the leadup to the war, we were faced with known knowns (things we know we know), known unknowns (things we know we don’t know) and unknown unknowns (things we don’t even know that we don’t know), where presumably the real danger lies. Zizek points out that he misses the logical fourth state, which would be “unknown knowns.”
In Zizek’s reading, the “unknown knowns” are suppositions, tendencies, reactions and practices that are not acknowledged or scrutinized (hence “unknown”) but which are integrally part of our operations (“known” in the sense of fixed/assured and not subject to change). This creates a picture of an actor who presumes he is working from a defined set of parameters to create an ostensibly predictable outcome, while in fact he is working with some other parameters which he neither acknowledges nor understands, but which nevertheless affect the outcome and render it unpredictable — in Rumsfeld’s case, catastrophically so.
I feel like this dovetails quite handily into discussions of interactivity. As a complete amateur, I see building interactivity as the process of determining how a person enters parameters to affect the outcome of an event, which is then in turn edifying, entertaining or in some sense meaningful to the person. Our job as designers is both to come up with the skeleton of an interaction (desire -> process -> result) and figure out how to allow the participant to “create” that interaction for him- or herself.
Code on a screen is terribly appealing because, in its stripped-down form, it ostensibly allows the programmer total control over what goes into an event, the processes that take place within it, and the output/result. By limiting operations to strictly defined and digitally enforced mathematical operations, it presumes to get rid of both the unknown unknowns and unknown knowns (though of course, that’s an illusion too, as anyone who has ever used buggy software can attest).
But when we begin to deal with devices that will accept input from the physical world, we encounter these other problems, a giant host of unknown unknowns and unknown knowns on every level, which are paradoxically the most difficult and the most exciting aspect of interactivity. From the environment (ambient light, temperature, background noise/vibration) to the psychology of the user (what does he expect from this machine? what does she want that tool to do? is that a natural motion for someone who wants that outcome?), devices that draw on physical cues need to be intuitive and flexible to process both things that the machine is not expecting and things that the user doesn’t realize are happening.
And they can also play with this background of unconscious operation to illuminate things that have never been shown before. How many people ten years ago realized how many steps they took in a day, or what their resting heart rate was, vs. now?
But at the same time, how does a human use this information? Ten years ago, I never had to worry about how many steps I took in a day — what does it mean to me? What should it mean to me? I was always taking steps and my heart was always beating at some non-zero rate — what good does it do me to know the figures?
As we progress into a world of intensive data surrounding every aspect of our lives, we will begin to need mental/spiritual cleaning systems that order and organize the cacophony and help us to forget things. In some sense, it’s a shift from addressing diseases of privation to diseases of plenty.
Turning to the readings, I was struck by how different the world is now from the world Crawford describes in his article – amazing to see how through-the-looking-glass we’ve gone in only a dozen years. He speaks dismissively of the unthinkable case of “Nintendo Refrigerators” and yet we now have refrigerators with screens in them, refrigerators that can become freezers at the touch of a button, ovens that refrigerate your food all day until they start baking it in the afternoon…
I am intrigued by his model of a device acting as a participant, which listens, thinks and returns a new stimulus to the other participant, as the definition of interaction. It’s a good one, though in contrasting it with a non-interactive book or painting, Crawford credits the “thinking” to the program/process/device itself (which can listen and respond in the moment) rather than to the human creator or creators of the program (who can’t), which is sort of a dangerous assumption. I suppose on a deeper level of interactivity (and one that we’re coming ever closer to) the “thinking” can itself be defined by the user/participant, though that risks becoming something like a funhouse mirror.
But I guess I’ve also been thinking about a technological interaction as being exclusively between a single human participant and a nonhuman device or series of devices, which is flawed. There are of course interactions with multiple human participants, in which the device is there to modulate the contact between them. And this is obviously the lion’s share of the interactivity we see today.
The Rant is brilliant, so brilliant that I don’t have much to say about it. I think that “things your hands know” (as opposed to “things your eyes know”) defines a huge and meaningful subset of the “unknown knowns,” and probably the ones that are the most fun and rewarding to dig out and play with. Of course, the challenge it posits is how to create physical, hand-knowable inputs that are as mutable as points of light on a screen, but that’s the challenge we need to take up. And maybe mutability isn’t the be-all and end-all of design. Maybe there is a place for old-fashioned permanence in the world of Things.