Beneath the Surface...

Douglas Edric Stanley

2007.05.31

Ok people, you can stop sending me emails about Microsoft Surface. I’ve seen it already. And as I mentioned in this interview and this one, the experimentation phase of interactive surfaces is over. Everyone knows that Microsoft is pretty much the last cog in the technology wheel. When they’ve figured it out, well, that means that just about everyone else figured it out some time ago.

I love that historical timeline on the Surface web page. NO REALLY EVERYONE, LOOK, WE THOUGHT OF THIS BEFORE THE IPHONE. NO, HEY, WHY ARE YOU LAUGHING? IT’S TRUE! The funny thing about Microsoft is that they are still actually sincere after all these years. They just don’t get the joke. They really do think that they have invented all these technologies, only they just weren’t savvy enough to make people realize it. For example, to them, OS X’s interface is actually a rip-off of ideas they were already working on in Vista and not the other way around. Back in my little bubble world, we digital artists are always suffering from the same illness — it’s in fact our favorite sport (oh, I was doing that years ago) — but it’s even funnier to see one of the richest companies in the world fretting over their public image: gosh, if people only knew!

But kidding aside, this is a really good thing. I said in the above interviews that when Jeff Han’s solution was shown, it was officially over for surface innovation. I called them Hypertables, Hypersurfaces and Object Oriented Objects; MIT people called them Things That Think, amongst other terms (and ages before me); and then before all that there were Bill Buxton and Myron Krueger. So none of this is new. But what we needed was a starting block, a sort of ok, fiddling’s over, time to use this stuff. Jeff solved the fundamental visual-gestural language, and all we had to do from there was to start using it.

I should also mention here what got cut out of the Fast Company interview, in response to the question « are hypertables the replacement for the keyboard/mouse combination? » My answer to that was « look at the Wii ». You cannot separate the iPhone introduction from the introduction of the Wii controller. Both are looking to physicalize algorithms, to make algorithms physically malleable, and as far as that goes, the field is still wide open. Keyboards and mice are still workable, so they probably won’t die, no, because people will be writing things for a long time to come. Neither the Wii, nor the iPhone, nor Surface will help you write your blog. Maybe your video blog, but not your text blog.

Or maybe a million little things will complement the keyboard and mouse, or maybe just a half-dozen solutions will turn out to be modular enough to solve most of the things we will want to do. Or maybe Cronenberg is right, and it’ll be your body itself. But in my opinion 1) physical objects are good for abstract thinking, and 2) no single object will be fully modular enough for all uses. There will not be one single system, although touch will indeed solve quite a few of the old ones. But whatever the case, the interfacing will have to happen algorithmically. And when it comes to interfacing algorithmically, nothing beats the Rubik’s Cube.

So now we are finally seeing real-world hypersurfaces that we can work with. Personally I was expecting Apple to solve the commercialization problem first, and maybe they will. With that $5000+ price tag, Surface still feels like vaporware. But I don’t think Microsoft will have any problems shipping at the end of the year as they predict. Trust me, this is very easy technology. For my installation at the Pompidou Center in 2004, for example, I solved my lighting problem with a 5€ bathroom lamp from the BHV down the street. Now, if I can make Hypertables with household appliances, Microsoft can probably commercialize the thing with more professional processes.*

I’m also intrigued that so many people are offering the same solution. That more or less solves the patent problem right there.

Also, Vista is running behind Surface, and while I think Vista is oh-so OS X 10.2 (which is still just a fancy NeXT machine), it’s ultimately great news that there’s a boring old operating system sitting under that coffee table. Running Processing or Flash or vvvv or whatever on top of it shouldn’t be all that hard.

This is going to sound bad, but personally I’ve got about a five-year head start on what works and what doesn’t in these touch contexts, and plenty of ideas that have just been waiting for the technology to become a reality. But I’m also a little bored with it as well, so we’ll see if I invest a new round in this technology. Our crew has its work cut out for it whatever the case: neither Microsoft, nor Apple, nor Perceptive Pixel for that matter, has proposed any tangible experience with this technology. So far, we’re just talking about « interfaces ». So artists still have a lot to offer in this field.

So thanks Microsoft. I guess I’m trying to say thanks for being so reassuringly tweed coat and making this technology feel like Daddy’s old jalopy…

Original Comments:

2007-06-05 17:38:42

Peter Kirn

Yes, done, really? I agree on your head start — and on the boredom with novelty — but I’m still thinking about new ways to use the mouse. Surely there’s a lot of potential in different kinds of gestures and people making actual art … or at least, once things enter the realm of art, just about any technology can become limitless, freed of rules / market realities.

I have known some good people working for Microsoft Research, which is where I think Surface came from. So, tweedy as the big company is, the researchers may have done good things.

Maybe novelty wearing off is a good thing. :)

2007-06-05 20:04:20

Douglas Edric Stanley

I should qualify what I mean by done, game over, whatever you want to call it. I should also highlight that I think this is more about the beginning of a new phase than the end of an old one.

Allow me a long roundabout answer: essentially, my thinking is informed by the logic of what in French is called the « dispositif ». If you have studied a little film theory you probably know what I’m talking about — it’s a fairly well-known mind-set. Basically, within a thinking informed by the dispositif (apparatus as ideology, as epistemological framework, as thinking machine), there is a complex relationship between the content and the dispositif that houses the content — i.e. the machine that makes the content possible. This relationship is complex because the dispositif also imprisons the content, and makes certain content creators and consumers desire to transgress this framework, which they achieve to a greater or lesser degree, depending on the context, the author, the audience, etc. Whatever the case, there is a dialectic at work.

When it comes to algorithmic forms, I think we are in the same position as early cinema, only in the case of these newer, more modular machines, there is an explosion not of one new medium, but of thousands. With each explosion, a new dispositif forms and brings with it its own framework of normalisation, standardisation — which is not the same thing as homogenisation. In the case of cinema, no one confuses Hitchcock with Sokurov, but they are still working with 24 frames per second, at least when it comes to projection. I remember at Cannes a few years back, Sokurov was pleased to have projected his film on one single reel - no cuts. An admirable feat, no doubt, but the audience still sat in the same dark theatre, facing forward, white screen with sound system, etc. Go to any new media festival and you will see similar technological hoop-jumping by the busload, sometimes for a single work.

Hypersurfaces, or multi-touch screens, are therefore not in opposition to artistic forms; they are new possibilities for artistic forms. But the artist therefore has to accept the intrinsic ideology (or framework) of the apparatus, or at least accept, as Godard did with cinema, to work as a revolutionary within a predisposed apparatus. Radicalizing from within a framework, rather than building a wholly new one. Perhaps in the case of Godard, he even achieved a re-definition of the apparatus itself, but all within the technological confines given to him nevertheless.

The interesting thing about so-called “new media” is that as artists we have been able to build both the content and the apparatus, sometimes before industry — or in my case with Surface, at the same time as industry. At least this is why I’ve been interested in this field. I desire no market freedom, no detachment from market reality. Cinema, for example, is not free of market pressures; it merely creates its own market pressures for its own advantage. I admire cinema for that pragmatic approach to art-making.

When I made my hypertable (hardware), it was coupled with a generative film platform (software), as well as a non-linear narrative (content). When I worked on the project (2001-2004), and its predecessor (1997-2000), I was not interested in one or the other, I was interested in all three. But if Microsoft wants to build a distributed platform of hardware upon which I can distribute my software+content hybrid, shit, I’m all for that! They’ve basically done 1/3 of the installation work for me, and honestly, it’s the least interesting part!

At one point Apple Research brought some people to see my system, as did some other big media companies. If I were intelligent, I probably could have joined them in their efforts. But that’s not what I’m interested in: I’m more interested in this content/software/hardware relationship. Jobs quotes Kay, who claims that people who believe in software need to build their own hardware. I absolutely join them in that dream, only I add the content too: anyone who is passionate about content in an algorithmic world needs to be thinking about the hardware and software it runs on.

To go back to the film-maker metaphor: imagine a 1900 film-maker learning that there are now thousands of movie theatres opening up all over the world. That’s a pretty exciting prospect.

When I said I was bored, it really just meant that I need to re-motivate myself to go fight that fight…

As for Microsoft: of course there are good people doing good work at Microsoft Research. In fact, Microsoft has historically employed some of the true pioneers in the field of computer science.