Press "Enter" to skip to content

What will make this work

So… this entry is a little disjointed, having come into being more as a brain-fart than as any sort of real composition. There’s some substance in here, though, I think.

I don’t like Second Life. I don’t like the idea of Second Life. Life in Second Life, to me, necessarily entails giving up one’s first life, or at least part of it, to a largely isolated virtual world. But one can do cool things in a virtual world like Second Life, and the integration of real-world objects with counterparts in virtual space constitutes a very interesting recent development. What I want is a system that takes the “magical” abilities of modern virtual environments and brings them into the real world, where I can experience them without sitting in front of a computer, or even looking through a magic “portal” device like a phone. I’d also like to be able to place and organize my data around myself without confining it to a 2D plane in a fixed location in front of me.

I see the following technologies as options for a modern prototype AR rig:

Inertially/optically tracked HMD or HUD with stereoscopic cameras (see Vuzix and Lumus Optical). On-chip stereoscopic depth-mapping would be ideal.

An upper-back backpack unit containing something small, like a BeagleBoard and an SSD. A light netbook would do the trick for prototyping purposes, too. The upper-back unit would include geolocative, inertial, and networking systems. There’s no reason why somebody couldn’t design a tiny system to do this right now.

There are lots of ways to interface with the hardware. Inertial sensors are getting smaller and more advanced. Optical interfaces, like projected touch-surfaces or optical gesture recognition, are also options, and there are others. Let’s assume near-term, inexpensive, real-time full-body interaction capture; a rough sketch of the inertial head-tracking side of this follows this list.
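To make that assumption a little more concrete, here is a minimal sketch, in Python, of the kind of complementary filter a rig like this might run to fuse gyroscope and accelerometer readings for head tracking. Everything here is hypothetical and simplified to a single pitch axis; a real tracker would estimate full 3D orientation.

    import math

    def accel_to_pitch(ax, ay, az):
        """Pitch angle (degrees) implied by the accelerometer's gravity vector."""
        return math.degrees(math.atan2(-ax, math.sqrt(ay * ay + az * az)))

    def fuse_pitch(prev_pitch, gyro_rate_dps, accel_pitch_deg, dt, alpha=0.98):
        """Complementary filter: the gyro is smooth but drifts; the
        accelerometer is noisy but absolute. Blend the two estimates."""
        gyro_estimate = prev_pitch + gyro_rate_dps * dt  # integrate angular rate
        return alpha * gyro_estimate + (1 - alpha) * accel_pitch_deg

    # Usage: call once per sensor sample, e.g. at 100 Hz (dt = 0.01 s).
    pitch = 0.0
    pitch = fuse_pitch(pitch, gyro_rate_dps=1.5,
                       accel_pitch_deg=accel_to_pitch(0.0, 0.0, 9.81), dt=0.01)

The same blend, with the optical tracker standing in as the slow-but-absolute reference, is what would keep overlays from swimming as the head moves.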

The software to drive this system could consist of a 3D GLUT interface sitting on top of a Linux kernel, capable of displaying flat information as an overlay on the HUD and of positioning and rendering stereoscopic virtual objects within the user’s field of view. Virtual objects should have two modes of existence: one in which they exist solely for the user of a given system, within their local database, and one in which objects are stored as entries in a relational database such that they can be retrieved by their coordinates when a client system comes within a certain proximity. These database tables would be divided into multiple channels, so that objects can exist on different “planes” or “channels” of reality to which one can “tune”. Clients within the proximity radius of other clients would make available the visual and positional states of the public objects currently within their users’ spheres. Interfaces and so-designated data would remain private and invisible to other users.
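As a sketch of what that two-mode, channelized object store might look like (all names here are mine, not any existing system’s; a real deployment would sit behind an actual relational database):

    import math
    from dataclasses import dataclass, field

    EARTH_RADIUS_M = 6_371_000  # mean Earth radius, meters

    def haversine_m(lat1, lon1, lat2, lon2):
        """Great-circle distance between two lat/lon points, in meters."""
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
        a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
        return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    @dataclass
    class WorldObject:
        object_id: str
        channel: str           # the "plane" of reality the object lives on
        lat: float
        lon: float
        public: bool = True    # private objects stay invisible to other users
        state: dict = field(default_factory=dict)  # visual/positional state

    class ObjectStore:
        """Stands in for the relational table keyed by coordinates and channel."""
        def __init__(self):
            self.objects = []

        def add(self, obj):
            self.objects.append(obj)

        def nearby(self, lat, lon, radius_m, channels):
            """Public objects within radius_m, on channels the client is tuned to."""
            return [o for o in self.objects
                    if o.public and o.channel in channels
                    and haversine_m(lat, lon, o.lat, o.lon) <= radius_m]

    # Usage: a client tuned to the "art" channel queries its proximity radius.
    store = ObjectStore()
    store.add(WorldObject("sculpture-1", "art", 40.7711, -73.9742))
    visible = store.nearby(40.7710, -73.9740, radius_m=100, channels={"art"})

The purely local mode of existence is the same structure minus the shared table; the client simply never publishes those rows.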

Fiducial markers or pulsed LED beacons should be used as spatial orientation references in closed environments where positioning may be difficult. The objects themselves should remain purely virtual, without markers to represent them. Offline virtual objects could be placed as QR codes for on-location importation, with position data expressed as geocoordinates or as positions relative to fiducial markers in a given space.
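As a purely hypothetical example of what such a QR code might encode, a small JSON record covering both addressing schemes would do; none of these field names come from any real spec, and the URL is a placeholder:

    import json

    # Absolute placement: the object lives at fixed geocoordinates.
    payload_absolute = json.dumps({
        "object_id": "plaque-17",
        "channel": "public",
        "position": {"type": "geo", "lat": 40.7484, "lon": -73.9857, "alt_m": 10.0},
        "model_url": "http://example.org/models/plaque-17.obj",  # placeholder
    })

    # Relative placement: an offset (meters) from a named fiducial marker.
    payload_relative = json.dumps({
        "object_id": "plaque-17",
        "channel": "public",
        "position": {"type": "fiducial", "marker_id": "lobby-marker-3",
                     "offset_m": [0.5, 1.2, 0.0]},  # x, y, z from the marker
        "model_url": "http://example.org/models/plaque-17.obj",
    })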

There are myriad applications I’d love to see deployed on a wearable platform integrating these features. One more frivolous possibility would be a music control interface, which I imagine as something like an immersive Lemur… a Space-Lemur, if you will. Instead of interfacing with objects on a multi-touch display panel, one could manipulate floating virtual objects that represent different elements of a composition or performance, and do things like distorting a channel or sample by actually grabbing a waveform in space and molding it.
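To make the waveform-molding idea slightly less hand-wavy: the grab could simply map hand displacement onto a resampling warp of the buffer around the grab point. A toy sketch, with every mapping choice arbitrary:

    def mold_waveform(samples, grab_index, stretch):
        """Stretch > 1 pulls samples apart around grab_index; < 1 compresses.
        Nearest-neighbor resampling keeps the toy readable."""
        n = len(samples)
        out = []
        for i in range(n):
            src = grab_index + (i - grab_index) / stretch  # un-warp the index
            j = int(max(0, min(n - 1, src)))
            out.append(samples[j])
        return out

    # Usage: the hand pulled apart by 20%, stretching the region around sample 512.
    warped = mold_waveform(list(range(1024)), grab_index=512, stretch=1.2)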

The implications for gaming would be endless, obviously. Rather than the pure first-person shooter (essentially laser-tag) envisioned by so many, my fantasy involves something more like Tom Clancy’s EndWar, with the player positioned with a first-person POV on the ground, directing virtual soldiers and units around them. A game like this could be played, for instance, for territory in Central Park.

Virtual sculpture and artistic “urban enhancement” are also obvious uses for this technology, creating the option of tuning one’s reality to a version populated with content to one’s liking. It would also be nice if something like this integrated Steve Mann’s purported system for identifying obtrusive branded advertising and replacing it with soothing ambient images, or something else entirely.

In terms of practical applications for such a system, the most obvious use is for displaying contextual information. It would also allow one to project a virtual telepresence avatar into a meeting space anywhere on the globe, assuming that those present were equipped to perceive and similarly project their own avatars back into the shared overlay.

One could also project one’s avatar into the space above a city like New York, which has been thoroughly modeled in Google Earth. The user’s friends might have a notification or tracking system showing the virtual position of the avatar relative to their real positions. They could then project their own avatars up to meet and hold a conversation. Other friends’ locations could be denoted with reticules overlaid on the virtual city beneath, aiding coordination and planning. Upon deciding on a destination at which to meet in real life, a “yellow brick road” could be displayed as a possible route to the meeting spot for others who might wish to join. Notifications could then be displayed to the meeting’s creators as others locked onto the virtual breadcrumb paths, indicating an intention to join the party.
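The reticule overlay in that scenario is mostly simple geometry: convert a friend’s geocoordinates into a bearing and distance from the user, then compare the bearing against the user’s heading to place the marker in view. A rough sketch (all of it hypothetical), using a flat-Earth approximation that is fine at city scales:

    import math

    METERS_PER_DEG_LAT = 111_320  # approximate

    def bearing_and_distance(user_lat, user_lon, friend_lat, friend_lon):
        """Bearing (degrees clockwise from north) and distance (m) to a friend."""
        dy = (friend_lat - user_lat) * METERS_PER_DEG_LAT
        dx = (friend_lon - user_lon) * METERS_PER_DEG_LAT * math.cos(math.radians(user_lat))
        return math.degrees(math.atan2(dx, dy)) % 360, math.hypot(dx, dy)

    def reticule_offset_deg(bearing, user_heading):
        """Signed horizontal angle of the reticule from the center of view."""
        return (bearing - user_heading + 180) % 360 - 180

    # Usage: a friend to the north-east of a user who is facing due north.
    b, d = bearing_and_distance(40.7812, -73.9665, 40.7850, -73.9620)
    offset = reticule_offset_deg(b, user_heading=0.0)  # ~42 degrees to the right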

Or how about being able to spot available real estate that matches one’s criteria, when in the market, simply through icons and information overlaid on the exterior of a building? One could see the details of the available units by gesturing to the building in a certain way so as to expand the view of the data being presented. The “window” displaying this information could be taken in hand, transferring its object and coordinate data into one’s local database. At that point the window would become part of the sphere in which objects can be positioned within reach of the user, which travels with the user’s perspective. Otherwise, the window would continue to float in place, and one could walk away from it, leaving it to close the session by itself once the client rig passed beyond its proximity radius.
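Taking the window in hand is essentially a re-parenting operation: its anchor switches from world coordinates to an offset from the user, so it rides along in the personal sphere. A minimal sketch of that hand-off (the names and coordinate frames are mine):

    from dataclasses import dataclass

    @dataclass
    class InfoWindow:
        anchor: str       # "world" or "user"
        position: tuple   # world coords, or an offset from the user once grabbed

    def grab(window, user_position):
        """Re-parent a world-anchored window so it travels with the user."""
        if window.anchor == "world":
            offset = tuple(w - u for w, u in zip(window.position, user_position))
            return InfoWindow(anchor="user", position=offset)
        return window

    def render_position(window, user_position):
        """Where to draw the window this frame."""
        if window.anchor == "user":
            return tuple(u + o for u, o in zip(user_position, window.position))
        return window.position

    # Usage: grab a listing window floating at a building, then walk off with it.
    w = grab(InfoWindow("world", (10.0, 2.0, 5.0)), user_position=(9.0, 1.7, 4.0))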

This doesn’t even begin to cover incorporation of data from things like Sense Networks’ CitySense.

Really, the possibilities are endless, and what’s missing is, in my opinion, mostly a matter of integration.

Also, because I said I’d touch upon it:

In William Gibson’s relatively recent novel “Spook Country” (not his best work by any means… though I do love my GSG-9s), he incorporates an augmented-reality technology referred to as “Locative Art”. Locative Art has picked up some steam as a term in online research circles, and seems to have already been in some use. In this fictional account, virtual, non-interactive sculptures depicting the deaths of certain celebrities where they occurred are hosted from hidden on-location WAPs with onboard servers, and are only visible through a self-contained set of see-through display glasses. The technology is used to visualize other spatial data in the story, but is never developed to its full potential. It’s an interesting fictional study of geolocatively fixed data manifesting itself in the real world, but the theme doesn’t get the treatment it deserves once mixed in with Gibson’s other preoccupations. He may not have wanted to push his current readership too hard, or perhaps he thought that a hack like this best suggests the brink on which we find ourselves. I didn’t think all of his particulars rang true, but any AR in serious fiction is welcome. Those who haven’t been following this line of development may soon find his writing prescient-seeming, but to me, the choices he made for his hypothetical system are a fine example of why it can be harder to get near-future fiction right than stories set in worlds unfamiliar in more ways.

I know that I said I’d talk about companies that I think could play a big role. InvenSense is one of them. Vuzix and Lumus are others (though it’s hard to know, since even Apple has filed a patent for stereoscopic display glasses). There are lots more, and I’ll get to them. And incidentally, I like the timing of the TI Pico Projector module for the Beagle and the Wear Ur World announcements. There’s no connection obvious to me, but it’s a nice juxtaposition of events. Interesting things are happening.