
Is Google Glass an Augmented Reality Device?

No. But it’s close. (See the bottom of this post for a little addendum.)

Augmented Reality, as a field, has been threatened with co-option of its name in the past. There may even have been some angst a few years ago about using the term to describe what most people call Mobile Augmented Reality on smartphones.

AR is, in its true form and ideal implementation, the seamless visual fusion of virtual objects and data with the real environment, by way of overlay through optics that can simulate all of the visual characteristics by which we perceive physical objects in the real world. That’s the ideal. It’s okay to refer to less-than-ideal analogues as AR devices because that ideal doesn’t exist yet. But they’re stand-ins until the necessary hardware exists.

Is a mobile phone a legitimate Augmented Reality device? Yes. One looks through the “magic window” of the screen and that becomes the user’s active Field of View (FOV). It lacks depth and the full FOV of the human eye, but one can hold it directly between one’s eyes and the subject area at which one is looking. Something like the Nintendo 3DS goes one better, since it adds stereoscopy to the experience, but it’s still far from ideal.

I know I’m belaboring this point, but for those who have tried the Oculus Rift, let me make an analogy: Imagine that you’re really standing in the yard of the villa that inspired the Rift’s Tuscany demo scene, but with no fountain there. You’re wearing VR goggles with a selectively transparent display element and lenses that don’t distort anything that’s behind that see-through display. When you aren’t looking at the fountain’s position, the goggles don’t display any of the scene, but the computer to which they are attached has the scene’s model geometry in its memory, and the model is an accurate one-to-one representation of the real space. So here’s what we’re using:

  • GPS (for initial rough positioning so the system knows that the villa model and accompanying data is what it should be using)
  • Head-tracking data from sensors like the MPU-6050 found in the Rift, or the MPU-9150 in Glass (it’s the same chip with the addition of a third-party magnetometer built into the package… incidentally a chip for which I wrote a sloppy but ground-breaking hack). This is mostly to make the next step easier. Because inertial and magnetic sensors are inevitably subject to at least some error (accumulated integration error for the inertial sensors and magnetic field distortions for the magnetometer), especially when trying to measure linear translation as opposed to orientation, this is not really how you want to determine where the user is looking. But having a good guess reduces, by an order of magnitude, the number of possible perspectives against which you need to try to match data from the visual sensors (see the sketch after this list).
  • Visual data from cameras (stereo cameras, depth cameras using code like Kinect Fusion, single cameras with really slick SLAM algorithms like PTAMM or 13th Lab’s PointCloud™ SDK… interpreted by the CPU, or a dedicated vision processor… whatever… doesn’t matter) to precisely register the position of the virtual field of view with what’s actually in front of you.
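
To make that last point concrete, here’s a minimal sketch (my own illustration in Python, not code from the Rift, Glass, or any shipping tracker; the function and field names are hypothetical) of using a coarse IMU orientation estimate to prune the set of stored reference viewpoints that the expensive visual matching step has to test:

```python
import numpy as np

def angular_distance(R_a, R_b):
    """Angle in radians between two 3x3 rotation matrices."""
    cos_theta = (np.trace(R_a.T @ R_b) - 1.0) / 2.0
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

def candidate_views(imu_rotation, reference_views, tolerance_rad=np.radians(15)):
    """Keep only the stored reference views whose camera orientation falls
    within the IMU estimate's error bound. The expensive feature matching
    then runs against this short list instead of every possible perspective."""
    return [view for view in reference_views
            if angular_distance(imu_rotation, view["rotation"]) < tolerance_rad]

# The precise pose still comes from matching camera data against the surviving
# candidates, not from the IMU itself; the IMU just shrinks the search space.
```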

So the system knows where you are, the direction in which you’re looking, and precisely what your field of view is. The optics have the capability of displaying virtual objects with a real sense of depth, like the Rift, but don’t block out your view of reality except where displaying virtual objects. You look where the fountain should be, the system draws it with the correct focal depth where it should be, and boom, your perception of the reality that is the yard around you has been augmented with a virtual fountain that looks like it’s really there… until somebody walks between you and said fountain. The system needs to be capable of perceiving that an object has passed into the portion of your FOV where the fountain exists, and that that object exists at a closer depth than the one to which the fountain is registered. With that data, it needs to apply a stereoscopic occlusion mask over the fountain, in the shape of the outer contours of the occluding object. Now the person between you and the fountain is visible through the person-shaped hole punched in the rendering of the fountain. Because the focal depth of the remaining visible portion of the fountain is correct, and your occlusion mask is perfect, the person appears to walk in front of the fountain. Oh yeah, don’t forget to make the lighting of the fountain match the lighting of the real place. And also don’t forget to capture the shadow of that person occluding it and remap it if it would fall on the fountain. And if that other person is wearing the same AR system as you, tuned to the same channel, your system had better show you the virtual splash when they throw a virtual rock into it. Never mind the reflection of the scene in the water 0_0.
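
The occlusion step itself is conceptually simple, even though getting good inputs for it is not: compare the real scene’s per-pixel depth against the virtual object’s depth and knock out the virtual pixels wherever reality is closer. Here’s a minimal numpy sketch of that idea (purely illustrative; a real system would do this per eye, per frame, on the GPU):

```python
import numpy as np

def apply_occlusion(virtual_rgba, virtual_depth, real_depth):
    """virtual_rgba: HxWx4 rendering of the fountain, alpha in the last channel.
    virtual_depth, real_depth: HxW depth maps in the same units and camera frame.
    Returns the fountain image with holes punched wherever a real object
    (say, a person walking by) is nearer than the virtual geometry."""
    occluded = real_depth < virtual_depth             # real object is in front
    out = virtual_rgba.copy()
    out[..., 3] = np.where(occluded, 0, out[..., 3])  # zero alpha = see-through
    return out
```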

Anyhow…

Proceed to populate your virtual environment with virtual objects. Don data-gloves or spatially tracked controllers, and whatever haptic feedback systems you have access to, and reach out and interact with those virtual objects. Or use a gestural mouse for a less seamless experience. Or use that depth camera on your head and be content to limit your interactions to those where your hands are visible to it.

And THAT is Augmented Reality. And it sure as hell ain’t easy.

So, back to our original question: Is Google Glass an Augmented Reality device?

Well, what’ve we got? We’ve got GPS. We’ve got an inertial measurement unit with a magnetometer. We’ve got a camera and a host processor capable of running some SLAM analysis on what we’re seeing. We have network connectivity with which to reference an online database of virtual objects and their precise coordinates in the real world. What we don’t have is the display. Google Glass is as much an Augmented Reality device as your phone is… IF you take your phone and hold it out, up, and to the right of your head and then glance over at it to see virtual objects overlaid on the 2D image of what is actually right in front of you. Or Google Glass is as much an Augmented Reality device as the GPS display you have suction-cupped to the windshield of your car beneath your rear-view mirror, which is to say not an Augmented Reality device at all, and not even close to the badass HUDs that are projected onto the windshields in front of the drivers of some newer vehicles. I wouldn’t say that even those are really Augmented Reality, but at least they’re real see-through overlays. I would say that the automotive HUD exhibited by MVS California a couple of years ago IS a real Augmented Reality system.

So no, Google Glass is not an Augmented Reality device. But a lot of the ingredients are there, and there will be lots of apps that can display useful contextual data up and off to the side of what you’re looking at. But that isn’t Augmented Reality. It’s something useful, it’s something in the same family, and it’s something that should be of interest to everybody who is interested in Augmented Reality, but it isn’t Augmented Reality. Some people think that Glass is a bad thing because the current focus is on the capture of images and video using the onboard camera, and that that is going to creep out the public and give a bad name to head-worn computers. I’m hoping that that focus will have evolved by the time the consumer version launches. Where I think Glass is of great importance to Augmented Reality is that it is set to be the first mass-produced consumer electronics device that places all of the necessary non-display components of a basic AR headset on people’s heads. The only thing missing is the correct display modality.

Addendum:
So I just had a conversation with Steve Feiner while on a conference call to prepare for a panel that will include both of us at Augmented World Expo in Santa Clara next month. He made the argument that with a rooted device (not limited to the Mirror API), the eyepiece slightly repositioned, and a bigger battery added (no problem; I sometimes carry an 18Ah backup battery with me anyhow), then sure, the Glass hardware could be used as a legit AR device. Stereoscopy is not a prerequisite for AR. But geometric registration of graphics with the scene is a prerequisite. So, arguably, the Glass hardware is capable of being used as an AR device… just not a very good one. So really, you shouldn’t want to use Glass as an AR device. But it will be a great contextual data display. And I suspect that the supported programmability of Glass will grow far beyond the Mirror API in short order. Keep in mind that there was no App Store on the first iPhone for a long time, and that developers were limited to web apps. I think that this is just Google’s attempt to curate and guide the experience for users and developers who aren’t hardware, interface, and kernel experimenters. It is a technology that will augment the human experience, but not with Augmented Reality. Maybe that will come with Glass Mk II, or from another company in the meantime. We’ll see.


Meta, an Ambitious AR Glasses Startup

Last night at ARNY, the New York Augmented Reality Meetup Group, there was an interesting presentation by a new startup called Meta.

They’re about to launch a Kickstarter campaign for an AR glasses development kit.

My first impulse was that they were biting off more than they could chew and promising something that they couldn’t reasonably deliver. Their current demo uses a set of Epson Moverio glasses and a low-latency camera capable of performing finger-tracking. It’s worth noting that they’ve already done something interesting here by feeding HDMI into the Moverio display. As far as I know, the standard Moverio doesn’t have a user-accessible video input, and Meta had the glasses being driven by electronics in an opaque laser-cut box. That box presumably contained the heavily hacked Moverio handset, or an Epson device made specifically for third-party hardware developers who want to feed their own signal into the display. And that’s where it gets interesting. Meta’s press release announces a partnership with Epson, which is the first big point to their credit. The second is that the esteemed Professor Steven Feiner, a long-time ARNY member, is also a member of their team. Third, their CEO claims to have a 30-page patent which he coauthored with Professor Feiner.

Trying out Meta’s demo hardware in its current experimental state

Meta has input from their camera running into a fun little Unity demo that superimposes little glowing transparent tracking blobs on your fingertips when you hold them in front of the camera. The demo is imperfect, but my impression is that it was a quick hack so they’d have something to show at the meetup. My biggest critique is that the video output in their demo isn’t scaled and cropped to align with the view through the glasses. I’d forgo displaying the camera feed and just use the camera’s tracking data to superimpose the overlaid tracking indicators on a black background, which should appear close to transparent (a rough sketch of the idea follows below). Only close, though, because all of these transparent display glasses are still using backlit LCD microdisplays, where the backlighting still leaks through pixels that are set to black. Eventually this will be addressed by using emissive-pixel microdisplays like those shown by Microoled.
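
For illustration, here’s a toy sketch in Python/OpenCV of what I’m suggesting: draw the tracking indicators on a black frame instead of the camera passthrough, and rely on the display treating black as (nearly) transparent. This is my own code, not Meta’s demo, and the fingertip coordinates are assumed to arrive from their tracker:

```python
import numpy as np
import cv2

def overlay_frame(fingertips, size=(720, 1280)):
    """fingertips: iterable of (x, y) pixel coordinates in display space.
    Returns a frame that is black everywhere except glowing dots at the tips."""
    frame = np.zeros((size[0], size[1], 3), dtype=np.uint8)   # black ~= transparent
    for (x, y) in fingertips:
        cv2.circle(frame, (int(x), int(y)), 12, (80, 255, 120), -1)
    glow = cv2.GaussianBlur(frame, (31, 31), 0)               # cheap "glow" around each dot
    return cv2.addWeighted(frame, 1.0, glow, 0.8, 0)
```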

When I asked Meta’s CEO, Meron Gribetz, whether he’d approached Primesense about the Capri sensor, he wouldn’t comment directly, but he did say that he’d been at CES, implying that he’d at least gotten a look at it. In its current form, the device isn’t suitable for outdoor use, but who knows what new sensor technologies might come along between now and the eventual release of this young company’s consumer product. That’s just me thinking out loud. The Meta folks were pretty hush-hush about what their future plans might hold.

Overall, Meta’s team has great energy and is admirably ambitious. It sounds like they’ve got the right patent, partnerships and people, so I’ve got high hopes for them. I’m looking forward to their Kickstarter campaign.

Hit up Meta’s website and check out their cool launch video.


Running into Sergey Brin on the Subway

Clearly today is the day I should publish my yearly blog post. I think I’ll make a two-parter, since most people coming here today are going to care mostly about my personal encounter with the co-founder of Google.

Sergey Brin on the 3 Train

Last night I ran into Sergey Brin on the subway ride home. I got on the downtown 3 express train at Times Square. I almost got into a different car, but switched to the next one because there were some people exiting slowly from the set of doors at which I was standing. I plopped myself down in an open seat, admittedly looking a little worse for wear after the two-hour bus ride down from a weekend in Woodstock. Now, I’ve already encountered a couple of people wearing Glass, and an acquaintance is actually a member of the UX team. I also met and spoke with somebody from Google X who was attending the Invensense Motion Interface Developers Conference at which I spoke last year. So I looked up and there was a fellow wearing a Glass unit. Cool. I’ve been to Google NYC for a tech talk (a great one about Street View) and I see Googlers on the subway periodically, so it wasn’t that much of a surprise. But… that guy sure looks a lot like Sergey Brin.

I asked if I could take his picture and he smiled and consented. I asked how the project was coming along and how he liked where it was right now. Of course he told me that he loved it and that it was coming along really well. Somehow, though, I just didn’t trust my own eyes enough to believe it was really Sergey Brin sitting across from me. I mean, I’ve seen the dude’s private jetliner with my own eyes while working out at NASA Ames in my previous job. What would he be doing on the subway? Aside from the fact that he has a ginormous corporate facility and an apartment here.

Anyhow, I asked if he was part of the core X team and he said that he was. He told me that there are about one hundred other people outside of X who have prototype devices. I told him that I was a Vuzix M100 developer and was looking forward to getting a dev unit and getting to do a side-by-side comparison with Glass. Actually, as it turns out, I inadvertently lied and told him that I was expecting to receive a dev unit shortly. The tracking number sent to me was actually for the M100 SDK, which arrived today. As I’ve signed an NDA, I can’t say anything about it, but it looks really good. I’m not sure when I’ll actually get my hands on the hardware.

Vuzix M100

M100, the Vuzix entry into the class of devices that will include Glass

But seeing as I wasn’t at Google I/O, I know for certain that I won’t be getting Google Glass Explorer Edition anytime soon. I told Mr. Brin that I know a few people who are eagerly looking forward to the Glass Foundry events. He told me that the Explorer Edition would be shipping out to devs in a couple of months. If I’d really been confident that it was him, I’d have given him my card and asked for an invite. I have been told several times today that I’m a punk for not having asked regardless. Oh well.

So we got to the 14th Street station and were still talking when he realized that it was his stop and jumped up. I bid him “take care” (by all accounts, he does), and that, as they say, was that. I took out my phone, looked at the pictures, and thought “yeah… that really was Sergey Brin, you dummy… couldn’t you have thought of something intelligent to say? Or told him that you’ve been working on building a wearable Human Interface Device accessory specifically suited to HUD applications?”

But I have a funny way of running into people, so I’ve got no regrets. I recently wired some Hasbro NERF Stampede guns up to some Neurosky headsets from a Mattel Mindflex Duel game to create a fun little mental face-off game. At CES, my girlfriend and I, by total coincidence, ended up sharing a cab with the designer of Mindflex Duel, who left Mattel and is now at Hasbro. I know that doesn’t quite compare, but I’m just saying that the universe seems to have a funny way of timing my random introductions.

Now it’s rather funny, all of this excitement about the upcoming consumer-ready HUDs. People keep talking about them in the context of Augmented Reality, which seems to cause confusion on several fronts. Yes, Google Glass is a see-through display, but it clearly isn’t the visual overlay that is necessary for “real” AR, and Google isn’t positioning it as such. There are still a lot of challenges to overcome before we can expect those. And those who are new to the term Augmented Reality, and to HMDs in general, frequently don’t seem to understand what a fixed focal depth means for these displays.

This post isn’t finished, but I’m hitting publish just to have something up for now.

Check out a small sampling of the work we do at my day-job.

As long as you’re here, check out this just-released music video that I helped make this summer. I used a bunch of Arduino Megas to drive about 250 fluorescent tubes to the cues in Robert DeLong’s first single. :-)

 

 


Google’s Goggles

In case you hadn’t noticed, there has been lots of press in the past couple of days about the rumor that Google is working on a pair of HUD glasses. I don’t doubt it, but having asked several Googlers about this at CES, I invariably got a sarcastic reply along the lines of “yup, and we’ve also got a space elevator coming out later this year.” But I’ve never spoken to somebody from Google X. I did hear an account of Sergey Brin spending a nice chunk of time at the Vuzix booth at the show, so HUD glasses are clearly on their radar, if nothing else.

One should note that there has been mention of image analysis being performed using cloud resources in Google’s scenario. This is part of the scenario that I envisioned after hearing Microsoft’s Blaise Aguera y Arcas introduce Read/Write World at ARE. While I haven’t heard anything about it since, I wouldn’t be surprised if it pops back up this year. What I think will happen is that a wearable system will periodically upload an image to a server that will use existing photographic resources to generate a precise homography matrix pinning down the location of the camera at the time that the image was taken. The GPS metadata attached to the image will provide the coarse location fix necessary to select a relatively small dataset against which to compare the image. Moment-to-moment tracking will be done using a hybrid vision and sensor-based solution. But at least in the first generation of such systems, and in environments that don’t provide a reference marker, I expect cloud-based analysis to be a part of generating the ground truth against which they track.
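
To be clear, that’s my speculation, not anything Google has described publicly. As a rough sketch of the pipeline I have in mind, using OpenCV feature matching, with fetch_reference_images() standing in as a hypothetical interface to the GPS-filtered image database:

```python
import cv2
import numpy as np

def localize(query_img, gps_latlon, fetch_reference_images):
    """Return (inlier_count, homography, reference) for the best-matching
    georeferenced image near the GPS fix, or None if nothing matches."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp_q, des_q = orb.detectAndCompute(query_img, None)
    if des_q is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    best = None
    for ref in fetch_reference_images(gps_latlon, radius_m=100):  # coarse GPS filter
        kp_r, des_r = orb.detectAndCompute(ref["image"], None)
        if des_r is None:
            continue
        matches = matcher.match(des_q, des_r)
        if len(matches) < 20:
            continue
        src = np.float32([kp_q[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_r[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        if H is None:
            continue
        score = int(inlier_mask.sum())
        if best is None or score > best[0]:
            best = (score, H, ref)
    return best
```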

Let’s do a little recap of some of the most notable HUD glasses options these days:

Vuzix STAR1200 – I got to try these out at ARE back in June of last year and was quite impressed. I’ve since picked up a pair and love them, with some caveats. Because they use backlit LCD microdisplays as opposed to an emissive technology like OLED, you don’t get perfect transparency in areas where the signal source is sending black. That means that if the glasses are on and you are sending them a blank black screen, you still see a slight difference between the display area and your peripheral vision. Also, the field of view (FOV) of the display area could definitely stand to be a little larger. The STAR1200 is intended primarily as a research and development device, and is priced accordingly at $5000. The device comes with a plethora of connectors for different types of video sources, including mobile devices such as the iPhone. The STAR1200 is the only pair of HUD glasses that I know of that comes with a video camera. The HD camera that it originally shipped with was a bit bulky, but Vuzix just started shipping units that come with a second, much smaller alternate camera that can be swapped in. The glasses also ship with an inertial orientation tracking module. Vuzix recently licensed Nokia’s near-eye optics portfolio and will be utilizing their holographic waveguide technology in upcoming products that will be priced for the consumer market.

Lumus Optical DK-32 – I finally got to try out a Lumus product at CES, and was quite impressed. I’ve spoken with people who have tried them in the past and, based on my experience, it looks like they’ve made some advances. The FOV was considerably wider than that on the Vuzix glasses, and both contrast and brightness seemed to be marginally superior. That said, you as an individual can’t buy a display glasses product from Lumus today, and they are very selective with respect to whom they’ll sell R&D models. You can’t buy the glasses unless you’re an established consumer electronics OEM, and it would set you back $15k even if you could get Lumus to agree to sell you a pair. I’ve heard that part of the issue is the complexity of their optics manufacturing process. As I was several years ago, I’m looking forward to seeing a manufacturer turn the Lumus tech into a consumer product.

Seiko Epson Moverio BT-100 – I’m rather ashamed that I didn’t know about this device before heading to CES, and so didn’t get to hunt them down and try them. I love that these come with a host device running Android. I can’t, however, find mention of any sort of video input jack. It’s a shame if they have artificially limited the potential of these ¥59,980 ($772) display glasses. Also, with a frame that size, I’m genuinely surprised that they didn’t pack a camera in there. I’m looking forward to getting a chance to try these out.

Brother Airscouter – Announced back in 2008, Brother’s Airscouter device has found its way into an NEC wearable computer package intended for industrial applications.

I don’t mean to come off as a fanboy, but I like Vuzix a lot. This is primarily because they manage to get head-mounted displays and heads-up displays into the hands of customers despite the fact that this has consistently been a niche market. I have to admire that kind of dedication to pushing for the future that we were promised. I also love that they are addressing the needs of augmented reality researchers specifically. It will be interesting to see how these rumors about Google affect the companies that have been pushing this technology forward for such a long time. I’m hoping that it will help broaden and legitimize the entire market for display glasses, which have long been on the receiving end of trivializing jokes on the tech blogs and their comment threads.