Press "Enter" to skip to content

AR Consortium, ARML Spec, Layar 3D

Lots of big AR news these days. Where to start?

Well, there are two big ones today so far:

Robert Rice and Mobilizy are proposing an ARML Specification for mobile AR browsers to the newly formed AR Consortium. The Consortium, with its distinguished list of members, is big news in and of itself. I really, truly hope that Layar chooses to get on board with this. Layar is the other widely recognized player in the mobile AR browser game so far, and I fear they may have the power to make or break this standard. With the endorsement of Rice (and so, presumably, Neogence) and adoption by both Layar and Mobilizy (maker of Wikitude), we could have a real, functional standard. If, on the other hand, Layar fails to adopt the spec, it could go the way of VRML, unless new competitive players arrive quickly and with support.

And today, Layar announced the upcoming addition of support for dynamic 3D models embedded in their content layers.

If the ARML Spec is made versatile enough to support Layar’s 3D strategy, we could see a real revolution in AR standardization, interoperability, etc. This all goes back to Tish Shute’s fantastic interview with Robert Rice on UGOTrade back in January. Interoperability, standardization, and shared content are the keys here.

It’ll also be interesting to see if Total Immersion and Int13’s upcoming mobile framework will support ARML. Depending on what they produce, that could establish the standard even without adoption by Layar.

Also, as Sergey Ten was quick to point out to me on Twitter, “ARML should include geometry/models and points descriptors/patches so that locations could be recognized by camera.” Given Layar’s 3D announcement, this would be key to their ability to get on board. (Come to think of it, Layar’s announcement may have been prompted by the prospect of Total Immersion and Int13’s entry into the mobile AR Browser fray and what they would bring to it… but that’s tangential and speculative, so I’ll let that notion sit.)
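To make that suggestion concrete, here's a purely hypothetical sketch (Python, using ElementTree) of what a point of interest bundling both a 3D model reference and a downloadable descriptor patch might look like. The element names, attributes, and URLs are my own invention for illustration; they are not the actual ARML schema.

```python
# Hypothetical sketch only: illustrative element names, not the real ARML schema.
# Shows how a POI might carry a 3D model reference (geometry) plus a patch of
# pre-computed point descriptors so a camera could recognize the location.
import xml.etree.ElementTree as ET

poi = ET.Element("poi", id="example-cafe")
ET.SubElement(poi, "location", lat="52.3730", lon="4.8924", alt="0")

# Geometry: a 3D model the browser can fetch and place in the scene.
ET.SubElement(poi, "model", href="http://example.com/models/cafe.dae", scale="1.0")

# Vision data: a downloadable set of point descriptors gathered from imagery
# of the location, used for camera-based recognition and registration.
ET.SubElement(poi, "featurePatch",
              href="http://example.com/features/cafe.bin",
              descriptor="SURF")

print(ET.tostring(poi, encoding="unicode"))
```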

Also, I hear that Mr. Rice's Neogence has licensed a certain very impressive markerless tracking algorithm. If that is in fact the case, then I'm sure he wouldn't be opposed to including optical data-point sets that could be downloaded based on proximity and used to register against views of the real world. I myself have been toying with (conceptually only, mind you) the idea of using Google Earth 3D model textures and StreetView imagery as tiles, generated and retrieved based on GPS proximity and heading, to produce more accurate registration. The plausibility of this approach was only reinforced in my head after watching this sweet piece of work by Lee Felaraca today. (See addendum at bottom of post.)
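For the curious, here's a rough, conceptual sketch of what that tile retrieval might look like: given a GPS fix and a compass heading, keep only the pre-generated tiles that are nearby and roughly in front of the camera. The tile records, thresholds, and coordinates are all made up for illustration.

```python
# Conceptual sketch only: select pre-generated texture tiles near the user's
# GPS fix and roughly within the camera's field of view. Tile metadata
# (lat, lon) is assumed to have been prepared offline from sources such as
# StreetView captures or Google Earth model textures.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing from point 1 to point 2, in degrees from north."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return math.degrees(math.atan2(y, x)) % 360

def select_tiles(tiles, lat, lon, heading, radius_m=75.0, fov_deg=60.0):
    """Keep tiles within radius_m of the fix and inside the camera's view cone."""
    hits = []
    for t in tiles:
        if haversine_m(lat, lon, t["lat"], t["lon"]) > radius_m:
            continue
        diff = (bearing_deg(lat, lon, t["lat"], t["lon"]) - heading + 180) % 360 - 180
        if abs(diff) <= fov_deg / 2:
            hits.append(t)
    return hits

# Example with two hypothetical tiles; only the one ahead of the camera is kept.
tiles = [
    {"id": "facade-001", "lat": 52.3731, "lon": 4.8926},
    {"id": "facade-002", "lat": 52.3720, "lon": 4.8910},
]
print(select_tiles(tiles, lat=52.3730, lon=4.8924, heading=45.0))
```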

Keep the augmentation coming, folks! I can’t wait to see you all at ISMAR!

I’ll leave you with this, in case you haven’t seen it yet:

[youtube=http://youtube.com/w/?v=Ud8wbrRKPIU]

Addendum:

The reason, incidentally, that I was encouraged by Mr. Felaraca’s work is that a similar technique might be used for generating trackable textures from camera input. On revisiting it, I’m not exactly sure how that would aid the process of pinpoint registration. My thought is to generate the tiles from previously gathered data and match them against the camera input, as with previously implemented tracking methods. Regardless, the Texture Extraction Experiment is awesome, and it would provide an excellent tool both for gathering the data used for said tile generation and for on-the-fly creation of virtual objects for use in augmented environments.
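As a rough illustration of the matching step only (not Mr. Felaraca’s technique, and not any particular tracker), here's what registering a previously gathered tile against a camera frame looks like with off-the-shelf feature matching. OpenCV's ORB is used purely as a stand-in descriptor, and the file names are placeholders.

```python
# Sketch only: match a pre-gathered texture tile against a camera frame using
# generic feature matching, then estimate the homography that registers the
# tile into the camera view. Assumes OpenCV is installed and that tile.jpg /
# frame.jpg are placeholder images.
import cv2
import numpy as np

tile = cv2.imread("tile.jpg", cv2.IMREAD_GRAYSCALE)    # previously gathered tile
frame = cv2.imread("frame.jpg", cv2.IMREAD_GRAYSCALE)  # current camera frame

# Detect keypoints and compute descriptors in both images.
orb = cv2.ORB_create(nfeatures=1000)
kp_t, des_t = orb.detectAndCompute(tile, None)
kp_f, des_f = orb.detectAndCompute(frame, None)

# Match descriptors and keep the strongest correspondences.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_t, des_f), key=lambda m: m.distance)[:50]

# Estimate the homography mapping the tile into the camera view; this is the
# registration step that would anchor virtual content to the real scene.
src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
print("tile registered" if H is not None else "no registration")
```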