Here’s the skinny on mobile augmented reality: the apps you’re seeing right now use only half of what augmented reality will become. When it comes to layering information on top of a mobile camera view, location data powers most of what the press and many pundits have labeled “augmented reality” apps.
Some degree of perspective is available from the compass in the new iPhone, and other phones will certainly gain this additional data source to fuel their own quasi-augmented-reality apps. But the meat and potatoes of the apps that encourage you to keep an open camera or video stream are not yet using any kind of wide-scale image recognition.
At the moment, it’s easy to get confused here and assume that because you point your phone’s camera at an object and information pops up on the display, the camera must be interpreting what it sees, right? Not the case. In fact, in some apps you can put your finger over your mobile camera and you’ll still see the same data. Try it!
Of course, it won’t look as cool, but the information will remain unchanged. And if you try this while moving in an automobile at about 30 MPH or faster, you’ll notice that accuracy drops dramatically. Don’t try this if you’re the driver; just take our word for it!
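Under the hood, these first-generation apps are doing pure geometry: given the phone’s GPS fix and compass heading, they decide which points of interest fall inside the camera’s field of view without ever reading a pixel. A minimal sketch in Python (the function names and the 60-degree field of view are illustrative assumptions, not any particular app’s code):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing (degrees, 0 = north) from the user to a point of interest."""
    lat1, lon1, lat2, lon2 = map(math.radians, (lat1, lon1, lat2, lon2))
    dlon = lon2 - lon1
    x = math.sin(dlon) * math.cos(lat2)
    y = math.cos(lat1) * math.sin(lat2) - math.sin(lat1) * math.cos(lat2) * math.cos(dlon)
    return (math.degrees(math.atan2(x, y)) + 360) % 360

def in_view(user_lat, user_lon, heading_deg, poi_lat, poi_lon, fov_deg=60):
    """True if the POI lies inside the camera's horizontal field of view.
    Note that the camera image itself is never consulted -- only GPS and compass."""
    b = bearing_deg(user_lat, user_lon, poi_lat, poi_lon)
    diff = (b - heading_deg + 180) % 360 - 180  # signed angle between heading and POI
    return abs(diff) <= fov_deg / 2
```

This is why covering the lens changes nothing: the overlay is driven entirely by position and heading, so the data stays on screen whether the camera sees a street or your fingertip.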
This latency, or bottleneck, has nothing to do with the lack of image recognition or the use of location data; it stems from the multi-functional nature of the mobile phone and from the wireless network itself. In time this will become less noticeable as data capacities grow, although we suspect wireless data transfer speeds will vary widely and that “burst” technologies will become premium services offered by carriers around the world.
As you might imagine, the personal navigation device market and built-in auto-nav systems are not likely to be replaced by mobile augmented reality apps anytime soon… On the contrary, they are more likely to gain additional sensory input mechanisms via cameras connected as accessories streaming data to back-end, cloud-based image recognition systems. It might sound complicated, but it could actually be pretty simple.
You probably won’t see anything like this for the upcoming holiday shopping season, but maybe the year after… The current batch of data being served up by augmented reality apps is definitely relevant to the user’s immediate surroundings, but it has little relevance in any sort of dynamic environment or brand-centric promotional setting.
Now, thanks to the iPhone and others allowing easy access to latitude/longitude, the barrier to entry for creating location-aware mobile applications has been low. The lat/lng, combined with a plethora of third-party data APIs, has been a real source of inspiration for mobile developers. Likewise, with access to the camera, creative developers have been using the lens view — not just on iPhone, but on other handset platforms like Android, BlackBerry, Nokia, etc. — to provide a sense of high-tech, Web-Meets-Physical augmented reality experience. Expect to see a lot more of this soon. Yet far fewer developers have been tapping into the power of pure image recognition. There’s a good reason you’re not seeing the image recognition component yet: it’s not readily available to the masses, and it’s hard to independently develop systems robust enough to work in the real world.
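That low barrier to entry boils down to this: take the phone’s lat/lng and hand it to a third-party data API. A hedged sketch of building such a request (the endpoint and parameter names here are hypothetical, not any real service):

```python
from urllib.parse import urlencode

def nearby_places_url(lat, lng, radius_m=250, category=None):
    """Build a query URL for a hypothetical third-party points-of-interest API.
    The endpoint, host, and parameter names are illustrative only."""
    params = {"lat": f"{lat:.6f}", "lng": f"{lng:.6f}", "radius": radius_m}
    if category:
        params["category"] = category  # e.g. "restaurant", "atm"
    return "https://api.example.com/v1/places?" + urlencode(params)
```

Pair the JSON response from a call like this with the compass heading, and you have the entire data pipeline behind most current “augmented reality” overlays.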
2010 will bring an onslaught of heretofore unknown application developers, brand marketers (e.g., emerging advertising agencies and media integrators), new analytics tools, and vertical mobile search engines (special-purpose apps that span more than one mobile platform and likely leverage unified mobile browser standards) to market in a powerful way. Pongr’s own research and development in and around the mobile marketing industry has shown a tremendous ecosystem of media companies, new and old, helping to pave the way for some remarkable new mobile brand-to-consumer solutions. All will be striving to use a few core pieces of augmented reality technology: image recognition, and GPS or other non-GPS location information like triangulation, WiFi, and even buoy-tethered mesh networks. All of the above will be accelerated or hindered by the backdrop of the global economy, but will grow significantly in actual dollars spent on mobile-originated image recognition and location awareness capabilities. We base our estimates on our own internally collected data points as well as external indicators showing significant demand for core mobile augmented reality technologies.
The usual tech titans, global marketing agencies, brands themselves, and wireless infrastructure parties will be dipping in and out of the augmented reality waters too, but as is always the case with cutting-edge technology, expect to see the most disruptive advances in platform-oriented augmented reality from startups.
There are also going to be opportunities for a few older companies in the GPS space to reinvent themselves, or at least to bring a few new mobile products to market, moving further down the value chain closer to the end consumer. This could actually breathe new life into some of the behind-the-scenes types of companies looking to build livelier consumer brands out of their existing base of users, users that have been faithful and loyal “customers” for years. You can put two and two together here and say that existing brands that quickly figure out an executable strategy around augmented reality will have an advantage as the Internet shifts from being personal-computer-centric to being mobile and always-on, always-available.
This morning I was listening to Tim O’Reilly on a FutureTense podcast discussing his new concept “Web Squared” and how image recognition via mobile phone cameras will be a major factor in adding additional information about the world to the Web. More specifically, mobile cameras are the sensors that augmented reality requires, with or without GPS, to truly unlock the potential of integrating social networks, real stuff in the physical world, and the vastness of the Web as it already exists today.
Like Facebook, MySpace, Wikipedia, and even Google Maps, the power of augmented reality is not in technology for the sake of technology, but in the data it provides at a moment’s notice. Unlocking location awareness capabilities on the iPhone helped Apple go from 0 to 100 MPH in the mobile phone industry, and augmented reality becomes AUGMENTED REALITY when, for the masses, it is unlocked and fully capable of leveraging the mobile sensors we have all practically grown as wireless data appendages. Weird, but true. Some people even have multiple mobile devices they use regularly. Think Edward Mobile-hands.
While the words “augmented reality” are still a little squishy today, meaning many things to different people, the sticky part of “augmented” will occur when mobile users around the world can share and submit information they collect from their mobile camera sensors into a fully image-searchable database. Google, Microsoft, Yahoo, and others have automated web crawlers, but the next great search engines will leverage mobile sensors, as well as users for whom information is a two-way street of giving and getting, be it in a purely social network setting, a Wikipedia-esque capacity, or a brand-oriented conversation amongst consumers. Some of the forthcoming information silos will emerge as better than others, and certain types will lend themselves more readily to business value, but the commonality will involve mobile phone cameras, faster wireless networks, image search, and location awareness.
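At its core, such an image-searchable database is an index mapping compact signatures of camera-submitted photos to things in the world. Here’s a toy sketch using an average-hash signature and Hamming-distance lookup, vastly simpler than a production recognition system (all names here are illustrative assumptions):

```python
def average_hash(pixels):
    """64-bit perceptual hash of an 8x8 grayscale thumbnail (list of 64 ints, 0-255)."""
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)  # 1 bit per pixel: above/below mean
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

class ImageIndex:
    """Toy image-searchable database: store labeled hashes, look up the nearest match."""
    def __init__(self):
        self.entries = []  # list of (hash, label)

    def submit(self, pixels, label):
        """A mobile user submits a photo of something, tagged with what it is."""
        self.entries.append((average_hash(pixels), label))

    def lookup(self, pixels, max_distance=10):
        """Another user's photo comes in; return the closest label, if close enough."""
        h = average_hash(pixels)
        best = min(self.entries, key=lambda e: hamming(e[0], h), default=None)
        if best and hamming(best[0], h) <= max_distance:
            return best[1]
        return None
```

The two-way street is visible in the API itself: `submit` is users giving, `lookup` is users getting. Real systems would use far more robust features than an average hash, but the give-and-get index structure is the same.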
To discuss in more detail or share your own thoughts on what’s going on with augmented reality, find me on Twitter at @Jamie_Thompson and @Pongr.