The Journey Towards Hyper Reality
Look! A Pikachu!
Last week I caught the highly cinematic trailer for the new iteration of Pokemon Go (the shot of Delcatty walking with the city in the background was beautiful), without doubt the most widely known example of arming the masses with AR, and suddenly I considered actually jumping aboard the hype train. Yes, I was one of the non-conformists when it dropped sometime last year. Chasing CGI creatures through public parks, university campuses and my own bathroom wasn't really on my agenda while I was dwelling in the existential crisis of finishing up my education and entering, as they say, the real world. But now that I have less free time, I want to spend it doing exactly that. As much as it's a fun gimmick, there's an important and simple concept behind it that can unlock a lot more than we can foresee: widespread enthusiasm for an emerging technology without shoving it down people's throats.
Over the past year, my interest in and exploration of emerging realities has led me to one main conclusion: if this technology is going to disrupt the way we live our lives, then it has to start, for now at least, with the very thing we all use all day, every day, namely our phones. The biggest bragging right AR has over VR at the moment is that its starting platform is the camera already in our pockets, not a $10,000 piece of equipment.
With our phones in hand, accessibility is limited only by functionality and enthusiasm. And, taking a note from Niantic (the mad scientists behind Pokemon Go), the key to gauging interest is interactivity. You don't get to 50 million users in 19 days if what you're offering is boring. It's fun to model some furniture in your home, but it gets old quickly; I lasted about three minutes before I closed the Ikea app. Maybe that has something to do with the layout of my apartment, but either way it didn't hold my attention for long. Snapchat figured out the interactivity part by reversing the direction of the camera and making you the subject. Simple and effective. I was on a plane recently and the elderly couple sitting in front of me were using the app to put dog ears and noses on their sanguine faces while giggling away to themselves. And when I say elderly, I mean elderly.
Most apps fail because… well, most apps fail, but the ones that don't usually have three things in common off the bat: the deployment of the service is accessible and inexpensive, the product is reliable and does what it says on the tin, and the method of growth hacking and user retention is sturdy in its execution. In a nutshell, it's easy to use, it works and it drives traffic. This is obviously a broad generalization, but any app worth its salt had these building blocks in place from the get-go. Take a look at any of the apps floating on your home screen and recount the story of how you jumped aboard their hype trains. These foundations must be in place for an AR app or service, especially the second part: making sure it works. How many times have we seen companies over-complicate simple concepts and crash and burn, often with solid business models behind them, because their deployment was buggy or glitchy? Premature releases like this are common in the age of viral, but given the stat that only 16% of users will even try a failing app more than twice, it's a pretty grim premonition for founders who release an unfinished, non-functioning app, a common thread in a lot of the AR demo apps I've tried.
Ultimately the phone is the first step, the first test for widespread AR deployment. The camera as a homepage is the starting point for having any viewing screen as a launchpad. I got my first smartphone in 2009, a BlackBerry. Before that I was still perfectly content with my monochrome interface, using the old-school texting style reminiscent of payphones, making calls by scrolling through a contact list overstuffed with 'friends' I never spoke to, and playing Snake in waiting rooms. But then things changed. I had to have BBM (BlackBerry Messenger) otherwise I would fall behind in my social life, I had to have a camera on the back otherwise people would never believe what I had for breakfast, I had to have keyboard texting so I could spout messages at a faster speed… or risk falling behind in my social life. I had just started university and these things were essential. But now I use my phone, an iPhone 7, in a much different way. I use it to… well, I use it for pretty much everything. It's my access to the digital world, a world shaped extremely quickly in a very short period of time. When future humans look back on this generation, the wealth of material, footage and evidence they will have of pretty much every day since the '90s will be overwhelming. Our phones are a fifth limb: the panic on someone's face (or my own) when they pat their pocket and realize it isn't there is now an actualized and natural human emotion. I'm digressing, but the point is that deployment must be app-based for now to ensure maximum accessibility, at least until Google Glass or (insert AR glasses startup here) makes a comeback and/or arrives. Side note: some rabbit-holes to tumble down: Daqri, Vuzix, Intel, Solos, Magic Leap.
Of course, all of this is easier said than done, and perhaps the most difficult hurdle is ensuring cross-platform compatibility, generally a minefield for developers. ARCore and ARKit, or Google and Apple, are almost entirely different species designed to reach the same end goal. Currently, deploying an AR app on both Android and iOS is about as fun as sticking your hand into a beehive; it's possible, albeit with a complicated pipeline. Until deployment can detach from reliance on a single platform, reach will be substantially limited, especially in the case of iPhone users.
However, solutions are emerging to navigate this fork in the road. Amazon recently announced Sumerian, a development platform for VR/AR and 3D experiences that will be compatible with both iOS and Android, signaling intent to untangle this current headache. Vuforia, built on the foundation of Unity, enables experiences to be deployed on both platforms. Escher Reality, which explored cross-platform development, markerless technology and multi-user experiences, was recently acquired by Niantic. Wikitude, one of the biggest independent AR platforms, sidesteps the problem entirely by offering its own SDK to developers. More recently, 8th Wall, on the heels of an extremely impressive demo in which they showed AR running on an iPhone 4, has developed 6DOF support for all phones and offered up a solution spanning both ARCore and ARKit. Let's see who buys them first.
Now comes the hard part: making it useful. Does the application have enough escape velocity to break free from the gimmick stage we're surfing through? Trying out tattoos only gets us so far. Truly useful applications are few and far between, but some of the more obvious areas of exploration are already easy to identify.
Navigation & Way-finding
The obvious use case for the time being: exploiting an augmented layer as a real-time navigation tool. By using the camera much as an old conquistador would use a fire-stick, AR makes finding your next location simple, effective and entirely video-game inspired. Use cases in airports, foreign cities and museums are already seeing traction. AR dashboards and windscreens in the automotive sector are making it a real possibility to keep drivers' eyes on the road instead of on their devices, as well as improving the driving experience by seamlessly overlaying key metrics onto an undisturbed view of the surrounding environment. Neon, an intriguing early-stage AR startup, announced itself with a demo putting its own twist on way-finding: locating your friends in a densely packed concert crowd. How many times have we stood there waving our hands around like chickens, waiting for someone to grab our shoulder in success? Struggle over!
Training, Education & Enterprise
As much as VR has its place in virtual hands-on training, AR has highly valuable use cases in the realm of education and learning. When you're shown how to do something instead of just being told what to do, the learning curve is easier to ice-skate up. For the more complicated instruction manuals, AR overlays digital information onto physical objects such as power turbines or medical devices. It can also take a video-game-style tutorial stance to walk employees through complicated tasks and procedures, speeding up the entire process of on-boarding, training and even the rate of production itself. Virtual assembly of parts and machinery, as well as architectural models, enables designers and engineers to grasp the viability of a concept before pushing it past the manufacturing point of no return. On a lesser note, albeit still a fun one, using AR to teach kids about climate change or history is finding traction in classrooms. The possibilities for its educational value seem unlimited at this point.
Product Visualization / Retail
Menus. Clothes. Cars. Furniture. Tattoos. The list goes on. Also: self-explanatory.
Exploration & Discovery
On the opposite side of the navigation coin, AR offers the possibility of 'unlocking' the world around you, almost like a video-game map, showing you the ins and outs of the environment you inhabit. It can take you on guided tours, show you the top-rated destinations and, in effect, overlay your usual internet searches onto your camera feed. Interactive gaming falls somewhere in this territory, but with geo-tagging as a focal point to build a construct around, the possibilities of the tool are limited only by the developer's imagination. It's in this space that the potential killer app, if combined with social interaction, might be lurking. It's also the starting point for wearables being used in day-to-day activities, building a collaborative layer of reality on top of our existing commutes.
With the phone as the first step, the natural progression is wearable technologies (in fact, this is already underway with glasses, headsets and even watches and lamps) that will act as the portal to the hidden layers under construction. Eventually it will delve into more sci-fi territory with the advent of optical implants or, more simply, AR contact lenses. In-eye electronics will open the floodgates for the next level of immersive societies, potentially changing everything about society as we know it, from the current methodology of advertising to consumer behavior and standard social interaction. And suddenly it's not that far-fetched. The 'layers' of reality are already starting to grow, and soon enough navigating them will become less of a problem and more of a natural instinct. If a sustainable model of enhanced reality is to be attained in the near future, the disruption will occur somewhere around the intersection of MR, holographic projection and light-field mapping.
It's only through collective development, with a healthy amount of constructive competition, that the technology will progress effectively. The sharing economy of SDKs and open-source projects is a sure-fire step towards an online community dedicated to pushing the boundaries of conceptual technology. The gating item, it seems, is that the silo mentality still reigns. We're also seeing a game-show-style bragging war of self-proclaimed experts infiltrating a fresh medium that is still very much in early development. It's like claiming to be an expert astronaut without ever going to space. It's a dangerous mentality. How many times do you see someone label themselves a rainmaker, (insert buzzword here) expert or a futurist? Please explain to me what these terms mean.
Another challenge, of course, lies in the fundamental questions of society itself: the good old could-vs-should debate. Do we want to digitally populate, some would say pollute, the cities of the world with hordes of virtual advertising, pop-up banners and infotainment, effectively transforming the environment into a constantly interactive, activated version of the internet? Do we want to make our lives more convenient, some would say distracting, more streamlined and accessible, successfully navigating the space between looking at our phones and the road ahead? It's very much a matter of opinion. The saving grace, however, is inherent to the technology itself: you don't have to participate. There is no physical environmental disruption or destruction, no increase in air or noise pollution, no further crowding of urban locations and, ultimately, no extra cost in electricity. It is simply an invisible layer to be explored by those who seek it, utilized by those who understand its value, constructed by those who believe in its potential. In a world increasingly disconnected and hyper-stimulated, the natural progression is to expand that layer beyond the bubbles of our own devices and into the environments we populate.
I've always held onto the idea that one day I could occupy, and even help build, the worlds I've long considered intriguing representations of the future. Tracing it back to the first time my Dad showed me the original Blade Runner, arguably a pure act of inception on my personality, my love of sci-fi films and the mystique of the world-building within them is perhaps the reason I decided to combine my work in film with a fundamental background in engineering, hoping that one day I could figure out a way to contribute. For me, walking through cities and interacting with holographic advertisements and experiential marketing campaigns, indulging in the smart, engrossing yet minimal technology on display in the stalwarts of the genre, and ultimately engaging with a world that feels bigger than concrete and tarmac, is a long-sought dream.
A word that keeps being thrown around, appearing every day in my LinkedIn feed, is 'Metaverse'. It's a good word; it rolls off the tongue nicely, especially when you say it fast. It's ultimately how we can describe the effect that AR/VR and AI will have on the development of society as we start to build the layers of interaction, communication and commerce that widespread adoption will heavily shape. But with it comes a sort of sinister undertone, a dark metaphor for the real world. It brings to mind the worlds of dystopian cyberpunk novels, but in reality all it signals is a shared virtual space, with 'shared' being the key word.
Augmented reality is not only an emerging technology but an active one: it's here today. It's only a matter of time before it breaks free from the gimmick stage and enters the annals of usefulness and productivity. Fortunately, with the technology behind it now freely accessible through numerous SDKs and kits, people are starting to apply their imagination to its applications, be it tracking your friends in a concert crowd, exploring portals to other locations, positioning furniture in your living room or making yourself look like a cat. We can pull out our phones and enjoy the fun factor of the existing demos and hope the next generation lets us go further, but the question remains the same: how can we use it not only to enhance our reality but to disrupt our outdated habits? How can we use it to improve connections, both professional and social? Can it be what we want it to be? A good starting point would be to download some apps and play around with the open-source SDKs available now. Apply a DIY attitude to creating the future. Take advantage. Dive into the deep end. The killer app is still hiding out there somewhere.
Let’s find it.