Augmented and virtual realities have made enormous strides since their creation. Virtual reality (VR) got its start in 1968, when Ivan Sutherland unveiled the first head-mounted display of the kind we’ve since become so familiar with. Building on his 1965 essay “The Ultimate Display,” this rudimentary device (nicknamed the Sword of Damocles for the ceiling-mounted rig that held it aloft) plugged into a computer and allowed users to bask in a virtual world.
Virtual reality has expanded into different areas since its inception. MIT researchers created an interactive map of Aspen, Colorado in the 1970s. NASA explored virtual means of human-computer interactions merely a decade later. The first virtual-reality 'gear' gained popularity in the consumer market during the 1990s.
Though the technology was marketed heavily and hyped accordingly, public interest ultimately dwindled as it failed to deliver on its promises. Today, that excitement is building once more as VR advances, particularly in the entertainment space.
Augmented reality (AR) officially gained traction as a functional technology in 1992, when the first system sought to boost human productivity via visual, digital overlays in the workplace, complete with sensory feedback.
Developed at the United States Air Force’s (USAF) Armstrong Research Lab, Virtual Fixtures paved the way for continual advancements. Since then, AR games, design tools, glasses, and headsets have been developed by the likes of Adobe, Google, Microsoft, and Apple, and companies have marketed these technologies to engineers, developers, and consumers alike.
A virtual reality user sampling the EON Icube Mobile immersive environment. Image courtesy of Wikimedia Commons.
The Evolving Technology Behind AR and VR
Gaming is a growing testing ground for AR and VR’s viability. Consider the lightning rod that’s Pokémon GO—a game predicated on real-world exploration via a virtual environment. The divide between the virtual and the physical isn’t as great as we might think. Map overlays, landmarks, and textures all give us a sense of where we are in the physical world, and virtual objects are assigned meaning in tandem with their real-world locations. GPS, Galileo, and GLONASS (among others) facilitate immersion by tracking player position.
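As a rough illustration of that position tracking, a game along these lines might compute the great-circle distance between a player’s latest GPS fix and a nearby point of interest, triggering an interaction once the player is close enough. The coordinates and trigger radius below are made up for the sketch, not taken from any real game:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    R = 6_371_000  # mean Earth radius, metres
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * R * asin(sqrt(a))

def near_landmark(player, landmark, radius_m=40.0):
    """True when the player is close enough to trigger an in-game interaction."""
    return haversine_m(*player, *landmark) <= radius_m

# A player standing roughly 25 metres from a hypothetical landmark
player = (40.74845, -73.98558)
landmark = (40.74867, -73.98563)
in_range = near_landmark(player, landmark)
```

Real titles layer map matching and sensor smoothing on top, but the core proximity check is little more than this distance test.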
Such software is powered by hardware, and AR features largely depend on cameras—which are almost universally included in today’s mobile devices. AR is often facilitated through multiple lenses, each gathering crucial information: one lens may capture light and the traditional 2D image we’re used to, while another is dedicated to depth mapping. Together, these make it possible to project 3D renderings within the context of the real world. As cameras grow more sophisticated, we may see these features incorporated into a single lens.
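The depth-mapping idea can be sketched with the standard pinhole camera model: given a pixel and its measured depth, the camera’s intrinsics recover a 3D point in front of the lens, which is what lets a virtual object be anchored in real space. The focal lengths and principal point below are illustrative values, not those of any specific device:

```python
def backproject(u, v, depth_m, fx, fy, cx, cy):
    """Convert a pixel (u, v) plus its measured depth into a 3D camera-space point.

    (fx, fy) are focal lengths in pixels and (cx, cy) is the principal point,
    per the standard pinhole camera model. All values here are illustrative.
    """
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# A pixel at the image centre, measured 2 m away, lies on the optical axis
point = backproject(640, 360, 2.0, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
```

Doing this for every pixel of a depth map yields the point cloud against which 3D renderings are placed and occluded.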
Accelerometers and gyroscopes have played key roles in unlocking AR based on tilt, movement, and velocity. GPUs make those objects come alive before us by interpreting complex mathematical parameters and creating complex renderings based on those instructions. Graphical processing is also the driving force behind VR’s future success.
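One common way those accelerometer and gyroscope readings are fused into a stable tilt estimate is a complementary filter: trust the gyro over short timescales (it is smooth but drifts) and the accelerometer, which senses gravity, over long ones. This is a minimal sketch with illustrative values, not any vendor’s actual sensor-fusion pipeline:

```python
from math import atan2, degrees

def accel_tilt_deg(ax, ay, az):
    """Tilt about one axis, inferred from where gravity appears in the accelerometer reading."""
    return degrees(atan2(ax, az))

def complementary_step(angle_deg, gyro_rate_dps, accel_angle_deg, dt, alpha=0.98):
    """One filter step: integrate the gyro rate, then nudge toward the accelerometer's angle."""
    return alpha * (angle_deg + gyro_rate_dps * dt) + (1 - alpha) * accel_angle_deg

# Device held still and level: the gyro reports no rotation and the
# accelerometer sees gravity straight down, so the estimate stays at 0 degrees.
angle = 0.0
for _ in range(100):
    angle = complementary_step(angle,
                               gyro_rate_dps=0.0,
                               accel_angle_deg=accel_tilt_deg(0.0, 0.0, 9.81),
                               dt=0.01)
```

The same structure, extended to three axes and fed to the GPU each frame, is what keeps an overlaid object pinned in place as the phone tilts.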
Headsets are still the main conduit through which engineers deliver VR experiences, though. Acting as immersive blinders, in a sense, these sets are fairly cumbersome. As the scale of silicon and key electrical components continues to dwindle, we can expect hardware to do the same in tandem.
As companies push the envelope with video resolutions and refresh rates, these GPUs must become more capable. Higher-resolution images are storage-hungry, increasing the need for greater onboard memory and bandwidth to compensate.
Longstanding players in the graphics industry like Nvidia realise this and are designing GPUs to handle heavier processes. Chips are becoming increasingly smaller as transistor densities increase. While it seemingly makes sense to add shader cores to powerful GPUs, the physics of chip fabrication may place greater emphasis on making existing cores more efficient.
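That bandwidth pressure is easy to quantify with a back-of-the-envelope calculation of the uncompressed pixel throughput a headset implies. The panel figures below are hypothetical, chosen only to show the arithmetic:

```python
def raw_throughput_gbps(width, height, refresh_hz, bits_per_pixel=24, eyes=2):
    """Uncompressed video bandwidth implied by a headset's panels, in gigabits per second."""
    return width * height * refresh_hz * bits_per_pixel * eyes / 1e9

# A hypothetical 2160 x 2160-per-eye headset refreshing at 90 Hz
gbps = raw_throughput_gbps(2160, 2160, 90)
```

At roughly 20 Gb/s of raw pixel data for that hypothetical panel, it’s clear why display links, compression, and GPU memory bandwidth all have to keep climbing together.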
Furthermore, these VR solutions come in two forms: standalone (untethered) and tethered. Standalone models make use of their own internal GPU, while tethered units connect to a computer (for example, the Oculus Rift S) and depend on that PC’s internal hardware as the engine. We’re accustomed to connecting peripherals to our computers, but an evolving class of VR hardware will push the boundaries of both CPU and GPU requirements moving forward.
An operator using a virtual fixture. Image courtesy of Wikimedia Commons.
Because users remain subject to the physical limitations of their surroundings, the play area must be kept free of objects for safety purposes. However, most systems allow users to play from a relatively stationary position. Consequently, VR headsets also require additional hardware to function—typically handheld controllers that facilitate movement and other actions.
VR transports us to new worlds and helps us consider what could be, as opposed to what actually is; with baked-in interactivity, these conceptual realities will help us understand how to incorporate new advancements within our own.
The true power of AR and VR rests in their future ability to make us more productive. How will these technologies help us automate our world? How can we streamline processes, think outside the box, and make things more accessible? The longevity of these technologies will predominantly be determined by their business and commercial applications.
Whereas VR aims to help us envision, AR owes much of its potential to practicality. This has immense promise in the workplace, opines Peter Diamandis in a SingularityHub article from late last year. How we view and interact with the spaces around us is everything. Existence within a defined space, especially in a working environment, depends upon efficiency and organisation.
As Diamandis points out, imagine the positive impacts AR could have during medical procedures and surgeries. What if surgeons could better visualise what they’re operating on? Imagine AR-enabled surgical glasses, which could scan and project labels according to the anatomical layout of a bodily region such as the chest cavity. During high-risk surgeries, even minute missteps can prove fatal. If doctors can better differentiate between key organs, vessels, and more, survival rates will increase. It would also save time, reducing fatigue while allowing for additional patient care during shifts. That potential is huge on its own.
3D4Medical and EchoPixel are even pioneering software-based medical training applications. Tablet cameras capture 2D scans and bodily structures, then create a live AR patient based on the gathered data. This is much cheaper and more efficient than traditional medical training.
Let’s say, for example, that we are presented with an abundance of data but have no convenient way to organise it: AR could allow companies to present this information visually and clearly. This can be achieved via virtual labelling, contextual actions, and location-specific interactions. Diamandis mentions AR-powered check-ins for areas with restricted access, or spatially arranged information that makes consumption easier.
An augmented reality app used to monitor robotics function in real time. Image courtesy of Bigstock.
Imagine walking into a room and being presented with the pertinent data based on your profession, clearance, or habits. This extends to any environment, not just the office. For real estate agents, imagine toting your iPad along the streets of New York City, pointing the screen at an empty lot and projecting a potential development there instantly.
What about military applications? War takes place in the real world, so we can’t strip that perception away entirely, for a multitude of reasons. However, advanced heads-up displays and goggles can grant soldiers access to crucial information on the battlefield. We could very well see the day when soldiers can view orders, targets, navigation data, and more through their eyewear. As AR continues to mature, these projections will become more sophisticated.
Marketing and online shopping stand to benefit. It’s not too difficult to envision fully digital billboards and floating sales advertisements above clothing racks at your local department store. Speaking of which, AR can power active body scanning, making it easier to try on clothing anywhere. By stepping into the camera’s viewfinder, shoppers can have an app project articles of clothing onto their person.
Technologies like Web AR will allow consumers to interact with virtual objects like products while browsing. Though users currently access most AR features in-app, Web AR would unlock that functionality within mobile browsers. This would eliminate the need to download additional software, environments, and digital models onto the device itself. Instead, these renderings would be pulled from cloud-based servers.
Diamandis also expounds upon the merits of virtual reality. Businesses like eXp Realty, with vast numbers of employees and wide geographical reach, will be able to meet using integrated VR.
Workers can patch in remotely with either a head-mounted display or a browser, congregating in the same virtual environment. This environment has a plethora of amenities, but can be customised as needed for the discussion at hand. VR technology makes it easy to explore virtual concepts together, making collaboration a breeze. It also plays an important role in slashing overhead.
As businesses move away from physical locations, VR makes the virtual workplace possible—and makes scaling simpler, without unneeded complications.
What if your company is relocating or renovating? VR programs allow planners to visualise layouts, objects, and environments with relative ease. It also allows for easy interior-exterior design on the fly. Textures, materials, furniture, and more can be swapped in short order. The result is diminished expense and lead time.
Cities like New York and San Francisco are grappling with high rent and low supply due to heavy influxes of workers. We’ve discussed the power VR has to bring on-site experiences to eXp’s remote teams. What if engineers could take similar technology and bring it to other corporate campuses? Imagine virtual projections of architectural layouts that bring blueprints to life. How about creating and sharing live industrial designs from your home office? If companies work in a distributed fashion, employees could theoretically live anywhere.
This real estate fluidity, Diamandis argues, could dramatically influence home prices, provided the trend becomes widespread. That would put less pressure on in-demand rental markets, hopefully mitigating price hikes.
An Apple park visitor holding an iPad with an augmented reality app installed. Image courtesy of Unsplash.
Virtual reality has immense cost-cutting potential, though VR headsets and hardware are quite expensive. A VR package can run a consumer anywhere from $129 to $600, and it is unclear whether adoption of these headsets at scale would reduce entry costs for businesses. Depending on a company’s size, this may or may not prove worthwhile.
However, VR has taught us that not all crucial decisions should be made within the confines of our physical realm. It’s quite tantalising to think our worlds will be based on those conceived within a program. The collaborative angle is quite attractive as well.
Augmented reality positions itself more practically overall. One might argue that AR has more immediate reach across a multitude of industries. Healthcare workers, for example, will always be in demand, and they stand to benefit greatly from visualised information.
Sure, development and real estate are VR-friendly, but AR makes development just as easy. It is not only about how we design buildings, but how we arrange and scale them.
Since we are moving at a breakneck pace towards online shopping, big players like Amazon and Walmart can implement these features to make shopping easier. Consumers will have options both in-app and in-browser. That empowerment is important, not just for businesses but for everyone else.
For right now, we’ll give it to AR on the basis of flexibility and fewer barriers to entry—our mobile devices are already AR-capable. Either way, we’re thrilled to see AR and VR development take off as hardware and software grow more capable. After all, both technologies are probably still 5 to 10 years away from maturity.