360° Capture Enables More Than Just VR Video — Here Are 7 Ways It Transforms Content Creation for 2D Video Today

Bilawal Sidhu
Virtual Reality Pop
11 min read · May 21, 2017


The Future of Vlogging?? — Sail Video Systems 3rd Person Harness with a Z-CAM S1 360 Camera. Consider this a telling prototype for the very strange future ahead.

There are a lot of mixed feelings around 360° video in the marketplace today. Some chalk it up as a gimmick, others call it a radical departure from directed, framed media, and some don’t even consider it to be virtual reality at all, let alone a subset of it…

Regardless of where you stand, by the end of this piece you’ll see how 360° and spherical video capture not only creates a new form of media (VR video), but also radically impacts existing forms of 2D video we all know and love, giving you content creation superpowers we couldn’t possibly have conceived in the past.

#1: Spherical Video Enables “Overcapture”

Capture everything on-site, frame it live or later in post — the future of vlogging/journalism in a nutshell.

A single 360° camera with a minimal footprint can replace a multi-camera shoot with ease. You shoot everything on-site and frame it in post-production, creating your “director’s cut,” if you will. No need to make snap decisions about what to point your camera at… you have everything recorded. As 360 cameras increase in resolution and dynamic range, this is becoming a very pragmatic option.
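To make the idea concrete, here’s a minimal sketch of a post-production “reframe” in Python. It naively crops the lat-long (equirectangular) image rather than doing a true perspective reprojection, and the function name and numbers are hypothetical, not any vendor’s actual API:

```python
import numpy as np

def reframe(equirect, yaw_deg, h_fov_deg=90.0, aspect=16 / 9):
    """Pull a flat crop out of an equirectangular frame at a chosen yaw.

    Naive approximation: crops the lat-long image directly instead of
    reprojecting to a true perspective view, which is fine for a sketch.
    """
    h, w = equirect.shape[:2]
    crop_w = int(w * h_fov_deg / 360.0)      # pixels spanned by the horizontal FOV
    crop_h = int(crop_w / aspect)
    cx = int((yaw_deg % 360.0) / 360.0 * w)  # centre column for this yaw
    cols = np.arange(cx - crop_w // 2, cx + crop_w // 2) % w  # wrap at the seam
    top = h // 2 - crop_h // 2
    return equirect[top:top + crop_h][:, cols]

# e.g. a 3840x1920 equirect frame, reframed toward yaw = 270 degrees
frame = np.zeros((1920, 3840, 3), dtype=np.uint8)
view = reframe(frame, yaw_deg=270)
print(view.shape)  # (540, 960, 3): a 90-degree, 16:9 window
```

The wrap-around modulo is what lets your “director’s cut” pan straight across the stitch seam without a hard edge.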

According to GoPro Product Manager Jon Thorn, news organizations are already using the company’s Omni camera system for this very purpose: overcapture. In fact, it’s a key feature advertised for GoPro’s upcoming 2-lens 5.2K Fusion 360 camera system.

Just think about it: say you’re a journalist out in a remote village in India or Vietnam, interviewing a local couple. If you show up with a massive RED Dragon or Canon C300 and a slew of crew and equipment, you’re probably not going to get the most natural performance out of them. It’s intimidating, to say the least.

Instead, if you plop down this innocuous-looking ball of cameras, it’ll inspire curiosity without getting in the way of an organic interaction or performance. Later, in post, you can do all the cutting and framing you need. You can even bring a pair or trio of 360° cameras, or tag-team with traditional cameras, for increased flexibility and coverage.

#2: Spherical Capture Will Transform 2D Live-Streaming

Think Mevo Cam on steroids + narrow AI

Building on the previous example, let’s apply spherical video to live-streaming. Since we have the entire “sphere” captured, all we need to do is run rudimentary image/face recognition and Google Home/Amazon Alexa-style “acoustic beamforming” to identify who is talking, letting us crop in and cut dynamically in a live-streamed interview setting, for example. No need for 4 different cameras or someone constantly monitoring the switching board.

Just as the Mevo Cam captures a wider-than-normal field of view, eliminating the need for a 2-camera setup, with 360 video we have even more flexibility, since the entire sphere is at our disposal.

The implications are massive. An entire live-streaming and switching team can be replaced by a couple of 360 cameras and some smart software. The fact that we needn’t deliver the entire 360 image at once also makes the compute-intensive process of video stitching a lot easier, since we only stitch the video feeds needed at any given time for the virtual camera view. The same applies to streaming… at the end of the day, we’ll be piping out a normal 1080p feed, not a massive 4K-6K 360 lat-long file. For the initiated, think of it like viewport-dependent streaming in reverse. Nimbler teams will be able to create more compelling live-streamed content thanks to the democratization of spherical capture and narrow artificial intelligence.
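As a sketch of the auto-switching logic (not any shipping product’s algorithm; `pick_active_yaw`, the dB figures and the hold threshold are all hypothetical), the “AI director” boils down to something like:

```python
def pick_active_yaw(face_yaws, beam_energy, current_yaw, hold_db=6.0):
    """Choose which direction the virtual camera should point.

    face_yaws:   yaw angles (degrees) where faces were detected
    beam_energy: dict mapping each face yaw -> audio energy (dB) from a
                 hypothetical acoustic beamformer steered at that yaw
    current_yaw: yaw we're currently framing; we only cut away when another
                 speaker is clearly louder, to avoid rapid back-and-forth
    """
    if not face_yaws:
        return current_yaw
    loudest = max(face_yaws, key=lambda y: beam_energy.get(y, float("-inf")))
    gain = beam_energy.get(loudest, 0.0) - beam_energy.get(current_yaw, 0.0)
    return loudest if gain > hold_db else current_yaw

# Two people at 0 and 180 degrees; the one at 180 is clearly louder
yaw = pick_active_yaw([0, 180], {0: -30.0, 180: -12.0}, current_yaw=0)
print(yaw)  # 180
```

The hold threshold is the software equivalent of a human switcher’s judgment: don’t cut on every cough.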

#3: Spherical Video Enables Easy Vertical & Square Video Production

Capture video for responsive delivery to vertical, square and widescreen video

Currently, vertical video production is done in one of two ways: shoot in 4K and crop down, or put your camera in portrait orientation and shoot in 9:16 directly. It’s a different way of thinking, and it impacts everything from shot framing and editorial to titling and overlays.

Enter 360° video. Now we’ve got everything captured, allowing us to go in and crop to just what we need. 360° video is a phenomenal medium: capture in all directions, then pull out a beautiful vertically oriented video. Plus, in 360° video, you can do some pretty unique things… like change the field of view, subtly, to get that actor in frame, or to ridiculous extremes, creating the hottest trend on social media right now… tiny planet content (which we’ll cover shortly).

Here’s an example of delivering a single scene 360 video as a tiny planet edit — converted into a multi-cam edit optimized for both vertical and horizontal video orientations.

Shooting spherical video for social media delivery lets you responsively tailor your content to each platform, whether that’s square video on Facebook & Instagram, 16:9 widescreen on YouTube, or 9:16 vertical on Snapchat & Instagram Stories. And you can do it all without pain or loss of editorial control, off a single timeline.
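Here’s a hedged sketch of what “one timeline, three deliverables” means in practice: given one master framing and a subject position, compute the crop rectangle for each target aspect ratio. The function and numbers are illustrative, not any editor’s actual API:

```python
def crop_rect(frame_w, frame_h, subject_x, aspect):
    """Largest crop of the given aspect ratio, centred on the subject.

    aspect is width/height, e.g. 16/9, 1.0, or 9/16.
    Returns (x, y, w, h); the crop is clamped to the frame edges.
    """
    if aspect >= frame_w / frame_h:
        w, h = frame_w, int(frame_w / aspect)   # width-limited (wide crops)
    else:
        h, w = frame_h, int(frame_h * aspect)   # height-limited (tall crops)
    x = min(max(subject_x - w // 2, 0), frame_w - w)  # follow the subject
    y = (frame_h - h) // 2
    return x, y, w, h

# One 1920x1080 master framing, delivered three ways:
for name, aspect in [("YouTube 16:9", 16 / 9), ("feed 1:1", 1.0), ("Stories 9:16", 9 / 16)]:
    print(name, crop_rect(1920, 1080, subject_x=1400, aspect=aspect))
```

The same subject-tracking metadata drives every deliverable, which is exactly why it can all live on one timeline.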

#4: Spherical Capture Enables Effects Like Tiny Planet & Fractalization

Delivering 360 Content as a Unique Projection Type

Cupid’s Arrow in San Francisco

Delivering 360 content in unique projection types makes for some very interesting effects. Imagine if you could just keep increasing the field of view of a camera lens, way beyond what optics and physics allow in reality. Well, with 360 video you can, giving you this phenomenal tiny planet effect that’s taking the interwebz by storm. And it’s an effect that would be extremely tedious or impossible to achieve with flat 2D capture.

Left: a flyover of San Francisco I made… trippy, huh? Right: tiny planet videos make for great promo content, like this one I made for the Banda Bahadur VR Experience.

The icing on the cake is that to create this sort of content, you don’t really need a super-high-quality 360 camera rig… a $300 Ricoh Theta or Samsung Gear 360 is more than adequate to create great collateral for social networks. I have no doubt we’ll continue to see creative uses of this technique. It’s already made a cameo in Kendrick Lamar’s recent music video, Humble.

Left: an experiment of mine to fractalize 360 video content. Right: a tiny planet bike ride in Kendrick Lamar’s Humble music video.
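For the curious, the tiny planet look is just a stereographic projection of the captured sphere. A minimal, hypothetical lookup from an output pixel back to the source equirectangular frame might look like this (real tools build a full remap table from exactly this math):

```python
import math

def tiny_planet_lookup(u, v, out_size, zoom=1.0):
    """Map an output 'tiny planet' pixel (u, v) back to the source
    equirectangular frame, as (lon, lat) in degrees.

    Stereographic projection from the sphere's nadir: the ground wraps
    into a little planet, the sky becomes the surrounding ring.
    """
    # centre the pixel grid and normalise to [-1, 1]
    x = (u - out_size / 2) / (out_size / 2) * zoom
    y = (v - out_size / 2) / (out_size / 2) * zoom
    r = math.hypot(x, y)
    lon = math.degrees(math.atan2(y, x))           # angle around the planet
    lat = 90.0 - 2.0 * math.degrees(math.atan(r))  # nadir at centre, horizon at r=1
    return lon, lat

# the centre of the output image samples straight down at the nadir
print(tiny_planet_lookup(512, 512, out_size=1024))  # (0.0, 90.0)
```

Cranking `zoom` past 1.0 pulls more of the sky into frame, which is where the “ridiculous extremes” mentioned above come from.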

#5: Spherical Capture Keeps You in the Moment — First-Person Memory Capture

You don’t worry about vertical or horizontal. Front or back. You get everything for free.

Soon spherical capture will be everywhere. Think Snapchat Spectacles… yes, they’re not 360x180 yet. Spectacles currently sport a cool 110-degree lens and shoot in a circular format. The benefit? No longer do you need to wonder whether to film something in portrait or landscape — you get both for free thanks to the wide, circular lens.

My nimble mixed reality setup seen through Snapchat Spectacles

Call me a soothsayer, but this is genius: Snapchat’s baby step toward full 360 capture. I have absolutely no doubt that we’ll be donning wearable 360° cameras in the very near future. And I’m so glad we will. No longer will concerts and critical events be full of people holding up their phones, experiencing reality through a little slab of glass and metal; instead, they’ll be present, knowing full well they’ll have exactly the same experience available to go back to.

Snapchat Spectacles users describe watching this footage as almost like reliving a memory, since your hands are free and visible. 360° POV footage will transcend even this, giving you the ability to step back into said memory and notice things you never even caught at the time of recording.

Besides, an array of spherical cameras has the dual benefit of serving as input for computer-vision-powered “inside-out” tracking on AR/VR devices, giving your headset or glasses complete spatial awareness of their environment. Think of it like a miniaturized self-driving-car sensor array for your head.

#6: 360 Cameras Serve As Excellent HDRI Light Probes & Reflection Probes

One click Image Based Lighting (IBL) for your 3D scenes

Take a $300 Ricoh Theta combined with a $10 smartphone app, and you can produce bracketed-exposure HDR environment maps at a speed and quality that was impossible for even a seasoned panorama pro in the past. Now you can recreate the precise lighting captured from a physical space and bring it into the 3D realm inside your computer, all with the click of a button.
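Under the hood, that one-click merge amounts to dividing each bracket by its shutter time and blending with a confidence weight. A simplified sketch (not the actual app’s algorithm; it assumes linear input for brevity):

```python
import numpy as np

def merge_brackets(images, exposure_times):
    """Merge bracketed LDR exposures into one HDR radiance map.

    images: list of float arrays in [0, 1], same shape, assumed linear
    exposure_times: shutter time (seconds) of each bracket

    Each pixel's radiance estimate is image / exposure_time; brackets are
    blended with a hat weight so blown-out and crushed pixels count less.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # peaks at mid-grey, 0 at the extremes
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-8)

# Three brackets of a scene with true radiance 0.5: fast, mid and slow
# exposures all agree once divided by their shutter times.
imgs = [np.full((2, 2), 0.5 * t).clip(0, 1) for t in (0.25, 1.0, 2.0)]
hdr = merge_brackets(imgs, [0.25, 1.0, 2.0])
print(np.allclose(hdr, 0.5))  # True
```

Production tools (e.g. OpenCV’s Debevec merge) also recover the camera response curve, but the weighting idea is the same.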

Previously, making a spherical HDR required shooting a series of stills on a panoramic nodal head at different exposures, or photographing a highly reflective ball, with plenty of post-processing — as I did below for my Advanced Maya project at USC in 2012:

The HDR probe image was then used to light this Mechwarrior model on the right. It took me several hours to generate this result.

Producing an HDR 360 still image (called an HDRI map in the 3D/VFX world), even with the crude means of photographing a (not so) shiny ball at different exposures, allowed me to realistically combine CG elements with live-action footage. This will become imperative for realistic mixed reality experiences, and I have no doubt spherical 360 capture will play a key role in grounding CG objects in physical spaces.

But what truly blows my mind is that what I did in college painstakingly over several hours just 5 years ago can now be done in near real-time.

In the experiment below, I use a smartphone-connected 360 camera, the Insta360, not only to create an HDR map but also to serve as a real-time reflection probe. I can absolutely picture us placing or integrating small 360 cameras around a space, so that CG objects overlaid on those environments react realistically. In the meantime, this is an indispensable tool on any project or set where CG will be combined with live-action footage. Trust me, your 3D/VFX folks will thank you.

A Recent Mixed Reality Experiment Where I Turned my Insta360 Camera Into BB8

#7: 360 Cameras Will Be Able to Discern Depth Pretty Soon — Giving Us Even More Superpowers in Post-Production

Think faux depth-of-field, fog and green screening without a green screen (what?!)

A simplistic breakdown of how a depth pass works: pure white represents objects closest to the camera, pure black objects on the horizon. Once combined with the input RGB (color) feed, we can do some fancy stuff in post, including moving the camera.

Picture the idealized Lytro dream… the ability to capture the entire lightfield and change EVERYTHING in post. Sure, it’ll be a while until consumers and prosumers get their hands on that tech, but while the professionals forge the bleeding-edge idealized platform, Z-depth maps are coming to 360 and 2D video capture sooner than we think… and in some cases, they’re already here.

Current 360 capture solutions that provide a depth map: Google JUMP already does this, Nokia’s working on it, and if you’ve got Nuke + Ocula, you can extract one from any well-aligned stereo 360 video.

No, it’s not just the upcoming Facebook 6-DOF 360 video cameras… the Google JUMP stitcher can already give you a Z-depth map today to go with your 3D 360 footage. Nokia is updating its OZO stitcher to provide a similar depth map (at least toward the front of the scene).

Thus far, depth map generation has been done manually and painstakingly for the 2D-to-3D conversion of feature films, or has required specialized rigs and complex post-production workflows. So, why does this matter exactly?

Lytro demonstrating set replacement without a greenscreen

With an accurate depth map, we get superpowers in post-production: the ability to add depth of field (bokeh) and fog, and even do things like relighting and green screening (chroma keying) without a freaking greenscreen! No longer do you need to set up that cyclorama stage or worry about immaculate lighting and green spill.

Converting 2D to stereo content is a natural advantage as well: even flat 2D content will be viewed in VR/AR headsets in the future, and it would be nice to have some depth in the picture (imagine watching a 3D TV in VR). Otherwise-menial rotoscoping tasks, like making sure you put that BB8 robot *behind* the actor, can be extracted and applied automatically, saving countless hours of post-production. Artists can start focusing on the fun stuff.
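As a toy illustration of “roto for free,” here’s a hedged sketch of a per-pixel depth test plus depth fog. The arrays and function are hypothetical stand-ins for a real compositing pipeline:

```python
import numpy as np

def composite_with_depth(plate, plate_depth, cg, cg_depth):
    """Per-pixel depth test between live-action and CG, plus depth fog.

    plate / cg:             (H, W, 3) float colour images
    plate_depth / cg_depth: (H, W) distance from camera (smaller = closer)
    The CG element only shows where it is nearer than the plate, which is
    exactly the occlusion roto you'd otherwise paint by hand.
    """
    cg_wins = (cg_depth < plate_depth)[..., None]  # boolean occlusion matte
    out = np.where(cg_wins, cg, plate)
    depth = np.where(cg_wins[..., 0], cg_depth, plate_depth)
    fog = np.clip(depth / depth.max(), 0.0, 1.0)[..., None]  # linear fog with distance
    return out * (1.0 - 0.3 * fog)                 # gently darken far pixels

# 2x2 toy: the CG (red) element is nearer than the plate in the left column only
plate = np.zeros((2, 2, 3)); plate_depth = np.full((2, 2), 5.0)
cg = np.zeros((2, 2, 3)); cg[..., 0] = 1.0
cg_depth = np.array([[2.0, 9.0], [2.0, 9.0]])
out = composite_with_depth(plate, plate_depth, cg, cg_depth)
print(out[0, 0, 0] > out[0, 1, 0])  # True: CG red survives only where it's closer
```

The one-line boolean matte is the entire “BB8 behind the actor” problem once the camera hands you depth.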

Takeaway: Don’t pigeonhole innovations in their infancy. Cross-pollination is inevitable, and will likely bring more benefits to content creators and consumers of existing mediums in the near term, while we all develop “VR-first” and “AR-first” mediums for the future.

Tightly framed content isn’t going anywhere anytime soon. Think creatively about the new tools and capabilities 360 video capture brings into your creative arsenal. It’s not all about “spherical video,” “volumetric video” and the slew of immersive canvases that have arrived with the VR revolution… it’s just as much about the creative, technical and storytelling capabilities VR capture affords good old-fashioned 2D rectilinear video. And this is where I believe a good chunk of the excitement will be in the next few years, as the industry as a whole figures out the language of storytelling and the value propositions of VR/AR-first mediums — and until HMD install bases hit numbers viable enough to motivate content creators to go all in.

Forget the hard boundaries that we are oh so tempted to draw. Don’t say, “Oh I won’t bother with 360 video until it’s more feasible.” Instead view 360 capture as a brand new tool in your existing arsenal, with a unique set of capabilities that can deliver a lot of value to the content you’re making today — VR or not. Open your mind to the possibilities and embrace the cross-pollination.

If you liked this, hit that heart ❤ button and show some love. Got questions? Drop em below! Consider Subscribing for more dope/insightful content.

