An inside look into the creation of an immersive documentary

VR be aware — Reality is not virtual

Last year, I had the opportunity to shoot and edit a prototype of a unique VR experience. The shoot took place over one week in Kinshasa, the capital of the Democratic Republic of the Congo (Central Africa).

The experience transports the viewer into the first-person viewpoint of a young boy who finds himself thrown onto the street. Besides being completely immersive, the project also presents the viewer with interactive choices.

The camera system

Many 360° camera systems are in development, though lots of them are still unavailable. Our choice was determined by a limited budget, the genre (a partially improvised docudrama), and the particularities of filming in the city of Kinshasa.

iZugar Z2X-C 
official website
A system made in Hong Kong, consisting of two modified GoPros with fisheye lenses snuggled together in a custom-made rig.

Z2X-C mounted on a bike during preliminary tests in Toronto


Economical, compact, light, not too conspicuous, and with lower hard-drive space demands.

Only two stitch lines to manage (well, one seam crosses the image like a big smile), but the overlap zone is very small. (That last point is its main drawback, with implications further down the road.)

KOLOR advises the following settings for using GoPros in a 360° video setup:

Video formats supported by the Z2X-C (FOV Wide / 4:3):
4K 3840x1920@30fps (static scenes)
2K 2800x1400@60/80fps (fast moving content)
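A rough way to compare these two modes is the angular resolution of the finished panorama: the horizontal pixel count spread over the full 360°. A quick sketch (the helper function is my own, for illustration):

```python
def pixels_per_degree(width_px: int) -> float:
    """Horizontal pixels of the equirectangular panorama per degree of view."""
    return width_px / 360.0

# 4K trades frame rate for detail; 2K trades detail for smoother motion.
for label, width, fps in [("4K", 3840, 30), ("2K", 2800, 60)]:
    print(f"{label} {width}px @ {fps}fps: {pixels_per_degree(width):.1f} px/degree")
# → 4K 3840px @ 30fps: 10.7 px/degree
# → 2K 2800px @ 60fps: 7.8 px/degree
```

This is why the 4K mode suits static scenes (more detail per degree) while the 2K mode's higher frame rate suits fast-moving content.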
Fisheye image directly from one of the cameras.

Take note of the lens flare and the blue vignette around the borders. These will influence the results of the stitch.

Creating a template

Creating a template for your specific rig can accelerate the stitching process in Autopano Video (AVP).

This profile is created in conjunction with another software package, called Autopano Giga (APG).

The important numbers to include in your XML template:

K1–K3 are the deformation (lens distortion) coefficients.
FoV (field of view) tells you how wide an angle the camera sees.
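For intuition about what those K coefficients do: they are typically the terms of a polynomial radial-distortion model, where each image radius gets scaled by a polynomial in r². Here is a minimal sketch of that classic model; the coefficient values are invented, and Autopano's exact parameterization may differ:

```python
import numpy as np

def apply_radial_distortion(r: np.ndarray, k1: float, k2: float, k3: float) -> np.ndarray:
    """Map an undistorted radius r (in normalized image coordinates) to the
    distorted radius, using the classic K1/K2/K3 polynomial model."""
    return r * (1.0 + k1 * r**2 + k2 * r**4 + k3 * r**6)

# Invented coefficients -- your APG calibration produces the real ones.
radii = np.linspace(0.0, 1.0, 5)
print(apply_radial_distortion(radii, k1=-0.25, k2=0.05, k3=-0.01))
```

Note that the centre of the image (r = 0) is never displaced; the distortion grows toward the borders, which is exactly where the stitching overlap lives.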

Imagine that your video image is projected on a sphere and you are looking at it from the centre inside.
Here is a good explanation of the different projection modes:
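The equirectangular frame that comes out of the stitcher is that sphere unwrapped: the x axis maps to yaw (longitude) and the y axis to pitch (latitude). A small sketch of the mapping (the function name and conventions are mine):

```python
def equirect_to_direction(x: float, y: float, width: int, height: int) -> tuple:
    """Convert an equirectangular pixel (x, y) to viewing angles in degrees.
    Yaw runs -180..180 left to right; pitch runs 90 (zenith) down to -90 (nadir)."""
    yaw = x / width * 360.0 - 180.0
    pitch = 90.0 - y / height * 180.0
    return (yaw, pitch)

# The centre pixel of a 3840x1920 frame looks straight ahead:
print(equirect_to_direction(1920, 960, 3840, 1920))  # → (0.0, 0.0)
```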


First, it is important to identify the cameras: with numbers or letters, emoji stickers, whatever works best for you.


Once you are rolling you need to synchronize the cameras by at least two methods, in case one of them doesn't work in post-production:

A screenshot from AVP while importing and synchronizing the cameras. I am clapping for two reasons: to synchronize the cameras with each other and with the sound.
  • SOUND: clap your hands several times
  • MOVEMENT: quickly turn the rig horizontally
  • FLASH: a quick bright light in front of all the cameras at exactly the same moment (e.g. using an umbrella)
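If you want to automate the sound method later, the usual trick is cross-correlating the two audio tracks: the clap spikes produce a sharp correlation peak at the right offset. A sketch with NumPy, using toy signals instead of real WAV data:

```python
import numpy as np

def audio_offset_samples(ref: np.ndarray, other: np.ndarray) -> int:
    """Estimate by how many samples 'other' lags behind 'ref', via the peak
    of their full cross-correlation (the claps line the two tracks up)."""
    corr = np.correlate(other, ref, mode="full")
    return int(np.argmax(corr)) - (len(ref) - 1)

# Toy demo: the same 'clap' burst appears 100 samples later in the second track.
rng = np.random.default_rng(0)
clap = rng.standard_normal(50)
ref = np.concatenate([np.zeros(200), clap, np.zeros(750)])
other = np.concatenate([np.zeros(300), clap, np.zeros(650)])
print(audio_offset_samples(ref, other))  # → 100
```

On real recordings you would load the audio tracks (e.g. with the standard-library wave module), correlate a short window around the clap, and shift the clips by the resulting sample count divided by the sample rate.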

Verify that all of the cameras are recording. Otherwise you can't use the take, except with systems of six or more cameras, and even then it will depend on the angle the faulty camera was covering.

Prepare a hiding place or become an extra in the scene: it is a 360° video shoot, so everything and everyone is in the shot.

Lights need to be disguised as practicals, marks on the ground should be a part of the scenery, and so on.

You need to adapt your methods to the technology you are using, which brings me to the MISE EN SCÈNE (staging).


It is important to keep the main subject and main elements outside the stitching seam.

Deformation increases as subjects move closer to the rig, making the seam difficult to hide in one pass. This results in extra stitching work: you will need to stitch the foreground and background in two different projects and then recombine them in a compositing program.

You got stitched! This is what a stitch looks like when it crosses a character.

It is best to keep the footprint at the nadir (the lower centre of the image) as small as possible. Don't forget to take a picture of the spot, so you can mask out the tripod, monopod, or any other support later on.

Fast moving content and moving around with the rig will always make the stitches more visible in the end result.

Chikina, our newfound stunt kid, carries the Z2X-C on his head.

Mounting the rig on a person's head is the least favourable situation for getting seamless shots. One of the most difficult things to achieve is a person walking or running with the rig; stabilized tracking shots (on a slider, in a car, on a motorcycle, …) are easy in comparison.

If it had only depended on me, I would have hidden the strap under a hat of some sort. Now the top of the head, which is our nadir, will be patched with a menu button. In any case, you will want to hide the rig and its supports as much as possible.

It was not easy to direct the other characters to look into the camera and not into Chikina's eyes. (And I couldn't stand beside the camera to point in the right direction, as there is no behind or beside the camera.) Looking at his eyes creates a top-down "God's view", while looking into the camera results in a more engaging, immersive experience.

Despite this very simple setup, the results came out better than expected.

Other tests


Sephora holding the selfie stick; you don't need to point it to capture everyone.

Cheap and light, but it turns in the palm of the hand, which makes the stitching process longer.

It is best to learn the movements of a Steadicam operator and to hold the pole as close to the base of the cameras as possible.

Ideally, you would build your own gimbal selfie stick, stabilizing two or three axes to avoid stitching headaches.

The stitching software does offer stabilization methods, but they cannot compensate for all of the movements.

Backpack mount

An image from C2 (capturing the rear of the main viewpoint), showing the pole of the backpack.

The pole of the backpack offers different angles to show more or less of the character carrying it.

This solution will require some extra masking work in post-production, to hide the pole.

Again, it is always a good idea to camouflage the backpack under clothes or in another backpack which fits the story setting better.

And the SOUND

Half of a film experience resides in its soundtrack, and with VR that is even more true.

Ideally you want the sound to be recorded and mixed with surround technology, and to follow the viewer's exploration of the 360° environment.

This shoot could only afford the bare minimum: lapel mics to record the dialogue and a Zoom recorder for the ambient sound of each set. The GoPros' own audio serves as backup.

Other solutions are out there, all depending on your budget and shooting situation. (This is not a complete list.)

Some more information on immersive audio:

Another blog article analyzing some possible solutions


Order must be

It is extremely important to organize your files clearly: make use of the date and time settings of your recording devices (which you all set before you started recording, right?). Also, mark the incomplete takes.

GoPros spit out their own generated file names. Start by renaming the files of each camera according to its ID.
- Did you know there is a rename function in OS X (right-click or Ctrl-click)?

Let's keep on organizing: you need to copy the files of C1 and C2 (or more cameras) of the same take into the same folder (that is what makes them detectable by the stitching software).
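As a sketch of that grouping step in code (the folder names raw/ and takes/, the C1/C2 subfolders, and the pairing of clips by recording order are my assumptions; adapt them to your own layout):

```python
from pathlib import Path
import shutil

SOURCE = Path("raw")    # e.g. raw/C1/GOPR0012.MP4, raw/C2/GOPR0034.MP4, ...
DEST = Path("takes")    # e.g. takes/take_001/C1_GOPR0012.MP4, ...

def group_by_take() -> None:
    """Pair the Nth clip of every camera into a shared take folder, prefixing
    each file with its camera ID so the stitcher finds all angles together."""
    clips = {cam.name: sorted(cam.glob("*.MP4")) for cam in sorted(SOURCE.iterdir())}
    for i, files in enumerate(zip(*clips.values()), start=1):
        take_dir = DEST / f"take_{i:03d}"
        take_dir.mkdir(parents=True, exist_ok=True)
        for cam_id, clip in zip(clips, files):
            shutil.copy2(clip, take_dir / f"{cam_id}_{clip.name}")
```

copy2 keeps the originals and their timestamps; swap in shutil.move if card space is tight. Pairing by recording order only works if every camera recorded every take, which is one more reason to verify that all cameras are rolling.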

How I organized my files.
The shots of all the cameras of the same take need to stay together at all times, along with the project files of the stitching software. You don't want the stitching software to lose its media file paths.

The post-production will be covered in another article.

The French version of this article was published earlier.
