Making a VR Film

In 2017, I made a live-action virtual reality (VR) series with a friend (whom I worked with at Pocket Gems) who was the creative visionary and conceived the idea. I was new to both VR and filmmaking, and this was a great opportunity to learn a lot in a time-boxed project (8 months) with a concrete deliverable.

We made a 4-part series called ‘Playback’, which was fully funded by Oculus Studios (Facebook). Over 15k users downloaded and watched the series, and it was one of Oculus’s featured projects in the Oculus App Store. We were also selected for the Google Jump Start Program, which gave us access to an Odyssey 360 camera.


Our thesis had a number of components: we were trying to figure out whether we could drastically improve the quality-to-cost ratio of VR film production by introducing constraints. Ultimately, we were looking to create a platform and process for profitable VR content creation (‘Netflix for VR’).

  • First Person: The series is shot primarily from the protagonist’s point of view, which lets the viewer experience the story as that character. We thought that stereo (a different video for each eye) was important for realism, as it mimics how our own eyes work.
  • Narrative Stories: Focus on storytelling and include some light interaction and choice to give the user agency and improve immersion. Black Mirror later tried this approach with Bandersnatch (which was produced after our series).
  • Freemium: Start with episodic content, with the first episode free and an upgrade path to unlock the rest of the experience (a model we learned from our time working on Episode).
  • Technology: Build software to create proprietary production techniques that increase quality and reduce cost, even on low-end Android devices (to increase the size of the addressable audience).


This was our short pitch for the production, called Playback:

“Playback is a revolutionary VR miniseries that pushes the boundaries of narrative storytelling and viewer interaction. It’s a cautionary tale about life-changing technological advancement and the personal and societal implications of its adoption.

You experience life through Alex’s POV as his company prepares to launch its groundbreaking new product: a device that records moments and allows people to re-live those moments in all five senses.

As the story progresses, the boundary between Alex’s technology and his own reality begins to fall away, pressing the viewer to question what their own reality will mean as our world becomes increasingly virtual.”


The entire process took around eight months and was broken up as follows:

Month 1: Pitch and Funding

  • High Level Pitch: We wrote up a high-level pitch and deck for the Oculus team, who then approved a $50k budget for Playback (over an alternative zombie concept).
  • Budget: We prepared a high-level budget to make sure that we could actually produce the experience with the $50k.

Month 2-3: Writing and Production Process

We split the creative work (writing and character development) from the production work (technology, team, production process) and worked on each in parallel:

  • Overall Workflow: We mapped out how our overall process would work, from filming all the way to getting the series into a user’s hands. We went through this entire flow, including creating an app and deploying it to the app store.
  • Vertical Slice: We picked a single scene and tried to make it look ‘production ready’ with a very short test clip. This set the quality bar for the production.
  • Story arc and characters: The pitch needed to be broken down into a story, and the characters needed to be developed.
  • Detailed Script: After the story, a detailed script with dialogue was prepared for 4 episodes, including choice/branching narrative.
  • Core Team and Contractors: We needed to assemble a small team (which was mostly unpaid); the core team included our Directors of Photography (Sensorium), an NYU film student, and a visual designer who worked with us on nights and weekends.
  • Technology experiments: We ran lots of experiments with both hardware (e.g. cameras, rigs, lighting) and software (e.g. Depthkit) to figure out which tools we could use during production.

Month 4: Pre-Production

  • Casting: We decided to hire SAG actors (under a New Media agreement), which meant we had access to better actors but had to follow certain guidelines. We reviewed 100+ audition tapes and auditioned 3 finalists for each major role.
  • Prepare for shooting: This required a lot of work – we needed to pick venues, rehearse with actors, design the sets, and create very detailed shot lists to make sure we got all the footage we needed.
  • Shoot week: We planned to shoot the whole series in one week. This was a very structured and organized period starting early in the morning and ending late at night. Our “team” included over 30 people working at different points during the week and this was the most “expensive” part of the production process.

Month 5-8: Post Production

Post production took almost half the total time, and we underestimated how long this part of the process would actually take by a fair bit.

  • Shot selection: We reviewed all the raw footage and picked the shots that we wanted to use for the series.
  • Stitching: For stereoscopic VR, we needed to ‘stitch’ the video and image files together and make sure that each eye saw an image that did not make the user feel like they were seeing double (very off-putting). This was difficult and time consuming.
  • Engineering: We added components like a ‘cyber world’, user AR interaction, branching narrative, and a number of performance improvements which required focused engineering and design time.
  • Film festivals: We chose a few festivals and created pitch decks and submission entries for them.
  • Launch and review metrics: Once launched we needed to analyze the metrics and user behaviour and compare to our original hypotheses.


We had a strict budget, and part of our thesis was to see how high we could push the quality bar with a fixed, restricted budget (comparing ourselves to experiences with 20x+ bigger budgets). We raised $50k from Oculus (as a grant), and we ended up very close to our original budget.

We prepared a detailed budget with 10% flex built in, and tracked the costs vs estimates meticulously in a spreadsheet. Most creative projects go over budget, and having this kind of discipline allowed us to keep costs under control and scrutinize all expenses carefully.
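The 10% flex mentioned above is simple arithmetic; as a sketch (the split below is illustrative only, since the real line items lived in our spreadsheet):

```shell
# Sketch of the contingency math: the $50k grant with 10% held back as flex.
# The split is illustrative; the real line items were tracked in a spreadsheet.
TOTAL=50000
FLEX=$(( TOTAL / 10 ))       # $5,000 reserved for overruns
PLANNED=$(( TOTAL - FLEX ))  # $45,000 allocated to budgeted line items
echo "planned=$PLANNED flex=$FLEX"   # prints: planned=45000 flex=5000
```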

Given that our thesis was to try to create high quality content for a fraction of the price that was typical in the industry, it was important for us to control for costs carefully.


[Images: AR interaction; photogrammetry (texture mapping to create a ‘cyber world’); stereo rig for filming]

We tested out a number of different technologies in the production:

  • Stereo 4k footage: Each eye gets a different video (mimicking our normal eyes) at 4k resolution, which makes the experience feel more real to the user. This is hard to achieve on low-end devices.
  • Proprietary image + video cut out system: In order to get the quality of video that we wanted, we built a system to constrain the ‘action’ to a small part of the scene and overlaid video on top of a still image. This required a very precise shooting and post production process that we created ourselves.
  • Photogrammetry: We mapped the inside of one of our scenes (3d geometry + textures) to allow seamless transitions between live-action scenes and a computer-generated world.
  • Volumetric video: We used Depthkit to capture volumetric video (3d), for a scene with a ‘ghost’ in the virtual world.
  • AR Interaction: At the start of the first episode, we created a HUD which the viewer could interact with – reading emails, the news, etc. – which added to the sci-fi feel and first-person interaction.
  • Branching Narrative: We created choice points where users could send different texts (via their AR HUD), which changed the outcome of the story depending on the choices they made.
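Our build was in Unity, but the core of branching narrative is just a lookup from (current scene, viewer choice) to the next scene. Here is a minimal sketch of that routing, with scene and choice names invented purely for illustration:

```shell
# Hypothetical sketch of branching-narrative routing: the viewer's choice
# (e.g. a text reply sent from the AR HUD) selects which scene plays next.
# Scene and choice names below are made up for illustration.
next_scene() {
  local current="$1" choice="$2"
  case "$current:$choice" in
    hud_texts:reply_friendly) echo "scene_meet_jess" ;;
    hud_texts:reply_cold)     echo "scene_work_late" ;;
    *)                        echo "scene_default" ;;
  esac
}

next_scene hud_texts reply_friendly   # prints: scene_meet_jess
```

In practice the table of transitions was authored alongside the script, so each branch in the screenplay mapped to one entry like the ones above.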


We created something that was innovative both in the way it approached storytelling and in the technologies we implemented for VR. We decided to apply to a few film festivals (Sundance, Tribeca) in their VR experience segments. We got fairly far with Sundance but were ultimately not selected.

I went to Sundance in 2018 anyway, and had a great time skiing and watching movies. Our Directors of Photography (Sensorium) had another VR experience that was featured, so it was fun to see some of their other work there.


Here is a short video showcasing the first episode (although it’s much better experienced in VR).

We released the final product in the Oculus App Store. We were featured and had reasonable download and view rates (over 15k people). Users got the first episode for free, and had to pay to unlock the next three episodes. There were some bugs in the upgrade process that negatively impacted our ratings but for people who were able to upgrade, many seemed delighted.

We were still a step function away on both addressable audience and conversion-to-payer rates from justifying our original hypothesis and investing further. Neither of us is still working in VR.


  1. Scope creep: This was a classic mistake that we should have realized earlier (as we’ve done this many times in games) but we got too excited about some of the tools and technologies and probably added too much (e.g. volumetric video capture, branching narrative) that was not necessary to test our original hypothesis.
  2. Stereoscopic vs. Monoscopic: I think we should have killed the stereoscopic requirement early in post production. Stitching of video so that it does not look warped or incorrect for both eyes is a real pain, and this would have saved us a few months as well as allowed us to ship a more polished experience overall.
  3. Developer experience: We built most of the application in Unity and also used tools like Blender and Adobe Premiere Pro. The workflow was pretty cumbersome, and it was fairly manual to create scenes and test them out in VR. We could have built a lot of automation ourselves, but it was not worth it for a single series.
  4. VR User Experience: At the time, we were optimizing for users on Oculus Go devices (essentially Android phones strapped to your face), and this entire UX was terrible (buggy, battery hog, performance and storage issues, etc.). Standalone devices have improved the experience substantially, but we’re still not at the stage where this will be mainstream.
  5. Running out of energy: Towards the end, it became a real grind to get it out the door. Everyone was tired, and the project took 30% longer than any of us expected.

Overall it was a great learning experience, but it made me realize I don’t want to work in the ‘content creation’ business, especially in entertainment. I much prefer working on tools for entrepreneurs and businesses, and hope to spend more of my career building technology for this audience.

Making a YouTube Video

I made a YouTube video, with the goal of understanding what it takes to create something with reasonable production quality, completely on my own. This is a short summary of my process and learnings for others who may want to try something similar.

My subgoal was to build empathy for YouTube content creators, and the best way I know to do that is to actually go through the process. I capped the time investment at one full day, including setting up and learning all the hardware and software.

Before you start

  1. Get a good quality camera and microphone. I used the Canon M50 creator kit with the Rode mic (see below) as it came highly recommended by a number of YouTube channels and blogs. In the end I just used my Apple AirPod Pros, because they made for a simpler workflow and I wanted to save time (so the audio quality was not the best). If I were to do this more frequently, I would buy a separate USB mic like the Blue Yeti Nano.
  2. Familiarize yourself with the software. I used Final Cut Pro (90 day free trial) for editing, Camera Live to stream my camera to my computer, and OBS Studio for recording my screen (the latter two are open source).
  3. Decide what story you want to tell. This is the hardest part of any piece of media creation, and the main thing that matters.
  4. Write up a rough script. Each shot took me way too many takes to get right, so I memorized what to say (like an actor), and it went more smoothly from that point.
  5. Write up a shot list. I did mine in this spreadsheet, although in the future I would improve it into something I could easily share with an editor. Naming the shots lets you find and edit the footage more easily in post production.
  6. Run through your entire workflow with a short clip. For example, I did not realize that OBS was compressing my files into MKV (and at a low quality), which did not play nicely with Final Cut Pro – it would have sucked to lose all my footage and start again.
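If you hit the same OBS problem, one common fix (assuming you have ffmpeg installed) is to remux the MKV into an MP4 container without re-encoding, for example:

```shell
# Remux an OBS .mkv recording into an .mp4 container so editors like Final
# Cut Pro can import it. -c copy copies the audio/video streams without
# re-encoding, so it is fast and lossless. "recording.mkv" is a placeholder.
remux_for_editing() {
  local src="$1"
  local dst="${src%.mkv}.mp4"
  ffmpeg -i "$src" -c copy "$dst"
}

# Usage: remux_for_editing recording.mkv   # writes recording.mp4
```

Recent versions of OBS also have a built-in remux option (File > Remux Recordings) that does the same thing.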

Pre Filming

Here are all the things you should do before filming, so that the filming process goes as smoothly as possible.

  • Story: I decided to do an instructional video on using a DSLR camera for Zoom and other video calls. I had been looking for an option like this myself and found many of the existing videos incomplete.
  • Script: I wrote a script in a Google Doc for what I planned to say. This was really helpful to read from when filming so I made sure to say everything I wanted to and did not lose my place.
  • Shot List: I wrote up the following shot list in Google Sheets; in the future, I’ll also add editing notes for post production. This would allow someone editing my video to add captions, effects, or transitions much more easily.
  • Audio set up: I tested a few different mics, including the Rode Mic that came with my creator kit, the MacBook Pro Mic, and the AirPod Pro mics. The Rode Mic definitely sounded the best, but was not a USB mic and made my workflow a bit harder as I could not record the audio and video directly using OBS on my Mac. I decided to go with the AirPod Pros, but would buy a USB mic in the future. I tested the levels to make sure that the audio was good to go.
  • Video set up: I tested the video, the encoding (RAW is best but harder to work with) and the lighting. I only used light from a large window and it worked pretty well.
  • Scene: I used the living room of my house and made an effort to clean the clutter out of the background. This kind of thing makes a real difference to the overall feeling of quality in your video.
  • Full workflow: Make sure you run through the entire workflow with a short clip so you don’t have to re-do everything because of a mistake. I hit a problem where short clips had no audio due to an encoding issue, and it was a real pain to fix in post.


Filming

Here are a few things that I learned during filming, and things I’d suggest watching out for when you are making your own video.

  • Long takes: I really struggled to get long takes completed. I would use filler words, or look away and it was frustrating. In the end I shot much shorter takes or just tolerated some worse takes as I ran out of time.
  • Hand waving: I used my hands too much, and it made me look a bit manic. In the future I would try a shot that includes my torso so this looks more natural (vs. hands popping up on the screen), or just chill out the hands a bit.
  • Looking into the lens: I was not looking at the camera lens, but at the little preview screen of myself instead. In the future, I’ll stop using that preview screen and make an effort to look at the lens. This makes the viewer feel like you are making eye contact with them, and is more engaging.
  • Smiling: I needed to smile more, as it would make me seem more friendly and likable on video.


Editing

I edited the video myself to learn the tools and see what I could do in a few hours. I also tried spending $25 on Fiverr and $50 on UpWork to hire freelancers to do the video editing for me and to make sure that I understood their platforms. The self-edited version is clearly the worst of the three below.

Self Edit

I used Final Cut Pro, which was pretty intuitive, and added some captions, an intro screen, a short music clip, and some transitions, and corrected the audio levels. It was fun to learn how to use the software!

Spending $25 on Fiverr

I hired an editor for $25 total on Fiverr. This was much better than my effort. The pro added soft background music throughout, zoomed in and zoomed out shots, and improved color grading and audio levels significantly.

Spending $50 on UpWork

I hired another pro for $50 on UpWork. This edit was by far the best, and I would pay this price point again in the future.

The editor did good color grading and clean transitions, blurred the backgrounds of my screen recordings, added soft background music, integrated some images and on-screen text, and added a nice intro and outro sequence that made it feel more professional.


I set up a creator account on YouTube and watched some of the videos from the Creator Academy. I would watch more videos if I got more serious, particularly to learn how to get more traffic.

I uploaded the video, added a description and some tags and also some Amazon Affiliate links to the YouTube description to learn that part of the process. No one has bought any of my recommendations just yet and I’ve only had about 120 views after about two weeks.


Overall this was a fun project, and I may make some more videos in the future. Next time, it would probably take me half the time to film and prepare the audio and video files and the shot lists.

I would definitely pay someone on UpWork or Fiverr to do the editing for me in the future, as 1) they would do a better job than me, and 2) the $25-50 cost seems worth the time saved.

I would also get a better microphone.

Use your Fancy Camera on Zoom

tl;dr: A better camera, with front-facing lighting, will make you look much better. A fancy camera is great, but a pain to set up. The best option for most people is to attach an HD webcam to their monitor, like the ones recommended by Wirecutter.

This post will summarize how to set up your fancy DSLR or Mirrorless camera with Zoom, and it will work for most video calling or web conferencing tools. It will make you look clearer and better simulate being in person, as we all transition to working from home.

I’d also suggest getting a decent audio set up. The best option for most people is a wired USB headset with a mic that sits a consistent distance from your mouth.

Please note, this guide only covers Macs and Canon cameras. It is meant to be a companion to my YouTube video below.

Fancy camera on Zoom guide

A number of other guides recommended using the Cam Link and an HDMI cable, but these were sold out and require a ‘clean’ HDMI output feed from the camera; that route is a little more fussy to set up, but easier once you have it running.


Here is a screenshot of my MacBook Pro Zoom feed, the feed from the built-in camera on my LG 5K monitor, and the feed from the Canon M50 (in that order). I took these screenshots directly from Zoom, and I hope you can see the difference between the three 🙂


The most important things to get right when setting up your home video conferencing kit are video and audio quality. Quality video and audio make interacting virtually feel more natural, and may be worth the investment if you spend lots of time on video calls and plan to work in a distributed fashion for an extended period of time.

This entire set up costs under $1,000, which is still expensive but I think worth it if you’re working from home all the time.

  1. Canon EOS M50 ($400-600): This was highly recommended by a number of blogs and YouTube channels that I follow. It has a very good price-to-value ratio and costs around $450 for the camera and the lens. I bought the ‘creator kit’ from Amazon (linked above), which was $550 and includes a Rode mic as well.
  2. Dummy Battery ($25): The dummy battery means you don’t have to change the battery often – each battery only gives you about 2-3 hours of video, so it’s pretty essential.
  3. USB micro to USB C cable ($10): This is how you connect your camera to your computer. You could also use a standard micro USB to USB cable with a USB to USB C adapter. Try to get a fast USB 3.0 cable, as you’ll get some lag otherwise.
  4. Amazon Basic Tripod ($15) : This is a very basic tripod but does the job keeping my camera well positioned behind my monitor.
  5. [Upgrade] Sigma 16mm f/1.4 ($400): I upgraded the stock lens to a Sigma 16mm f/1.4, which I recommend. It is a prime lens (no zoom) with a large aperture (better in low light) and a short focal length (helps blur the background). I really like this lens, and it takes really nice portrait photos as well. If you use it outside, though, you’ll need ND filters (sunglasses for your lens), as otherwise too much light gets in and your photos are overexposed.


NOTE: Canon just released (May 27, 2020) a beta webcam utility that makes this whole process much easier from a software side. Here is their video to set it up – it saves on all the steps below but the software is still in beta.

The following steps below still work, but the webcam utility is easier!

You need three pieces of software to make this work and they are all open source or free:

  1. Camera Live: An open source tool that creates a live video feed from your camera. Download the latest alpha (13) if you are on the most recent version of macOS Catalina (10.15.4 at the time of writing).
  2. Camtwist: Allows you to broadcast the live video feed from Camera Live to other tools, like Zoom, via a Syphon server.
  3. Zoom: Download the latest version of Zoom. They now allow virtual cameras again, so you should not have any issues.

Office Set Up

I set up the camera above my laptop screen, and don’t use my large monitor while on Zoom with the fancy camera. I position the camera above the laptop screen because it keeps the camera at eye level (how a real person would look at me), and allows me to see the person I’m speaking with while making eye contact with the lens.

I hope you enjoy using your new video conferencing set up!

Raspberry Pi Setup

A Raspberry Pi is a super cheap ($35-60) computer. I spent a few hours setting up a Raspberry Pi, connecting it to my home wifi, enabling remote access and setting up WordPress.

My goal was to get a home network set up and give myself a platform to try things like hosting WordPress locally, playing with mini home automation projects (e.g. changing the light outside my door when I’m in a meeting), or shooting a long-horizon timelapse of each day outside our window with a cheap camera.

What do you need?

I spent around $100 to get all these components (with Amazon links):

  1. Raspberry Pi 4 (4GB Ram)
  2. Micro SD Card (32GB)
  3. USB C SD Card Reader (for Macbook Pro)
  4. USB C Charger (for Raspberry Pi)

What can you do with a Raspberry Pi?

I read a bunch of articles, but here are a few that I recommend:

  1. A couple of threads on Hacker News, my favorite being the good samaritan who shared the live bus schedule with travelers
  2. Hardware add ons and corresponding use cases
  3. A good write up of all the home automation software options
  4. Home automation ideas here and here
  5. Set up a WordPress site

How do you set it up?

I mostly followed this guide, which was pretty good; it’s designed for folks who want to use their Raspberry Pi without a screen, as a standalone device.

The main steps are:

  1. Install the Raspbian operating system on your SD Card (don’t bother with the Etcher step, you don’t need that)
  2. Set up your Raspberry Pi to connect automatically to your home WiFi (SD card slot is on the other side)
  3. SSH into your Raspberry Pi and change your login credentials
  4. Download Real VNC and set up and update the operating system
  5. Install and set up Docker to allow containerization of applications
  6. Install a web app (I installed WordPress afterwards)
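The WiFi auto-connect in step 2 (plus enabling SSH for step 3) comes down to dropping two files onto the SD card’s boot partition before first boot. A sketch for the Raspbian releases of that era, where the mount path and the network credentials are placeholders to substitute with your own:

```shell
# Headless-setup sketch for Raspbian: two files on the SD card's boot
# partition enable SSH and configure WiFi on first boot. The BOOT path and
# the credentials below are placeholders, not real values.
BOOT="./boot"   # stand-in for the mounted boot partition (e.g. /Volumes/boot on a Mac)
mkdir -p "$BOOT"

# An empty file named "ssh" tells Raspbian to start the SSH server on boot.
touch "$BOOT/ssh"

# wpa_supplicant.conf is picked up on first boot and used to join your WiFi.
cat > "$BOOT/wpa_supplicant.conf" <<'EOF'
country=US
ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
    ssid="YOUR_NETWORK_NAME"
    psk="YOUR_NETWORK_PASSWORD"
}
EOF
```

After the Pi boots and joins the network, you can SSH in (the default user on Raspbian at the time was `pi`) and continue with the credential change in step 3.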

A couple of other useful videos are:

  1. Raspberry Pi getting started beginner’s guide from Crosstalk Solutions, though it assumes you have a screen plugged in.
  2. A useful video guide explaining Docker and containers, though it’s a little more technical and in depth.

How do you set up WordPress?

I followed this guide, which let me set up WordPress via the command line (not using Docker). This was pretty straightforward, except that installing MariaDB needed a different command (updated below):

sudo apt install default-mysql-server php-mysql -y

I’m looking forward to playing around with this some more, and potentially investing in some light home automation in the future.