Planisware

International software company Planisware was preparing to host an event in Philadelphia, and needed a video production company to capture footage and work it into several deliverables. They found Video City Productions, reviewed their list of needs with us, and together we came up with a game plan for completing all of the required projects.

Our main goal was to produce a sizzle recap film that highlighted the benefits of the event, which could be used to promote future Planisware events. The client also wanted us to capture and edit three multi-camera live keynote speeches, which included a software demonstration.

The software demonstration presented the biggest challenge for our team to overcome. It meant filming a screen while a Planisware representative walked attendees through the new software in real time. This is easier said than done. If you’ve ever tried using your phone to film something displayed on a TV, you’ll have noticed the differences in color, the occasional flicker across the screen, and the difficulty focusing as the image shifts back and forth.

Not only are we using significantly better gear than a phone, we also have a deep understanding of how to capture this type of footage.

Those color fluctuations and flickers come down to exposure settings. They’re all wrong for the situation: they don’t match the lighting in the room, and you’re forced to choose between exposing for the projector and exposing for the people and everything else in the room that you’re filming.

Remember back in the day when your favorite song would come on the radio and you wanted to add it to your mixtape? You wouldn’t sit there with a microphone held up to your boombox speakers, recording the sound as it traveled through the air. You would record the song straight to a cassette in the boombox itself to get a crystal clear copy. The same is true for recording video of a demo being shown on a screen: one method is always going to be of much better quality than the other.

Now, if you’ve ever given a presentation before, you’ll know that you have the option of inserting the PowerPoint slides in post-production. For example, in a basic setup, you might have two cameras filming the stage (one close-up and one wide shot as a safety angle). You can then take the PowerPoint file after the fact and incorporate the slides by simply editing them in.

While it may seem simple enough, it’s certainly not an ideal scenario. For starters, somebody has to sit through the entire presentation, manually noting when each slide should appear and syncing it up with the audio.

From an editing perspective, after you have synchronized the slides, you then have to listen to the narration and decide, from a viewer’s perspective, when it’s most important to see each slide. A viewer needs a good balance between seeing the slides as a reference and seeing the presenter on stage, connecting with them.

This becomes far more difficult with a live demonstration, because someone is running the demo in real time. With that comes mouse movement, clicks, drop-downs, and other impromptu visuals that can’t be recreated without a screen recording of someone actually doing it.

Let’s take a look at some of the strategies for approaching this scenario, including the good, the bad, and the ugly.

Screen Recording Software Struggles

There are plenty of screen recording options to choose from. QuickTime Player comes built into every Mac, and there are lots of third-party alternatives.

Two issues arise when choosing which software to work with. The first is the video side of things, which comes down to frame rate. You have to make sure the frame rate of the screen recording matches the frame rate the cameras are capturing the live presentation at. If you don’t coordinate the two before recording, they won’t sync up and you’ll wind up with what’s called “drift”.

When drift sets in, your screen recording of the slides ends up on a different timebase than the camera footage, and it’s very difficult to edit the two sources together into something fluid. Basically, you’re back to square one: manually syncing everything. Needless to say, this is not ideal.
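To put a number on it, here’s a quick back-of-the-envelope sketch (the frame rates and one-hour duration are hypothetical examples, not figures from this shoot) of how a small mismatch adds up:

```python
# Illustrative arithmetic only: how much drift accumulates when a
# screen recording and the cameras run at slightly different rates.

def drift_seconds(rate_a: float, rate_b: float, duration_s: float) -> float:
    """Seconds of offset after duration_s between two recordings
    that are supposed to cover the same stretch of real time."""
    frames_a = rate_a * duration_s
    frames_b = rate_b * duration_s
    # Express the frame-count mismatch as time at the faster rate.
    return abs(frames_a - frames_b) / max(rate_a, rate_b)

# A one-hour keynote: cameras at 30 fps, screen capture at 29.97 fps.
print(round(drift_seconds(30.0, 29.97, 3600), 2))  # → 3.6
```

A mismatch of just 0.03 frames per second leaves the capture several seconds out of sync by the end of an hour, which is exactly the manual re-syncing headache described above.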

Unfortunately, that level of coordination is not always possible, however vital it is. One of the biggest issues with this approach is getting the various screen recording software your different clients use to sync with the hardware you’re working with as a videographer. Which leads us to the second issue: hardware compatibility.

The hardware has to be able to handle the screen recording. While you’re presenting on your laptop, that same laptop is recording its own screen and saving the file to its hard drive. If the laptop isn’t fast enough, it may not perform under those circumstances, and one of two things will happen. Either the laptop crashes on stage (a huge event faux pas), or it runs the demonstration just fine but can’t keep up with the recording software, which crashes and leaves you right back where you started. Except now you’ve wasted the time, energy, and money on software for a screen recording you never got.

You can also run into problems when dealing with multiple sources, such as several laptops in one environment. That was exactly the situation on the Planisware job: we had to record four different sources from four different laptops at different times throughout the day. Obviously, that takes a lot of coordination to execute. Someone (from our team, from AV, or from the client’s side) had to be responsible for starting and managing those recordings. Now you’re juggling a logistical issue, a hardware issue, and a technical issue all at once.

Post-Production Problems

One way to dodge all of these issues is to do everything after the event. Basically, you find someone to recreate the demo afterwards, in a controlled environment where a dedicated computer handles the screen recording with no risk of crashing (and people are on hand to pause the demo and make immediate fixes if anything goes wrong).

Obviously, this is not ideal either. For starters, it adds more time to the project, since you need someone to actually redo the demo. Whether it’s someone on our end or a representative from the client’s side, someone has to find the time to recreate the software demo exactly the way it was done during the presentation. That means delays in the turnaround time for your video.

A Successful Solution

The third option is to record the feed off of the projector, or from the mixing board. This is the ideal scenario, and the one we took on the Planisware project, but it’s not as easy as you might think.

First, you need a hardware encoder that can convert the laptop’s output into a signal your recorder can capture. That also means you need a recorder that can handle the converted signal.

Luckily, we’ve got the technology to keep up. Our Atomos Samurai Blade, paired with an HDMI-to-SDI converter and a Decimator MDS, gives us the perfect configuration to pull an SDI feed off the back of the projector’s output. From there, we can scale the signal as needed and record the 1080p60 feed at 1080p30.

Here’s where we’re going to get a bit technical, but this piece is important. One of the things we try to do in these situations is record at 24 frames per second, which creates a filmic look. It’s an aesthetic we prefer, and one of the reasons so many clients gravitate towards us.

Some may choose to film at 30 frames per second, which produces a more broadcast-TV look, while others choose 60 frames per second for that very familiar “soap opera” look you’ve probably seen many times before.

These different frame rates have their own advantages, and we don’t want to discredit them; each fills a need. For example, if you record something at 60 frames per second and reinterpret it at 24, you can create truly beautiful slow motion, which we love to do. But for a live recording, we find we get the best footage at 24 frames per second.

However, a problem arises when recording a synchronized source from the projector. The computer generates the image at 1080p60 (or at 720 if you’re using an older computer). That means if you try to record it at 24 frames per second, you’re sampling against a mismatched cadence, missing key information and winding up with stuttery, blurry images. So you record the feed at 30 frames per second, and set the other cameras to 30 as well, so that everything stays synchronized and your post-production process actually works.
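To see why, here’s a rough sketch (the frame indices are purely illustrative, not a real recorder’s algorithm) of which source frames each target frame would have to pull from a 60 Hz feed:

```python
# Why 30 fps divides cleanly out of a 60 Hz source while 24 fps doesn't.
# Purely illustrative; a real device handles cadence in hardware.

SOURCE_FPS = 60

def source_frames_needed(target_fps: int, n: int = 6) -> list[int]:
    """Index of the source frame on screen at each target frame time."""
    return [int(i * SOURCE_FPS / target_fps) for i in range(n)]

print(source_frames_needed(30))  # → [0, 2, 4, 6, 8, 10]: a clean 2:1 drop
print(source_frames_needed(24))  # → [0, 2, 5, 7, 10, 12]: an uneven cadence
```

At 30 fps you grab every other source frame; at 24 fps the spacing alternates between two and three frames, which is the cadence mismatch described above.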

If you’ve been paying close attention to all of this technical talk, you may have noticed an apparent inconsistency: we said the signal comes out at 1080p60, but we record at 1080p30. Rest assured, everything balances out. Since 60 divides evenly by 30, the recorder simply drops every other frame, leaving you with the 30 frames per second you actually need. The cameras can then record at 30 frames per second as well, which gives more natural motion and the aesthetic we prefer.
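For the curious, here’s a minimal sketch of that every-other-frame drop, with plain integers standing in for frames (an actual recorder does this in hardware):

```python
# Decimating a 60 fps sequence to 30 fps by keeping alternate frames.
# Integers stand in for frames; this is a toy model, not a recorder.

frames_60p = list(range(12))   # a fifth of a second of 1080p60 "frames"
frames_30p = frames_60p[::2]   # keep every other frame

print(frames_30p)                          # → [0, 2, 4, 6, 8, 10]
print(len(frames_60p) // len(frames_30p))  # → 2, the 60:30 ratio
```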