Apple speaks up on creating Cinematic Mode for iPhone 13
During the launch of the iPhone 13, Apple streamed a short film titled Whodunnit, showcasing one of the biggest camera upgrades any iPhone flagship has yet seen: Cinematic Mode.
This new video mode lets users continuously shift focus while shooting 1080p video at 30fps, keeping a subject sharp or racking focus to a different subject even while the camera is rolling, among other features.
Needless to say, this takes a massive amount of computing power, and it isn't something the hardware of any previous iPhone generation could handle. It is, however, a job well suited to the flagship family's new A15 Bionic powerhouse of an SoC. Yet even with this monster powering the devices, Apple says that figuring out how to make Cinematic Mode work was a serious challenge.
Apple speaks up about the process of creating Cinematic Mode
Apple VP Kaiann Drance and Johnnie Manzari, a designer on Apple's Human Interface Team, sat down for an interview with TechCrunch to share more details on how the iPhone 13's Cinematic Mode came to be.
We didn’t have an idea [for Cinematic Mode]. We were just curious — what is it about filmmaking that’s been timeless? And that kind of leads down this interesting road and then we started to learn more and talk more … with people across the company that can help us solve these problems.
It takes a lot of processing power to track a subject and shift the camera's focus and aperture in real time, as well as to apply lens blur and image stabilization, all while intelligently calculating when and where it is appropriate to shift focus (when someone turns their head or enters a room, for example).
Because of this, the A15 chip's boost in power, along with its 5 GPU cores and powerful Neural Engine, is revved to the max when using Cinematic Mode.
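To get a rough sense of one slice of that workload, here is a minimal Swift sketch of depth-driven background blur, the effect at the heart of Cinematic Mode. This is purely illustrative and not Apple's actual pipeline: the `cinematicBlur` function, the `focusDepth` parameter, and the gain and radius constants are assumptions made for the sketch. Only the Core Image filters used (`colorAbsoluteDifference`, `colorMatrix`, `maskedVariableBlur`) are real, public APIs.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Conceptual sketch only, not Apple's implementation: given one video frame
// and its depth map (both as CIImage), keep the chosen focus plane sharp and
// blur everything else. `focusDepth` (0...1, normalized depth) and the gain
// and radius values below are illustrative, not documented Apple parameters.
func cinematicBlur(frame: CIImage, depthMap: CIImage, focusDepth: CGFloat) -> CIImage {
    // 1. Measure how far each pixel's depth is from the focus plane:
    //    |depth - focusDepth| is bright where out of focus, dark where sharp.
    let focusPlane = CIImage(color: CIColor(red: focusDepth, green: focusDepth, blue: focusDepth))
        .cropped(to: depthMap.extent)
    let difference = CIFilter.colorAbsoluteDifference()
    difference.inputImage = depthMap
    difference.inputImage2 = focusPlane

    // 2. Amplify that distance so it spans the full range of the blur mask.
    let gain = CIFilter.colorMatrix()
    gain.inputImage = difference.outputImage
    gain.rVector = CIVector(x: 16, y: 0, z: 0, w: 0)
    gain.gVector = CIVector(x: 0, y: 16, z: 0, w: 0)
    gain.bVector = CIVector(x: 0, y: 0, z: 16, w: 0)

    // 3. Blur the frame with strength driven by the mask: the farther a pixel
    //    sits from the focus plane, the stronger the blur.
    let blur = CIFilter.maskedVariableBlur()
    blur.inputImage = frame
    blur.mask = gain.outputImage
    blur.radius = 12  // maximum blur radius, arbitrary for this sketch

    return blur.outputImage ?? frame
}
```

Even this simplified version has to run on every frame of a 30fps stream, on top of subject tracking, stabilization, and deciding when to rack focus, which is why the A15's GPU and Neural Engine get pushed so hard.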
We knew that bringing a high-quality depth of field to video would be magnitudes more challenging [than Portrait Mode]. Unlike photos, video is designed to move as the person filming, including hand shake. And that meant we would need even higher quality depth data so Cinematic Mode could work across subjects, people, pets, and objects, and we needed that depth data continuously to keep up with every frame. Rendering these autofocus changes in real time is a heavy computational workload.
Manzari continues that the Apple design team started out with a "deep reverence and respect for image and filmmaking through history." They asked themselves core questions, such as "what principles of image and filmmaking are timeless? What craft has endured culturally and why?”
In doing this, certain trends emerge. It was obvious that focus and focus changes were fundamental storytelling tools, and that we as a cross functional team needed to understand precisely how and when they were used.
While cinematographic video is usually associated with high-level professionals, as it is a serious craft that is difficult to learn well, Manzari explains that Apple strove to use its hardware to make it simple and accessible, giving even amateur smartphone photographers the ability to dabble in something of this caliber.
We feel like this is the kind of thing that Apple tackles the best. To take something difficult and conventionally hard to learn, and then turn it into something automatic and simple.
For anyone who missed out on Apple's impressive cinematographic production showcasing Cinematic Mode, fully shot on the iPhone 13, here it is below: