Smartphone photography battle moves from cameras to chips
Several low- and mid-range phones now sport more than one camera on the back, but the number of cameras mounted on the rear panel of a smartphone does not determine the quality of the photographs the device produces. If it did, Google's Pixel phones, with their single rear-facing camera, never would have garnered the praise and recognition they have received as some of the best (if not the best) handsets for taking photos.
The secret lies in the specialized, AI-capable chips that companies like Google, Huawei, Samsung and Apple use to improve the images snapped by their devices. The Pixels, for example, rely on a dedicated Visual Core chip that has been inside each unit from the Pixel 2 series forward. The chip includes Machine Learning functionality that automatically adjusts the camera's settings to match the lighting and other aspects of the scene being photographed. It also drives the HDR+ feature, which combines several images to produce the best possible shot.
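Google has not published the exact pipeline that runs on the Visual Core, but the basic idea behind multi-frame features like HDR+ can be illustrated with a short, hypothetical sketch: merge an already-aligned burst of frames by averaging, trading several noisy captures for one cleaner result. The frame count, noise levels, and function names below are illustrative assumptions, not Google's implementation.

```python
import numpy as np

def merge_burst(frames):
    """Merge an aligned burst of frames by averaging.

    Averaging N independent noisy observations of the same scene reduces
    noise roughly by a factor of sqrt(N), which is the basic intuition
    behind multi-frame merging. (This sketch assumes the frames are
    already aligned; real pipelines spend most of their effort on
    alignment and on rejecting pixels that moved between frames.)
    """
    stack = np.stack([f.astype(np.float32) for f in frames])
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

if __name__ == "__main__":
    # Simulate a burst: one clean scene observed through sensor noise.
    rng = np.random.default_rng(0)
    scene = rng.integers(0, 256, size=(480, 640, 3)).astype(np.float32)
    burst = [np.clip(scene + rng.normal(0, 25, scene.shape), 0, 255)
             for _ in range(8)]
    merged = merge_burst(burst)
    print(f"mean error, single frame: {np.abs(burst[0] - scene).mean():.1f}")
    print(f"mean error, merged burst: {np.abs(merged - scene).mean():.1f}")
```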
AI/Machine Learning-capable chips have become the latest battleground among premium handset manufacturers
During Apple's new product event on Tuesday, the company mentioned that part of the A13 Bionic chipset contains a "neural engine" that helps the new iPhones take usable photos in low-light conditions. As the tech giant pointed out during the introduction of the new iPhone Pro models, this is computational photography, which relies on digital image processing as opposed to optical processing. One example is the Deep Fusion feature, which will be enabled on the new units via a software update sometime this fall. As we've already explained, the phone will capture eight images before the shutter button is even pressed. Add the image captured when the shutter is pressed, and the neural engine has nine images to analyze in a split second to determine which combination of frames creates the best shot. Unlike the HDR+ process, which averages out the multiple images, Apple says that Deep Fusion will use the nine images to assemble a 24MP image, going pixel by pixel to produce the best picture with high detail and low noise.
"When you press the shutter button it takes one long exposure, and then in just one second the neural engine analyzes the fused combination of long and short images, picking the best among them, selecting all the pixels, and pixel by pixel, going through 24 million pixels to optimize for detail and low noise."-Phil Schiller, Senior Vice President of Worldwide Marketing, Apple
Reuters cites Ryan Reith in a new report published today. Reith, who works for research firm IDC, says that these AI/Machine Learning chips have become the latest battlefield where premium smartphone manufacturers take on the competition. He notes that the manufacturers competing in this arena are the ones able to invest in the chips and software required to optimize the cameras on their handsets. "Owning the stack today in smartphones and chipsets is more important than it’s ever been, because the outside of the phone is commodities," he says. The IDC program vice president also pointed out that these chips will be used in future devices, mentioning Apple's rumored AR headset as a future beneficiary of the company's work on neural engines. "It’s all being built up for the bigger story down the line - augmented reality, starting in phones and eventually other products," he said.
The triple-camera setup of the Apple iPhone 11 Pro
While features like Night Mode and an ultra-wide camera might sound revolutionary the way Apple explains them, the company is simply catching up to some of the more innovative Android manufacturers. And with both Huawei and Google about to unleash their latest premium handsets, it will be interesting to see where the major manufacturers stand once the dust settles.