One of the most important questions we hear asked about phones is “How good is the camera?” The Pixel 2 phones took smartphone photography to the next level, and Google are now starting to show the world just how they created the magic.
On the Google Research Blog, Google recently explained how the Pixel 2 produces such impressive motion photos. With motion enabled, the Pixel 2 records and trims up to three seconds of video each time a photo is taken.
To keep the result from being shaky, the video portion of the capture contains motion metadata from the gyroscope and the OIS sensors. To aid the hardware motion correction, Google have also applied software stabilisation.
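To get a feel for what that gyroscope metadata is for, here is a rough Python sketch of the basic idea: integrate the recorded angular velocity over time and counter-rotate each frame by the result. The timestamps, gyro readings and variable names below are all made up for illustration; this is nothing like Google's actual pipeline.

```python
import numpy as np

# Hypothetical per-frame metadata: timestamp (s) and gyro angular
# velocity around one axis (rad/s). Real capture metadata is far
# richer; this just shows the idea of integrating gyro readings.
timestamps = np.array([0.00, 0.033, 0.066, 0.100])
gyro_z = np.array([0.00, 0.12, -0.05, 0.08])

# Integrate angular velocity over time to estimate how far the camera
# has rotated by each frame (simple trapezoidal integration).
dt = np.diff(timestamps)
angles = np.concatenate(
    [[0.0], np.cumsum(0.5 * (gyro_z[1:] + gyro_z[:-1]) * dt)]
)

# Counter-rotating each frame by its estimated angle would cancel
# the hand shake the gyroscope measured.
for t, a in zip(timestamps, angles):
    print(f"t={t:.3f}s  correction={-a:+.4f} rad")
```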
The software stabilisation aligns the background more precisely than other methods by creating depth maps of the scene and aligning objects at the same depth throughout the video.
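Here is a small sketch of why depth helps: if you align only on points at background depth, a subject moving close to the camera cannot drag the alignment off. The points, depths and the 5 metre cutoff below are invented for the example, not taken from Google's method.

```python
import numpy as np

# Hypothetical matched feature points between two frames, each with an
# estimated depth in metres. Index 3 is a close-up foreground subject.
pts_a = np.array([[10.0, 12.0], [200.0, 40.0], [150.0, 220.0], [60.0, 90.0]])
pts_b = pts_a + np.array([3.0, -2.0])      # the background shifted by (3, -2)
pts_b[3] += np.array([25.0, 18.0])         # the subject also moved on its own
depths = np.array([9.5, 10.2, 8.8, 1.3])

# Estimate the background motion using only points at background depth,
# so the independently moving subject is ignored.
background = depths > 5.0
shift = np.mean(pts_b[background] - pts_a[background], axis=0)
print("estimated background shift:", shift)   # ~[3, -2]
```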
Once the background motion for the video has been calculated, the optimal “stable camera path” is chosen and aligned to the background. The video is then trimmed to remove any excessive, accidental camera motion, and playback begins from the timestamp of the HDR+ photo.
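And the last step, sketched the same way: smooth the raw camera path to get a “stable” one, warp each frame by the difference, and trim frames where the leftover motion is still too large. A real stabiliser solves for an optimal smooth path rather than using a simple moving average, and the numbers and threshold here are arbitrary.

```python
import numpy as np

# Hypothetical raw camera path: horizontal offset per frame (pixels),
# with a big accidental jerk near the end of the clip.
raw_path = np.array([0., 1., 2., 1.5, 2.5, 3., 2.8, 3.2, 40., 41.])

# Stand-in "stable camera path": a 3-frame moving average of the raw one.
kernel = np.ones(3) / 3.0
stable_path = np.convolve(raw_path, kernel, mode="same")

# Each frame would be warped by the difference between the two paths;
# frames with excessive residual motion get trimmed away.
residual = np.abs(raw_path - stable_path)
keep = residual < 5.0
print("correction per frame:", np.round(stable_path - raw_path, 2))
print("frames kept after trimming:", np.flatnonzero(keep))
```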
For those who have not seen it, the result is amazing. It is a great example of combining hardware, software and machine learning to create an even better result. With AI beginning to be built into chipsets, there seems to be a lot of scope for companies to improve on this even further. With cameras already rather good in most flagships it is hard, but exciting, to imagine them getting even better, yet it keeps happening. Who carries a standalone camera anymore?
Do you use motion photos on your Pixel 2 phone? I try to remember to keep it turned on, and sometimes it gives a great result that I want to keep forever.