Brian: Ever since our first Pixel five years ago, the Pixel Camera has set the bar and re-shaped the industry. Our leadership in computational photography and machine learning has led to some remarkable camera capabilities over the years, and has let Pixel users take some extraordinary pictures, even when we’ve used ordinary camera components. With Pixel 6, we’re applying all that software expertise to a fully upgraded camera system, for the most advanced smartphone camera in the world. It’s leagues ahead of our previous Pixel Cameras, from the hardware, to the software, to the computational photography. For starters, let’s take a look at the main camera. Both Pixel 6 and 6 Pro have a massive new 1/1.3-inch, 50-megapixel sensor. We combine adjacent pixels on the sensor to get extra-large 2.4-micron pixels! With Night Sight, the Pixel Camera has always been able to do a lot with very little light, but now the primary sensor captures up to 2 and a half times as much light, thanks to those huge pixels. This means you’re gonna get photos with even greater detail and richer color.
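The idea of combining adjacent pixels can be sketched in a few lines. This is a minimal numpy illustration, not the sensor's actual readout path (real binning happens on-chip and has to respect the Bayer color pattern), but the arithmetic is the same: four small pixels' signals are summed into one larger pixel.

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Combine each 2x2 block of sensor pixels into one larger pixel.

    Illustrative sketch only: four adjacent pixels' photon counts are
    summed into a single output pixel, so each output pixel gathers
    roughly four times the light of one native pixel.
    """
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# A 50-megapixel sensor binned 2x2 yields a 12.5-megapixel image whose
# effective pixels are twice as wide -- 2.4 microns instead of 1.2.
raw = np.ones((4, 6))   # toy "sensor" of unit photon counts
binned = bin_2x2(raw)
print(binned.shape)     # (2, 3)
print(binned[0, 0])     # 4.0 -- four pixels' worth of light
```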
Both phones also have completely new ultra-wide cameras with larger sensors than before, so photos look great when you wanna fit more in your shot. Pixel 6 Pro has a larger ultrawide front camera that records 4K video. It also has a telephoto lens with 4X optical zoom for getting in close. That’s not easy to fit in a phone without making it super thick. To get that much magnification, the Pixel Camera uses what’s called “folded optics.” A flawless prism bends the light 90 degrees so that the camera can fit in the body of the phone. And, you can get up to 20X zoom with an improved version of Pixel’s Super Res Zoom, our advanced computational approach to combining optical and digital zoom.
Finally, the sensor behind the telephoto lens is even larger than the primary rear sensor in past Pixel phones, so you can capture great low-light zoomed shots with Night Sight. When this amazing hardware is paired with Tensor, we can build new camera features that were impossible before. Video is a great example. Video is a hard use case for computational photography because you’re basically taking lots of photos very quickly. Applying a machine learning algorithm to a single photo is very different than running the same algorithm for each frame, 60 times per second. We’ve always dreamed of getting Pixel’s video quality up to the signature photo quality, but it wasn’t possible. The processor just wouldn’t be able to keep up.
So we spent years on this problem and have made a lot of progress. We started by developing more efficient methods for applying tonemapping edits very quickly, and doing everything we could to get the most out of the sensor. We also developed an algorithm called HDRnet, which can deliver the signature Pixel look much more efficiently. With Tensor, we’re able to embed parts of HDRnet directly into the ISP and accelerate it to make the process faster and more efficient.
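To make the role of a tonemapping edit concrete, here is a stand-in sketch using a simple global Reinhard-style curve. This is not HDRnet itself — HDRnet uses a small neural network to predict local color transforms — but it shows the kind of cheap per-pixel edit that gives each frame its final look, which is the work being accelerated in the ISP.

```python
import numpy as np

def reinhard_tonemap(hdr: np.ndarray) -> np.ndarray:
    """Global Reinhard tone map: compress a high dynamic range into [0, 1).

    Illustrative stand-in only. Like HDRnet's output, it is a fast
    per-pixel operation: bright values are compressed toward 1 while
    shadow detail is preserved, fitting the scene into displayable range.
    """
    return hdr / (1.0 + hdr)

frame = np.array([0.1, 1.0, 10.0, 100.0])  # linear scene luminances
print(reinhard_tonemap(frame))  # highlights compressed, shadows kept
```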
With this system, Pixel 6 can now run HDRnet on 4K video at 60 frames per second. That’s 498 million pixels each second. And this is what Pixel 6 video looks like. You can see what a huge improvement this is. The color accuracy is excellent, with a big boost to the vividness, the stabilization, and overall video quality. This is all thanks to the bigger camera sensors, Google’s cutting-edge machine learning, and the efficiency gains from the new Tensor chip. It’s a giant step forward. Have you ever had a perfect photo ruined by something random in the background? Let’s say you want to be the only one on the beach in your photos. If you don’t have access to a deserted island or don’t want to spend hours in a photo editing suite, Pixel’s new Magic Eraser can do the job! In Google Photos, you’ll see suggestions for distractions you might wanna remove from your photo.
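The 498-million figure checks out directly from the 4K UHD frame size:

```python
# 4K UHD video is 3840 x 2160 pixels per frame; at 60 frames per
# second, the per-second pixel throughput quoted above is:
pixels_per_frame = 3840 * 2160            # 8,294,400 pixels per frame
pixels_per_second = pixels_per_frame * 60
print(pixels_per_second)                  # 497664000, i.e. ~498 million
```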
Erase them all at once, or tap to remove them one by one. What really sets this feature apart is how we’re able to figure out what you’re trying to remove and how well we can fill in what’s in its place. Even if something is not suggested, you can still erase that distraction. Just circle it, and it disappears. And you can use Magic Eraser on Pixel to clean up all your photos, whether you took them a minute ago or years ago. Here’s Hollywood production designer Hannah Beachler to show you what’s possible with Magic Eraser.
Hannah: Hey, Hannah Beachler, production designer and world builder. Working on “Black Panther 2: Wakanda Forever.” I oftentimes consider myself a story designer, and I’m designing towards moods and tones. I love this building. I have to go back to my dad. We would drive around and we would just make up fantasy places. I can just remember seeing everything that he would say. Yeah, if I crop that. When I go on a location, I’m photographing hundreds of places. And for me, I have to envision what that certain place is gonna look like for our story. Oh, my gosh, this was the key gym in “Creed,” and they had this big workout station that we had to take out. And this is what I could have showed them all along. If I just have that gone. Oh, Magic Eraser. Yes. Oh, wow. That is not the right period car, so let’s get rid of that. So I can present this as a 1950s space. Game changer! Ooh. It’s so integral and so important to have a blank canvas to have the creative conversations, and I think anyone should be able to do that.
And I think they should be able to do it on the spot. Wow. Once you start using that muscle of seeing past something, you’re gonna do it a lot, and then you’re gonna see the world and the creative, and I think that it’s a great tool. I know I’m gonna use it. Brian: Here’s a problem everyone has seen before.
You go to take a picture, but the lighting isn’t great, and the subject is moving around. You can’t quite get the perfect photo. It’s a little blurry. Here’s the same scene with and without our new Face Unblur feature. Normally, this great moment would be a blurry throwaway photo. There’s too much motion and not enough light. It’s a physics problem that Tensor’s on-device machine learning can solve.
Let’s talk about what’s happening here. Before you even take a picture, the Pixel Camera is using FaceSSD to figure out if there are faces in the scene. If they’re blurry, it spins up a second camera, so it’s primed and ready to go when you tap the shutter button. In that moment, Pixel 6 takes two images simultaneously, one from the ultrawide camera and one from the main. The main image uses a normal exposure to reduce noise, and the ultrawide uses a faster exposure that minimizes blur. Machine learning fuses the sharper face from the ultrawide with the low-noise shot from the main camera to get the best of both into the image. As a last step, Pixel Camera takes one final look to see if there’s any remaining blur in the fused image, estimates the level and direction of the blur, and then removes it for you. In all, it takes four machine learning models combining data from two cameras to deliver the scene you know you saw but couldn’t get from your camera until now, with Face Unblur.
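The fusion step described above can be sketched in miniature. This is a hypothetical simplification, not Pixel's implementation: in the real pipeline the two frames are aligned first, the mask comes from a face segmentation model, and a final ML pass estimates and removes residual blur. Here a soft mask simply decides, per pixel, which camera's frame to draw from.

```python
import numpy as np

def fuse_face(main_img: np.ndarray, wide_img: np.ndarray,
              face_mask: np.ndarray) -> np.ndarray:
    """Blend the sharper face from the short-exposure ultrawide shot
    into the low-noise main shot.

    face_mask holds values in [0, 1]: 1 where the (hypothetical)
    detector found a face, 0 elsewhere, with fractional values giving
    soft edges so the seam is invisible.
    """
    return face_mask * wide_img + (1.0 - face_mask) * main_img

main = np.full((4, 4), 0.2)   # normal exposure: low noise, blurry face
wide = np.full((4, 4), 0.8)   # fast exposure: sharp face, noisier
mask = np.zeros((4, 4))
mask[1:3, 1:3] = 1.0           # "face" region found by the detector
fused = fuse_face(main, wide, mask)
print(fused[0, 0], fused[1, 1])  # 0.2 (main camera) 0.8 (ultrawide face)
```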
So most of the time we wanna eliminate blurriness from our pictures, but sometimes a bit of blur can actually add to the picture, especially for action shots that don’t seem to have much action… Pixel 6 introduces Motion Mode, which brings a professional look to your nature scenes, urban photos, or even a night out. Typically, you’d create these effects with panning and long exposures, techniques that require fancy equipment and lots of practice. Motion Mode makes it easy. For action shots like this one, the Pixel Camera takes several photos and combines them, using on-device machine learning and computational photography to identify the subject of the photo, figure out what’s moving, and add aesthetic blur to the background.
For a nature shot like this, the camera applies computational photography and ML to align multiple frames, determine motion vectors, and interpolate intermediate frames that are blurred so you get this silky-smooth waterfall. That sounds hard, but watch how easy this is… Nothing captures the energy of New York like a fast-moving subway train. With Motion Mode, just wait on the subway platform for the right moment, snap a photo of your friend, and you have a vibrant artistic photo to remember this moment. Now, we know that not every picture is taken in the Pixel Camera app. Some of these new camera capabilities and image quality improvements extend to any app that uses the camera, including your favorite camera apps.
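The core of the waterfall effect can be sketched as averaging a burst of aligned frames. This is a deliberately simplified toy: the shipped feature additionally estimates motion vectors and interpolates intermediate frames so the blur is smooth rather than stuttery, but the principle — motion smears, static regions stay sharp — is the same.

```python
import numpy as np

def long_exposure(frames: list[np.ndarray]) -> np.ndarray:
    """Average a burst of aligned frames to synthesize a long exposure.

    Whatever moved between frames (water, a passing train) averages
    into a smear; regions identical in every frame stay sharp.
    """
    return np.mean(np.stack(frames), axis=0)

# Toy burst: a static "rock" (column 0) and moving "water" -- a bright
# pixel sliding down column 1 from frame to frame.
frames = []
for i in range(3):
    f = np.zeros((3, 2))
    f[:, 0] = 1.0      # static region: identical in every frame
    f[i, 1] = 1.0      # moving highlight: a different row each frame
    frames.append(f)

result = long_exposure(frames)
print(result[:, 0])    # static column stays sharp: [1. 1. 1.]
print(result[:, 1])    # motion smeared into uniform blur (~0.333 each)
```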
Here’s Snap founder and CEO Evan Spiegel to tell you more. Evan: Hey, I’m Evan. The camera was once a tool for documenting important life moments. Today, people use the Snapchat camera for so much more: as a platform for self-expression, creativity, and visual communication with friends. For Snapchatters, speed really matters. Billions of snaps are created every day, and our community wants to be ready to Snap everyday moments as they happen.
That is why we are always working on new ways to help Snapchatters get to our camera as quickly and easily as possible. We’re excited to announce today that we are partnering with Google on a Pixel 6 feature called “Quick Tap to Snap.” This Pixel-first feature puts the Snap camera directly into the lock screen for fast and easy access to the Snapchat camera. Just tap the back of your phone twice and you’re into the camera. This new feature is a speedy and simple gesture that will help our community Snap more moments before they disappear. We’ve designed Quick Tap to launch into “Camera Only” mode so Snapchatters can create Snaps even if they haven’t yet unlocked their device. Once you make a great Snap that you want to share, simply authenticate on your device to unlock the full app experience. With Quick Tap to Snap, Pixel 6 will be the fastest phone to make a Snap, and we’re also working with Google on exclusive augmented reality Lenses, and bringing other key Pixel features, like live translation, directly into the Chat feature on Snapchat.
Snapchatters can talk to their friends in more than 10 languages, and conversations will be translated in real time. These are the first features coming to Snapchat on Pixel 6, and we can’t wait to bring more innovation to our community with our partners at Google.