A few years ago, smartphone cameras were different. In 2017, the Google Pixel 2, Samsung Galaxy Note 8, and Apple iPhone 8 had 12(ish)-megapixel sensors powering their rear cameras. Fast-forward to today, and most phones, from the Google Pixel 7 Pro to the Samsung Galaxy S23 and Galaxy S23+, have 50-megapixel primary sensors, and the Samsung Galaxy S23 Ultra features a 200-megapixel shooter. Apple even got wise, sticking a 48-megapixel primary camera in its newest iPhone 14 Pro.

Megapixel counts on Android phone cameras have ballooned into the hundreds. If you’ve used any of these high-megapixel cameras, you may have noticed that they don’t kick out 50-megapixel or 200-megapixel images by default. The Pixel 7 series doesn’t even have the option to save full-resolution shots. So where are all those pixels going?

What is pixel binning?

Pixel binning is the process of combining multiple adjacent pixels to create a superpixel. In data processing, binning is a process that sorts data points into groups (or bins). In digital photography, the data points that are binned are individual pixels, sometimes referred to as photosites.

Depending on the full resolution of the image sensor in your phone’s camera, pixels are binned into groups of four, nine, or 16 (four-to-one and nine-to-one binning are sometimes called “tetra-binning” and “nona-binning,” respectively). The Galaxy S23 Ultra uses pixel binning to combine groups of 16 adjacent pixels, using its 200-megapixel image sensor to capture 12.5-megapixel photos (the math checks out: 200 ÷ 16 = 12.5). In the Pixel 7 and Pixel 7 Pro, sets of four pixels combine to create 12.5-megapixel photos (50 ÷ 4 = 12.5).
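The arithmetic is simple enough to sketch. The snippet below is a minimal illustration of the division above; the 108-megapixel nona-binning case is an extra example for a sensor configuration some phones have shipped, not one discussed here:

```python
# Effective resolution after binning: total pixels divided by the bin group size.
def binned_megapixels(sensor_mp: float, group_size: int) -> float:
    """Megapixels produced when `group_size` adjacent pixels merge into one superpixel."""
    return sensor_mp / group_size

# Tetra-binning (2x2 groups) on a 50 MP sensor, like the Pixel 7's primary camera
print(binned_megapixels(50, 4))    # 12.5
# 16-to-1 binning (4x4 groups) on a 200 MP sensor, like the Galaxy S23 Ultra's
print(binned_megapixels(200, 16))  # 12.5
# Nona-binning (3x3 groups) on a hypothetical 108 MP sensor
print(binned_megapixels(108, 9))   # 12.0
```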

What does pixel binning accomplish?

But why do this at all? We put the question to Judd Heape, Qualcomm’s vice president of product management for camera, computer vision, and video. The answer comes down to light sensitivity and space constraints.

On their surfaces, camera sensors have millions of pixels, which are discrete units that sense light. As the resolution of smartphone cameras increases, so does the number of pixels on those sensors’ surfaces. While cramming more pixels into the same physical area makes your phone’s camera more capable of seeing fine detail, it limits how well the camera can function in low light and makes for an inferior dynamic range in photos.

“Small pixels can’t capture as much light,” Heape explains. “It’s basic physics.” And modern smartphone pixels are small. It’s common to see pixel sizes around 1 μm (a single micrometer, or micron). To put that into context, an average strand of human hair is about 80 μm thick.

Larger pixels typically mean better image quality, especially in low light. Pixel size matters because the smaller a pixel is, the less surface area it has to collect incoming light. All else being equal, a sensor with 0.8-μm pixels takes a dimmer picture than a sensor with 1.2-μm pixels.
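The reason is geometry: a pixel’s light-collecting area grows with the square of its side length (its pitch). A rough sketch of the comparison above, assuming area alone determines light gathered:

```python
# Light-gathering area scales with the square of pixel pitch (side length).
def relative_light(pitch_a_um: float, pitch_b_um: float) -> float:
    """How much more light a pixel of pitch_a collects vs. pitch_b, by area alone."""
    return (pitch_a_um / pitch_b_um) ** 2

# A 1.2 um pixel has 2.25x the light-collecting area of a 0.8 um pixel
print(relative_light(1.2, 0.8))  # 2.25
```

So the 0.8-μm sensor isn’t just slightly dimmer; each of its pixels collects less than half the light of its 1.2-μm counterpart.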

Manufacturers can do a few things to combat this. Smartphone cameras typically combine information from multiple frames in a technique called computational photography, using software to create a single image that contains data from several photos. There’s also the option to use a physically larger sensor, allowing each pixel more surface area to collect light.

Google used a comparatively huge 1/1.31-inch sensor for the primary camera in the Pixel 7 series, which afforded it a relatively large pixel size and a high megapixel count. However, this approach requires devoting more internal space to camera hardware, which means either less room for other parts, like the battery, or a unique camera bump, like the one seen in Google’s recent Pixel phones.


How does pixel binning work?

Pixel binning combines adjacent pixels to create artificially large “superpixels” that have higher light sensitivity than their constituent photosites do on their own. In most digital cameras, each pixel on an image sensor filters light to collect only certain wavelengths. Broadly speaking, 25% of the pixels are tuned to red light, 25% are tuned to blue light, and 50% are tuned to green light (green gets extra representation because the human eye is more sensitive to green light than it is to other colors).
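That 25/50/25 split comes from the classic Bayer color filter array, which tiles the sensor with repeating 2×2 blocks of red, green, green, and blue filters. A toy sketch of the layout, just to show where the proportions come from:

```python
# A Bayer color filter array tiles the sensor with 2x2 RGGB blocks,
# so green pixels outnumber red and blue two to one.
def bayer_pattern(rows: int, cols: int) -> list[list[str]]:
    tile = [["R", "G"], ["G", "B"]]  # one repeating 2x2 unit
    return [[tile[r % 2][c % 2] for c in range(cols)] for r in range(rows)]

mosaic = bayer_pattern(4, 4)
flat = [px for row in mosaic for px in row]
print(flat.count("G") / len(flat))  # 0.5  -- half the pixels sample green
print(flat.count("R") / len(flat))  # 0.25
print(flat.count("B") / len(flat))  # 0.25
```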

When a phone uses pixel binning, its image signal processor (ISP) averages the input from sets of four (or nine, or 16) neighboring like-colored pixels into superpixels to generate image data. The result, Heape says, is a trade-off: “Resolution goes down, light sensitivity goes up.”
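On quad-Bayer-style sensors, those like-colored neighbors already sit together in 2×2 clusters, so tetra-binning reduces to averaging each cluster. The sketch below shows the averaging step on made-up raw values for a single color channel; a real ISP does this in hardware alongside demosaicing and noise reduction:

```python
# Tetra-binning sketch: average each 2x2 cluster of like-colored raw values
# into one superpixel. `raw` dimensions are assumed to be even.
def tetra_bin(raw: list[list[float]]) -> list[list[float]]:
    h, w = len(raw), len(raw[0])
    return [
        [
            (raw[r][c] + raw[r][c + 1] + raw[r + 1][c] + raw[r + 1][c + 1]) / 4
            for c in range(0, w, 2)
        ]
        for r in range(0, h, 2)
    ]

# A 4x4 grid of raw sensor values becomes a 2x2 grid of superpixels
raw = [
    [10, 12, 40, 44],
    [14, 16, 48, 52],
    [ 8,  8, 20, 24],
    [12, 12, 28, 24],
]
print(tetra_bin(raw))  # [[13.0, 46.0], [10.0, 24.0]]
```

Averaging four noisy readings also smooths out random sensor noise, which is a big part of why binned shots look cleaner in the dark.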

You can’t quite replicate the low-light performance of physically large pixels by combining smaller ones. As Heape put it, “The distance between them isn’t infinitely small,” so pixel binning can introduce additional artifacting. Still, remember that we’re dealing with fractions of a literal hair’s breadth. The gaps between pixels are microscopic, and software has become good at compensating for the tiny data losses that techniques like pixel binning introduce.


Because pixel binning can make up for the low-light deficiencies inherent to camera sensors with small pixels, it also means that features that depend on high megapixel resolutions don’t have to be exclusive to phones with physically enormous camera sensors. For example, many newer flagship phones support 8K video recording, which requires roughly 33 megapixels per frame and is therefore impossible with a more traditional 12-megapixel camera sensor.
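The 8K requirement is easy to verify. An 8K UHD frame is 7680 × 4320 pixels:

```python
# 8K UHD video is 7680 x 4320 pixels per frame -- about 33 MP, well beyond
# what a 12 MP sensor can supply without upscaling.
width, height = 7680, 4320
frame_mp = width * height / 1_000_000
print(round(frame_mp, 1))  # 33.2
print(frame_mp > 12)       # True -- a 12 MP sensor can't natively capture 8K
```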

All those megapixels are also great for punching in without using a dedicated telephoto lens. As Heape explains, “In adequate lighting conditions, the high-resolution capabilities of the sensor can be leveraged to achieve excellent quality digital zoom.” And because the S23 Ultra can bin groups of 16 pixels to kick out 12.5-megapixel stills, its low-light performance is better than what you’d see from a lower-resolution camera with pixels of the same 0.6-μm size.
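That zoom headroom also follows from the numbers. Cropping the center of a high-resolution sensor gives digital zoom without upscaling, and the linear zoom factor is the square root of the pixel ratio. A rough sketch, assuming the output stays at 12.5 megapixels:

```python
import math

# Maximum "native" crop zoom: the linear zoom factor you get by cropping a
# high-resolution sensor down to the output resolution, with no upscaling.
def max_native_zoom(sensor_mp: float, output_mp: float) -> float:
    return math.sqrt(sensor_mp / output_mp)

print(max_native_zoom(200, 12.5))  # 4.0 -- a 200 MP sensor cropped to 12.5 MP
print(max_native_zoom(50, 12.5))   # 2.0 -- a 50 MP sensor cropped to 12.5 MP
```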

Pixel binning: Magic, basically

Pixel binning is a creative workaround to the physical limitations imposed by ever-increasing megapixel counts on image sensors that must remain tiny to fit inside our phones. It’s rapidly become the industry standard, and it’s not hard to see why. It gives visually accurate photos in lighting that would otherwise require noise-inducing high ISO or blur-prone long exposure times. It’s not quite magic, but it is clever engineering. And really, aren’t they pretty much the same thing?

Take photos like a pro

Are you feeling photographic? Check out our comparison of the cameras on the Google Pixel 7 and Google Pixel 7 Pro phones. We also have tips on how to edit your shots in Google Photos and how to take and edit RAW photos on Android.
