Apple’s chip ‘binning’ explained | Macworld

Over the past few weeks, you’ve probably heard the term “binned” in reference to the chips inside the iPhone 17e and MacBook Neo. But what does this mean? Simply put, binning is the process of taking an entire group of something and separating it by characteristics to sell or use it differently.
Its origins date back to agriculture, where the yield of a single harvest was separated into bins. The best pieces would be ideal for individual sale and would go into a bin destined for the market. The pieces that weren’t as visually appealing went into a bin that would be sold in bulk at a discount, for processed food products. The worst food in terms of quality and appearance went to another bin to be sold as animal feed or fertilizer.
Today, binning is used in almost every mining, harvesting, or manufacturing industry, from gemstones to clothing and, of course, semiconductors. If a RAM chip fails testing at a clock speed of 3000MHz, for example, it isn’t scrapped; it’s rated and sold as a 2800MHz chip instead.
All major chipmakers have used binning tactics for years, including Intel, AMD, and Nvidia. But Apple made the term more popular by using “binned” chips in popular products. Here’s how the process works and how Apple uses binned chips to its advantage.
The binning process explained
Processors, including Apple’s, are commonly binned in two ways: by clock speed and by manufacturing defects. Chips are tested at various frequencies and voltages and separated into those that pass validation at the desired speeds and those that only operate reliably at lower speeds.
Chipmakers can then sell the faster chips at a higher price or, in Apple’s case, integrate them into high-end products where top-notch performance is expected. Apple does not disclose the frequencies of most of its chips, and the final speed at which the chip can operate depends to a large extent on the heat dissipation of the targeted device.
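Speed binning amounts to a simple sorting procedure. Here’s a minimal sketch of the idea; the frequencies and the pass/fail test are hypothetical, not Apple’s actual validation process:

```python
# Minimal sketch of speed binning (hypothetical frequencies).
# Each chip is tested at descending clock speeds and placed in the bin
# for the highest speed at which it validates.

def bin_by_speed(passes_at, speed_grades):
    """Return the highest speed grade a chip validates at, or None if it fails all."""
    for speed in sorted(speed_grades, reverse=True):
        if passes_at(speed):
            return speed
    return None

# Example: a chip that validates up to 3.0 GHz but fails at 3.2 GHz
# lands in the 3.0 GHz bin.
grades = [2.8, 3.0, 3.2]
chip_max = 3.0  # highest stable frequency found in testing
print(bin_by_speed(lambda s: s <= chip_max, grades))  # 3.0
```

In practice the “test” is a battery of validation runs at each frequency and voltage, but the sorting logic is this simple: every chip goes into the bin for the best grade it can sustain.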
The most obvious method of “binning” is to disable parts of a chip in order to save products that otherwise would have failed during manufacturing.

The iPhone 17e uses a “binned” version of the A19 chip with one fewer GPU core.
David Price / Foundry
Modern processors feature tens of billions of transistors, etched onto a sheet of silicon by shining high-frequency ultraviolet light through a “mask” of the circuit pattern. This is repeated layer after layer and the precision required is incredible.
A typical silicon wafer — a large, flat, round slice of crystal about a foot in diameter — will produce about 500 chips like the A18, but a large percentage of them will have a defect that prevents them from working properly. If Apple simply threw those away, it might get 200 usable chips per wafer, or fewer. The percentage of usable chips is the wafer’s “yield.” Chip manufacturing is paid for per wafer, so the higher the yield, the more usable chips you get out of it and the lower the cost per chip.
Modern chips are designed with many repeated and functionally identical areas. If there are six GPU cores, each GPU core is exactly the same. This repetition can be used for redundancy in the manufacturing process, allowing manufacturers to make defective chips usable in other products.
With the right design, any GPU core with a manufacturing defect can be “fused off” and ignored when running software. That turns a defective chip with a 6-core GPU into a working 5-core chip. The technique can be applied anywhere large parts of the chip are repeated: CPU and GPU cores, cache memory, memory interface circuits, and so on.
Which Apple products contain binned chips?
Binned chips have powered Apple products for about a decade. In 2018, the 3rd-generation iPad Pro arrived with a version of the A12 called the A12X. Where the A12 had a 6-core CPU and a 4-core GPU, the A12X featured an 8-core CPU and a 7-core GPU.
As we would soon learn, the A12X was actually designed with 8 GPU cores. Yields were bad enough that Apple had to disable one GPU core per chip to get enough usable chips per wafer to keep costs in line. At the start of 2020, the fourth-generation iPad Pro arrived with the A12Z processor. It was the exact same chip as the A12X, but with that eighth GPU core enabled: manufacturing yields had improved enough to make this possible.

The entry-level MacBook Air used a “binned” version of its chip with one or two fewer GPU cores.
Ida Blix
When the M1 debuted in the MacBook Air, the chip featured 8 GPU cores, but the entry-level model had one GPU core disabled, which gave Apple many more usable chips per wafer and reduced the cost of the M1.
Today, Apple sells many products containing binned chips. The iPhone Air uses the A19 Pro, just like the iPhone 17 Pro, but one of its 6 GPU cores is disabled. The iPhone 17e uses a binned version of the A19: you get 4 GPU cores in the 17e while the standard iPhone 17 gets 5. The entry-level MacBook Air has an M5 with two GPU cores disabled (8 instead of 10). And the MacBook Neo uses an A18 Pro with a disabled GPU core.
Binned chips allow Apple to improve yields and reduce chip costs. They also let Apple build cheaper products with lower-performance chips without having to design a whole new chip just for them. And as one of the few companies designing both its own chips and its own hardware, that gives Apple a huge advantage.
How does binning impact performance?
If you’re using a product with a “binned” version of a chip, are you really missing out on the full experience? As is often the case with tech product performance, the answer is: it depends.
All else being equal, a binned version of a chip sees a drop in peak performance proportional to what was disabled. If you go from 5 GPU cores to 4, that’s a 20% reduction in GPU cores, and you’ll typically see up to a 20% reduction in peak GPU performance.
The iPhone 17e, for example, delivers roughly 20% lower GPU results than the iPhone 17 because it has 20% fewer GPU cores. The iPhone Air, with 17% fewer GPU cores than the iPhone 17 Pro, delivers approximately 17% slower graphics benchmark results.
But it’s not that simple. Few, if any, applications are limited by the performance of a single component alone. Binned versions of a chip are used in different products with different cooling, RAM speeds, maximum clock speeds, and other performance-altering characteristics, so the performance difference is never just the result of the parts disabled in the “binned” chip.
As a general rule, the worst-case performance degradation you’ll see from a binned chip is equal to the fraction of parts disabled. Going from 10 GPU cores to 8 in the M5 will, at worst, mean a 20% reduction in performance, and only in applications that are bound by GPU throughput rather than by CPU performance or RAM speed.
Apple could do more to make it clear that products with the same name can have very different performance characteristics, but chip binning isn’t a sneaky ploy to make you pay more for less. Salvaging chips with disabled parts to produce lower-performing variants is standard industry practice, and it gives Apple a huge advantage over competitors that don’t control the entire manufacturing process.

