Micron’s GDDR5X Memory Analysis – Will Nvidia’s Next Generation Pascal Graphics Cards Utilize the Standard?

Usman Pirzada

A few days back, a rumor about Nvidia utilizing GDDR5X memory in some of its upcoming Pascal offerings made the rounds. While we do not know whether the report is accurate, I thought it would be a good idea to go into a bit more detail about Micron's GDDR5X memory standard and clear up a few misconceptions. We will look at its proposed advantages as well as some of its disadvantages.

[Image] An AMD slide comparing the specifications of GDDR5 memory and HBM. @AMD Public Domain

Micron's GDDR5X memory explored - could this be what Nvidia's midrange will look like in 2016?

Let's start with the preliminaries: what exactly is GDDR5X? Curiously, this question has not been answered very clearly by Micron itself, and the details remain vague. Here is what we do know: GDDR5X is based on the GDDR5 standard and primarily doubles its prefetch while preserving "most of the command protocols of GDDR5". What that means is that while the bandwidth has been doubled, it is not, strictly speaking, an improvement of the GDDR5 standard but rather a new branch of the same, and arguably a completely new technology (contrary to what the 'GDDR'X name might suggest). One example given is the jump from DDR3 to DDR4, which happens to be a good approximate analogy for the GDDR5 to GDDR5X transition.

Also, contrary to what has been reported in the past by some sources, the Micron GDDR5X standard is not proprietary in nature; in fact, Micron has approached JEDEC to make it a universal standard. Given below is the only 'technical slide' released by Micron so far:

We can immediately see that as opposed to GDDR5's 32-byte memory access, GDDR5X supports 64-byte access (double the prefetch), theoretically doubling the memory bandwidth. Keep in mind, however, that voltages will remain exactly the same. The footprint, or the real estate the memory takes up on the card itself (one of the problems associated with GDDR5), will also halve in size, thanks to the fact that Micron has managed to double the density of GDDR5. The company is expected to make a formal announcement in 2016, with availability of the standard the same year. So the question then becomes: will Nvidia use GDDR5X in its upcoming line of GPUs (Pascal and beyond)?

Before we answer that, let's take a look at the numbers for GDDR5 and GDDR5X.

The bandwidth of GDDR5 can be computed via the following method:

[DDR Clock Speed] × 2 × [Bus Width / 8]
*This is the same clock speed shown in popular overclocking tools such as MSI Afterburner.
**All calculations given below assume the same number of GDDR5 or GDDR5X chips.

This means that the GTX 980 Ti, whose memory reads 3505 MHz (7010 MHz effective) on a 384-bit bus, has a theoretical bandwidth of [7010 × 384/8 =] ~336 GB/s.
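For those who want to play with the numbers themselves, here is a minimal Python sketch of the formula above (the function and parameter names are my own, purely for illustration):

```python
def gddr5_bandwidth_gbs(ddr_clock_mhz, bus_width_bits):
    """Theoretical GDDR5 bandwidth in GB/s.

    ddr_clock_mhz  -- the DDR clock shown in tools like MSI Afterburner
                      (half the 'effective' clock).
    bus_width_bits -- the card's total memory bus width.
    """
    effective_mhz = ddr_clock_mhz * 2  # DDR: two transfers per clock
    return effective_mhz * bus_width_bits / 8 / 1000  # MB/s -> GB/s

# GTX 980 Ti: 3505 MHz DDR clock on a 384-bit bus
print(gddr5_bandwidth_gbs(3505, 384))  # ~336.5 GB/s
```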

[Image] An extract from Micron's nomenclature documentation. @Micron

Now, while we don't know many other details about GDDR5X, I was able to find this PDF on Micron's website which reveals some interesting details; details we can use to estimate the speed and performance of this particular piece of technology. Thanks to the PDF, we know that the real clock rates of the memory will remain the same. If Micron's claim is true, then all we need to do is add a 2x multiplier. The equation for GDDR5X therefore becomes:

[DDR Clock Speed] × 4 × [Bus Width / 8]

Please note that the document lists the "real" clock speed and not the DDR clock speed; to convert it to the DDR clock speed, we first multiply the value by 2. So, for a chip with a DDR clock rate of 3505 MHz, we get the following bandwidth:

[3505 × 4 × 384/8 =] ~673 GB/s
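The same idea in Python, this time assuming Micron's claimed 2x multiplier on the GDDR5 data rate holds (which, to be clear, is unconfirmed until the full specifications are published); again, the function name is mine:

```python
def gddr5x_bandwidth_gbs(ddr_clock_mhz, bus_width_bits):
    """Theoretical GDDR5X bandwidth in GB/s, assuming Micron's claimed
    doubling of the data rate at the same real clock (unconfirmed)."""
    return ddr_clock_mhz * 4 * bus_width_bits / 8 / 1000  # MB/s -> GB/s

# Hypothetical GDDR5X part: 3505 MHz DDR clock on a 384-bit bus
print(gddr5x_bandwidth_gbs(3505, 384))  # ~673 GB/s, matching the figure above
```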

Now, if you remember the original leak, the numbers it stated for GDDR5X were a '256-bit bus width with a 7000 MHz (DDR) clock rate and an actual achieved bandwidth of 448 GB/s'. Consequently, we now have a metric to ascertain whether the rumor has even the slightest grain of authenticity. The folks over at 3DCenter have included the 2x multiplier in the DDR clock rate, which might (or might not) be a technical inaccuracy, since the real clock rates remain the same (only the 'effective' clock rate would change). To get the 448 GB/s figure, we actually have to assume a 256-bit bus and a DDR clock rate of 3500 MHz:

[3500 × 4 × 256/8 =] ~448 GB/s
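Feeding the rumored configuration into the same sketch reproduces the leaked figure exactly:

```python
# Reusing gddr5x_bandwidth_gbs from the sketch above:
# rumored Pascal configuration, 3500 MHz DDR clock on a 256-bit bus
print(gddr5x_bandwidth_gbs(3500, 256))  # 448.0 GB/s, matching the leak
```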

This is pretty close to the 512 GB/s that HBM1 currently delivers. Of course, HBM2 is a whole other ball game and runs circles around the performance advantages offered by GDDR5X. So is this memory standard dead on arrival? Unfortunately, once again, we do not have enough information to answer that question categorically, since we are missing several key details. More light will be shed on this in 2016, according to the press release.

Now, Micron has stated, and implied, that it avoided creating a brand new standard from the ground up and instead built on the GDDR5 standard. The company also claims that most of the command protocols have been preserved. The most critical question, however, is whether the interface itself is the same as GDDR5's. To put that into perspective, here are some things we feel very confident saying:

  • High Bandwidth Memory (HBM) is the memory standard of the future, and its low clocks plus low power consumption make it ideal for every form of compute.

  • Micron's GDDR5X is not going to offer a lower-power solution than HBM.

  • Nvidia is most definitely going to use HBM in its higher end offerings.

  • Nvidia might decide to swap GDDR5 for GDDR5X in its midrange offerings if (and only if) the switching costs (in terms of yield, sampling and development) are not significant. If they are significant, or if initial sampling isn't promising, then Nvidia will simply not bother switching to GDDR5X and will instead move directly to HBM2 once it achieves economies of scale.

  • Micron's GDDR5X will offer double the bandwidth capability and twice the memory density, with availability in 2016.

This is why it matters a great deal whether GDDR5X is as similar to GDDR5 as its name suggests. Nvidia will undoubtedly have next-generation offerings that run on GDDR5, so it would make sense to swap the GDDR5 on them for GDDR5X, but if and only if the switching cost is marginal. If GDDR5X is a standard that requires significant development costs before adoption, then it is very unlikely (more like impossible) that Nvidia will shift to it.

Another point to note is that graphics processors (and almost every other processor) are years in the making, and if Micron has truly only just developed this memory (its press release seems to suggest as much), then Nvidia will simply not have the option of switching, since its GPUs would already be in the fabrication stage at TSMC. The only way this could happen is if Micron approached Nvidia a long time ago and news of the standard is only now becoming public.

And since we are now entering the domain of sheer speculation, I will end on this note: Nvidia's use of GDDR5X is a possibility, but not a probability as far as I can see (at least for high-end chips). Micron wasn't very explicit about it, but I have a feeling that the switching costs (or the sampling) will not make it feasible for use in 2016. And considering that HBM will soon achieve economies of scale, GDDR5X seems a pointless standard to chase, unless, of course, it costs Nvidia nothing to adopt it.
