New AMD CPU patent reveals 3D-stacked machine learning accelerator design

Jason R. Wilson

On September 25, 2020, AMD filed a patent application for a unique processor design that places a machine learning (ML) accelerator vertically stacked on the I/O die, or IOD. AMD may be preparing data center system-on-chips (SoCs) that incorporate FPGAs (Field Programmable Gate Arrays) or specialized GPUs as machine learning accelerators. AMD could place an FPGA or GPU on top of the processor's I/O die, much as it already stacks extra cache on its newest 3D V-Cache processors.

AMD's newest patent shows a growing focus on 3D-stacked machine learning accelerators

The technology is important because it would allow the company to add new classes of accelerators to forthcoming processor SoCs. A patent filing does not guarantee that consumers will ever see the newly designed processors on the market, but it does offer a glimpse of what the future might hold if the right research and development is put behind it. AMD has not shared any information beyond the patent itself, so we can only speculate about what the company plans for the new designs.


The 'Direct-connected machine learning accelerator' patent issued to AMD explains how the company could stack an ML accelerator directly onto the processor's IOD. The accelerator, either an FPGA or a compute GPU handling ML workloads, would sit on top of an IOD that exposes a dedicated accelerator port. The design also gives the accelerator access to local memory, either the memory attached to the IOD or a separate pool of its own that is not attached to the IOD.
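To make the two memory arrangements easier to picture, here is a minimal Python sketch of the package topology as described above. It is purely illustrative and assumes everything it names: the class names, the chiplet and memory-channel counts, and the configuration list are placeholders of mine, not details from the patent.

```python
from dataclasses import dataclass
from enum import Enum

class AcceleratorType(Enum):
    FPGA = "FPGA"
    COMPUTE_GPU = "compute GPU"

class MemoryAttachment(Enum):
    IOD_ATTACHED = "memory hanging off the I/O die"
    DEDICATED = "accelerator-local memory, separate from the IOD pool"

@dataclass
class StackedMLAccelerator:
    """One ML accelerator die stacked on top of the I/O die (IOD)."""
    kind: AcceleratorType
    memory: MemoryAttachment

@dataclass
class ServerPackage:
    """Simplified package model: CPU chiplets talk to a central IOD,
    which exposes a dedicated port for the stacked accelerator."""
    cpu_chiplets: int
    iod_memory_channels: int
    accelerator: StackedMLAccelerator

# The two memory arrangements the article describes, side by side.
configs = [
    ServerPackage(cpu_chiplets=8, iod_memory_channels=8,
                  accelerator=StackedMLAccelerator(
                      AcceleratorType.FPGA, MemoryAttachment.IOD_ATTACHED)),
    ServerPackage(cpu_chiplets=8, iod_memory_channels=8,
                  accelerator=StackedMLAccelerator(
                      AcceleratorType.COMPUTE_GPU, MemoryAttachment.DEDICATED)),
]

for cfg in configs:
    acc = cfg.accelerator
    print(f"{acc.kind.value} stacked on the IOD, using {acc.memory.value}")
```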

Machine learning is usually discussed in the context of data centers, and that is exactly where AMD needs to boost its chips' performance on these workloads. The patented approach would let ML workloads run faster without integrating costly, custom silicon into the system chips, while also offering better power efficiency, faster data transfers, and additional capabilities.

The patent's timing seems strategic: it was filed shortly before AMD announced its acquisition of Xilinx. Now, roughly a year and a half after the filing, with the patent finally published at the end of March 2022, we could see the new designs, if they come to fruition, as early as 2023. The inventor listed on the patent is AMD fellow Maxim V. Kazakov.

AMD is in the process of creating new EPYC processors, codenamed Genoa and Bergamo, that are built around a central I/O die with which such an accelerator could be combined. It is therefore possible that AMD could build AI-focused processors in the Genoa and Bergamo series with machine learning accelerators on board.

Speaking of AMD's EPYC line, the company is reportedly targeting a 600W cTDP, or configurable thermal design power, for the fifth-generation EPYC Turin processor line, more than twice the cTDP of the current EPYC 7003 Milan series. The SP5 platform shared by the fourth- and fifth-generation EPYC processors also allows as much as 700W of power consumption in short bursts. Adding an ML accelerator to a Genoa or Bergamo processor would naturally raise power consumption, but power budgets like these leave room for future server chips to benefit from vertically stacked accelerators such as the ML-accelerated processor design AMD has now patented.
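To show how those numbers interact, here is a rough, back-of-the-envelope Python calculation. Only the 600W cTDP and 700W short-burst figures come from the reporting above; the 75W accelerator draw is a guess of mine, not an AMD specification.

```python
# Rough power-budget arithmetic, not AMD data: the 600 W cTDP and 700 W
# short-burst figures are the ones reported above; the accelerator draw
# is a hypothetical placeholder for illustration only.
TURIN_CTDP_W = 600           # reported configurable TDP target for EPYC Turin
SP5_BURST_W = 700            # reported short-burst limit on the SP5 platform
ASSUMED_ACCELERATOR_W = 75   # guessed draw of a stacked FPGA/GPU accelerator

sustained_left_for_cpu = TURIN_CTDP_W - ASSUMED_ACCELERATOR_W
burst_left_for_cpu = SP5_BURST_W - ASSUMED_ACCELERATOR_W

print(f"Sustained budget left for CPU chiplets: {sustained_left_for_cpu} W")
print(f"Short-burst budget left for CPU chiplets: {burst_left_for_cpu} W")
```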

It should be understood that many variations are possible based on the disclosure herein[...]

Suitable processors include, by way of example, a general-purpose processor, a special-purpose processor, a conventional processor, a graphics processor, a machine learning processor, [a DSP, an ASIC, an FPGA], and other types of integrated circuit (IC).

[…] Such processors can be manufactured by configuring a manufacturing process using the results of processed hardware description language (HDL) instructions and other intermediary data including netlists (such instructions capable of being stored on a computer-readable media).

— excerpt from the 'Direct-connected machine learning accelerator' AMD patent

With Xilinx technology now in hand, the company can offer compute-focused GPU designs, robust FPGA designs, the programmable Pensando processor line, and a solid x86 microarchitecture. Multi-chiplet designs tied together by AMD's Infinity Fabric interconnect technology are already a reality for the company. Vertically stacked data center processors would broaden the options for enterprises further, for example multi-tile APUs that combine CPU dies built on TSMC's performance-oriented N4X node with a graphics processor or FPGA accelerator built on the enhanced N3E process.

The crucial takeaway from the published patent is the machine learning accelerator technology itself and its place in the future of AMD's CPUs. Incorporating the accelerator more broadly across future product lines would give the company a more diverse portfolio, placing it at the forefront of both data center applications and client-specific workloads.
