AMD EPYC Bergamo ‘Zen 4C’ CPUs Being Deployed In 1H 2023 To Tackle Arm CPUs, Instinct MI300 APU Back In Labs

Hassan Mujtaba
AMD EPYC CPUs Including Genoa & Bergamo Expected To Push Server Market Share Beyond 30%

AMD's CTO, Mark Papermaster, gave a few new updates regarding EPYC Bergamo 'Zen 4C' CPUs & Instinct MI300 APUs at the Wells Fargo 6th Annual 2022 TMT Summit.

AMD Talks EPYC Bergamo 'Zen 4C' CPUs & Instinct MI300 APUs For Next-Gen Hyperscale Data Centers

Mark starts off by talking about the immediate follow-up to the company's recently launched 4th Gen EPYC 'Genoa' CPUs: the Bergamo lineup. AMD EPYC Bergamo is optimized for compute density and is positioned against Arm-based server chips.


The AMD EPYC Bergamo chips will feature up to 128 cores and will be aimed at Intel's HBM-powered Xeon chips as well as higher-core-count Arm-based server chips from the likes of Amazon and Google. Both Genoa and Bergamo utilize the same SP5 socket; the main difference is that Genoa is optimized for higher clocks while Bergamo is tuned for higher-throughput workloads.

We'll talk about Bergamo, our dense core that goes head-to-head with smaller Arm cores where you just need throughput processing. Those are all tailored adaptations which we worked on with hyperscalers, because we listened; they told us what they needed to have cost-effective solutions, and you'll see more and more accelerators added into that mix. Microsoft announced that they have our Instinct, our GPU acceleration, now up and running in Azure for their training.

Mark Papermaster (AMD CTO and Executive VP of Technology & Engineering)

It is stated that AMD's EPYC Bergamo CPUs will arrive in the first half of 2023 and will run the same code as Genoa, behaving just like Genoa, but the Zen 4c core itself is roughly half the size of the standard Zen 4 core. The CPUs are specifically mentioned to compete against the likes of AWS's Graviton CPUs and other Arm-based solutions where peak frequency isn't a requirement but throughput across a large number of cores is. One workload example for Bergamo would be Java, where the extra cores can definitely come in handy. Following Bergamo will be the TCO-optimized Siena lineup for the SP6 platform, which will play a crucial role in expanding AMD's TAM in the server segment.

And part two of your question is exactly that: we're expanding our TAM. And so when you have that kind of offering, what we're able to do with that kind of performance is, one, we offer Genoa to sit right on top of our third-generation EPYC, Milan, because Milan is still a leadership processor in the server market.

And so, one, we have -- from top to bottom of the stack -- incredible coverage now, with the kind of granularity that our customers need to really cover hyperscale through enterprise. And we are adding, in the first half of this year, what we call Bergamo, which will be with our Zen 4c; we increased staffing to our CPU team, and we added a version of Zen 4. It's still Zen 4. It runs code just like Genoa, but it's half the size.

And that competes head-to-head with Graviton and Arm-based solutions where you don't need the peak frequency. You're running workloads like Java workloads, throughput workloads that don't have to run at peak frequency, but you need a lot of cores. So we're adding that in the first half of '23. And then later in 2023, we're adding Siena, which is a variant targeted at the telecom space. So we're really, really excited about our TAM growth in server.

Mark Papermaster (AMD CTO and Executive VP of Technology & Engineering)
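To make the throughput-versus-peak-frequency point above a little more concrete, here is a minimal, hypothetical Java sketch (not AMD's code or anything from the interview) of the kind of embarrassingly parallel, request-style workload that scales with core count rather than clock speed; the class and method names are purely illustrative.

```java
import java.util.stream.IntStream;

// Illustrative only: a CPU-bound, request-style workload with no shared state,
// the kind of throughput job that benefits from many cores (e.g. a 128-core
// Bergamo part) far more than from a few hundred extra MHz of peak frequency.
public class ThroughputSketch {

    // A hypothetical unit of work standing in for one request.
    static long handleRequest(int id) {
        long acc = id;
        for (int i = 0; i < 1_000_000; i++) {
            // Cheap integer busy-work (a simple LCG step) to keep the core occupied.
            acc = acc * 6364136223846793005L + 1442695040888963407L;
        }
        return acc;
    }

    public static void main(String[] args) {
        // The common fork-join pool behind parallel streams sizes itself to the
        // machine's hardware threads by default, so the same code simply pushes
        // more requests per second on a higher-core-count part.
        System.out.println("Hardware threads: " + Runtime.getRuntime().availableProcessors());

        long start = System.nanoTime();
        long checksum = IntStream.range(0, 10_000)
                .parallel()
                .mapToLong(ThroughputSketch::handleRequest)
                .sum();
        double seconds = (System.nanoTime() - start) / 1e9;

        System.out.printf("Handled 10,000 requests in %.2f s (checksum %d)%n", seconds, checksum);
    }
}
```

Nothing here is tuned for any particular CPU; the point is simply that the throughput of this kind of work tracks the number of available cores, which is where a dense Zen 4c design earns its keep against peak-frequency-optimized parts.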

The AMD Genoa-X CPUs are expected to enter production in early 2023 and launch around the middle of 2023. They will follow a similar design methodology to the Milan-X chips, with 3D V-Cache ('Large L3') as the highlighted feature of the lineup. While Milan-X offers up to 768 MB of L3 cache, Genoa-X CPUs will feature over 1 GB of L3 cache while retaining the same 96 cores based on the Zen 4 design. So in total, SP5 will end up with three EPYC families.

Well, you mentioned Genoa-X, and I didn't mention that in the variants, so I'll add that now. That's a version where we stack cache right on top of the CPU, and that really is tailored to make high-performance workloads like EDA or database workloads even more TCO effective.

Mark Papermaster (AMD CTO and Executive VP of Technology & Engineering)

Lastly, we have confirmation from Mark that AMD's next-generation Instinct MI300 APU, which combines CDNA 3 GPU cores and Zen 4 CPU cores packaged alongside a healthy amount of HBM3 memory, is already back in the labs and crunching through various tests.

This behemoth of a chip is expected to launch in 2023, will utilize a 5nm process node, and will run on the same SP5 socket platform as the 4th Gen EPYC CPUs. The Instinct MI300 accelerator will sport a unified-memory APU architecture and new math formats, allowing for a 5x performance-per-watt uplift over CDNA 2, which is massive.

But with what we announced, we've rolled out our next-generation Instinct that we already have back in the labs, our MI300. It is a true data center APU. It's a CPU and a GPU acceleration that leverages the Infinity architecture to share the same memory fully coherently. It's all sharing a high-bandwidth memory.

Mark Papermaster (AMD CTO and Executive VP of Technology & Engineering)

AMD definitely has a lot planned for 2023 to accelerate its server growth and TAM. The chipmaker's latest roadmap, unveiled during its Financial Analyst Day, goes a step further and also talks about future Zen 5 core lineups.

AMD EPYC CPU Families:

| Family Name | AMD EPYC Venice | AMD EPYC Turin-Dense | AMD EPYC Turin-X | AMD EPYC Turin | AMD EPYC Siena | AMD EPYC Bergamo | AMD EPYC Genoa-X | AMD EPYC Genoa | AMD EPYC Milan-X | AMD EPYC Milan | AMD EPYC Rome | AMD EPYC Naples |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Family Branding | EPYC 11K? | EPYC 10K? | EPYC 10K? | EPYC 10K? | EPYC 8004 | EPYC 9004 | EPYC 9004 | EPYC 9004 | EPYC 7004 | EPYC 7003 | EPYC 7002 | EPYC 7001 |
| Family Launch | 2025+ | 2025? | 2025? | 2024 | 2023 | 2023 | 2023 | 2022 | 2022 | 2021 | 2019 | 2017 |
| CPU Architecture | Zen 6? | Zen 5C | Zen 5 | Zen 5 | Zen 4 | Zen 4C | Zen 4 V-Cache | Zen 4 | Zen 3 | Zen 3 | Zen 2 | Zen 1 |
| Process Node | TBD | 3nm TSMC? | 4nm TSMC | 4nm TSMC | 5nm TSMC | 4nm TSMC | 5nm TSMC | 5nm TSMC | 7nm TSMC | 7nm TSMC | 7nm TSMC | 14nm GloFo |
| Platform Name | TBD | SP5 | SP5 | SP5 | SP6 | SP5 | SP5 | SP5 | SP3 | SP3 | SP3 | SP3 |
| Socket | TBD | LGA 6096 (SP5) | LGA 6096 (SP5) | LGA 6096 | LGA 4844 | LGA 6096 | LGA 6096 | LGA 6096 | LGA 4094 | LGA 4094 | LGA 4094 | LGA 4094 |
| Max Core Count | 384? | 192 | 128 | 128 | 64 | 128 | 96 | 96 | 64 | 64 | 64 | 32 |
| Max Thread Count | 768? | 384 | 256 | 256 | 128 | 256 | 192 | 192 | 128 | 128 | 128 | 64 |
| Max L3 Cache | TBD | 384 MB | 1536 MB | 384 MB | 256 MB | 256 MB | 1152 MB | 384 MB | 768 MB | 256 MB | 256 MB | 64 MB |
| Chiplet Design | TBD | 12 CCD's (1 CCX per CCD) + 1 IOD | 16 CCD's (1 CCX per CCD) + 1 IOD | 16 CCD's (1 CCX per CCD) + 1 IOD | 8 CCD's (1 CCX per CCD) + 1 IOD | 12 CCD's (1 CCX per CCD) + 1 IOD | 12 CCD's (1 CCX per CCD) + 1 IOD | 12 CCD's (1 CCX per CCD) + 1 IOD | 8 CCD's (1 CCX per CCD) + 1 IOD | 8 CCD's (1 CCX per CCD) + 1 IOD | 8 CCD's (2 CCX's per CCD) + 1 IOD | 4 CCD's (2 CCX's per CCD) |
| Memory Support | TBD | DDR5-6000? | DDR5-6000? | DDR5-6000? | DDR5-5200 | DDR5-5600 | DDR5-4800 | DDR5-4800 | DDR4-3200 | DDR4-3200 | DDR4-3200 | DDR4-2666 |
| Memory Channels | TBD | 12 Channel (SP5) | 12 Channel (SP5) | 12 Channel | 6 Channel | 12 Channel | 12 Channel | 12 Channel | 8 Channel | 8 Channel | 8 Channel | 8 Channel |
| PCIe Gen Support | TBD | TBD | TBD | TBD | 96 Gen 5 | 128 Gen 5 | 128 Gen 5 | 128 Gen 5 | 128 Gen 4 | 128 Gen 4 | 128 Gen 4 | 64 Gen 3 |
| TDP (Max) | TBD | 480W (cTDP 600W) | 480W (cTDP 600W) | 480W (cTDP 600W) | 70-225W | 320W (cTDP 400W) | 400W | 400W | 280W | 280W | 280W | 200W |