NVIDIA AD102 GPUs running next-generation GeForce RTX 40 series rumored to stick to PCIe Gen 4.0 protocol

NVIDIA’s GeForce RTX 40 Series graphics cards based on the Ada Lovelace GPU architecture are expected to retain the existing PCIe Gen 4.0 interface, as reported by Kopite7kimi.

NVIDIA GeForce RTX 40 ‘AD102 GPU’ graphics card to maintain PCIe Gen 4.0 compatibility

NVIDIA is set to launch its GeForce RTX 40 series graphics cards, based on the all-new Ada Lovelace GPU architecture, later this year. The specifications of the graphics card series have already leaked, but the design of the cards themselves is a more interesting aspect.


So far, we know that the NVIDIA GeForce RTX 40 series graphics cards will use the new ATX 3.0-compliant 12VHPWR 16-pin connector, which allows up to 600W of power delivery over the new PCIe Gen 5 power connector interface. This connector already features on the GeForce RTX 3090 Ti, where it currently delivers up to 450W through a triple 8-pin adapter. But there is another aspect to full PCIe Gen 5.0 compliance, and that is the interface connector itself.
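To put those power figures in context, here is a minimal sketch that simply adds up the commonly cited per-connector limits (75W from the PCIe slot, 150W per 8-pin, up to 600W for the 16-pin 12VHPWR connector). These limits are general assumptions rather than figures from the leak, and the actual power target of any card is set by its BIOS.

```python
# Rough power-budget arithmetic based on commonly cited connector limits.
# These per-connector figures are general assumptions, not numbers from the leak itself.
CONNECTOR_LIMITS_W = {
    "pcie_slot": 75,   # power drawn through the PCIe x16 slot
    "8pin": 150,       # one PCIe 8-pin (6+2) auxiliary connector
    "12vhpwr": 600,    # ATX 3.0 / PCIe Gen 5 16-pin connector at its highest rating
}

def max_board_power(connectors):
    """Sum the nominal limits of the listed connectors, in watts."""
    return sum(CONNECTOR_LIMITS_W[c] for c in connectors)

# RTX 3090 Ti style: 16-pin connector fed by a triple 8-pin adapter (3 x 150W = 450W) plus the slot
print(max_board_power(["pcie_slot", "8pin", "8pin", "8pin"]))  # 525
# A hypothetical RTX 40 card driving the 16-pin connector at its full 600W rating
print(max_board_power(["pcie_slot", "12vhpwr"]))               # 675
```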

Currently, modern graphics cards communicate with the CPU over the PCIe Gen 4.0 protocol. A PCIe Gen 4.0 x16 link offers roughly 32 GB/s of bandwidth in each direction, or 64 GB/s bidirectional. The latest platforms from Intel and AMD, however, already support the new PCIe Gen 5.0 interface, which doubles that to roughly 64 GB/s per direction and 128 GB/s bidirectional. But it looks like the upcoming graphics cards, or at least the high-end GeForce RTX 40 cards based on the AD102 GPU, will not feature a PCIe Gen 5.0 interface yet.
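For reference, those headline numbers follow directly from the per-lane transfer rates of each generation (16 GT/s for Gen 4.0, 32 GT/s for Gen 5.0) and the 128b/130b line encoding. The sketch below is an approximation that ignores packet-level protocol overhead.

```python
# Approximate per-direction PCIe link bandwidth from the per-lane transfer rate.
# Gen 3/4/5 all use 128b/130b line encoding; TLP/DLLP protocol overhead is ignored.
TRANSFER_RATE_GT_S = {3: 8.0, 4: 16.0, 5: 32.0}

def link_bandwidth_gb_s(gen, lanes):
    """Theoretical one-way bandwidth of a PCIe link in GB/s."""
    encoding_efficiency = 128 / 130
    return TRANSFER_RATE_GT_S[gen] * encoding_efficiency * lanes / 8  # GT/s -> GB/s

for gen in (4, 5):
    bw = link_bandwidth_gb_s(gen, 16)
    print(f"PCIe Gen {gen}.0 x16: ~{bw:.1f} GB/s each way, ~{2 * bw:.1f} GB/s bidirectional")
# PCIe Gen 4.0 x16: ~31.5 GB/s each way, ~63.0 GB/s bidirectional
# PCIe Gen 5.0 x16: ~63.0 GB/s each way, ~126.0 GB/s bidirectional
```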

Based on a tweet from Kopite7kimi, the upcoming GeForce RTX 40 series will retain the PCIe Gen 4.0 protocol, a bold move from NVIDIA not to jump to the next-generation standard even though it is doing so in the HPC segment, where its Hopper GPU will be among the first to use the new protocol. It makes sense for the HPC parts to have it, since servers demand a lot of bandwidth and the Gen 5.0 protocol will help in those environments. As far as consumers are concerned, the PCIe Gen 5.0 interface simply offers more bandwidth than needed, and current GPUs do not even fully saturate the PCIe Gen 4.0 interface.


Sticking with PCIe Gen 4.0 also bodes well for the entry-level lineup, which would not have to worry about bottlenecks when cards ship with fewer lanes. That was the issue with the Radeon RX 6500 XT and RX 6400: their narrow x4 link, when dropped into a PCIe Gen 3 system, ends up with less bandwidth than the GPU needs, leading to noticeably worse performance than on a PCIe Gen 4.0 platform (a gap illustrated in the sketch below). If even the high-end lineup does not starve on the Gen 4.0 standard, the low-end lineup is far from reaching its maximum threshold. So far, we cannot say for sure whether NVIDIA will really keep PCIe Gen 4.0 on its upcoming RTX 40 series cards, and that may yet change, since marketing likes to have the PCIe Gen 5.0 logo on new cards.
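To put a rough number on that bottleneck, the same link-bandwidth approximation can be applied to a narrow x4 link, which is the configuration the RX 6500 XT and RX 6400 use. The per-lane rates and encoding assumptions are the same as in the earlier sketch.

```python
# Same approximation as above, applied to a narrow x4 link.
def link_bandwidth_gb_s(rate_gt_s, lanes):
    """Theoretical one-way bandwidth in GB/s, assuming 128b/130b encoding."""
    return rate_gt_s * (128 / 130) * lanes / 8

gen3_x4 = link_bandwidth_gb_s(8.0, 4)    # ~3.9 GB/s each way
gen4_x4 = link_bandwidth_gb_s(16.0, 4)   # ~7.9 GB/s each way
print(f"Gen 3 x4: ~{gen3_x4:.1f} GB/s, Gen 4 x4: ~{gen4_x4:.1f} GB/s "
      f"({gen4_x4 / gen3_x4:.0f}x difference)")
```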

Aside from PCIe Gen 5.0 or Gen 4.0 support, NVIDIA will reportedly also make major changes to the way its CUDA cores are arranged within the Ada Lovelace architecture. The GeForce RTX 40 series GPUs will not just be a simple CUDA core bump over Ampere, but may include a number of new mixed-precision cores that have not been detailed yet. The lineup is still a few months away from introduction, so much may change, but we will make sure to keep you updated.

NVIDIA CUDA GPU Preliminary Specifications (Rumored):

GPU                                 TU102             GA102             AD102
Architecture                        Turing            Ampere            Ada Lovelace
Process                             TSMC 12nm FFN     Samsung 8nm       TSMC 5nm
Die Size                            754 mm2           628 mm2           ~600 mm2
Graphics Processing Clusters (GPC)  6                 7                 12
Texture Processing Clusters (TPC)   36                42                72
Streaming Multiprocessors (SM)      72                84                144
CUDA Cores                          4608              10752             18432
L2 Cache                            6 MB              6 MB              96 MB
Theoretical TFLOPs                  16.1              37.6              ~90 TFLOPs?
Memory Type                         GDDR6             GDDR6X            GDDR6X
Memory Bus                          384-bit           384-bit           384-bit
Memory Capacity                     11 GB (2080 Ti)   24 GB (3090)      24 GB (4090?)
Flagship SKU                        RTX 2080 Ti       RTX 3090          RTX 4090?
TGP                                 250W              350W              450-850W?
Release                             September 2018    September 2020    2H 2022 (TBC)
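The ~90 TFLOPs figure in the table follows from the usual FP32 estimate of two FLOPs per CUDA core per clock. The boost clock used below is an assumption chosen to land near the rumored number, not a leaked specification.

```python
# FP32 throughput estimate: cores * 2 FLOPs per clock (FMA) * clock speed.
def fp32_tflops(cuda_cores, boost_clock_ghz):
    return cuda_cores * 2 * boost_clock_ghz / 1000

print(f"GA102 (full die): ~{fp32_tflops(10752, 1.75):.1f} TFLOPs")  # ~37.6
print(f"AD102 (full die): ~{fp32_tflops(18432, 2.45):.1f} TFLOPs")  # ~90.3, assuming a ~2.45 GHz boost
```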

News source: Videocardz