Nvidia has kicked off its second GTC of 2020 by announcing two new graphics cards for professionals – the RTX A6000 and the passively-cooled RTX A40 – as successors to Turing-based Quadro parts, but without the Quadro branding.
As reported by AnandTech, the RTX A6000 will launch as Nvidia’s new flagship professional graphics card, and finally sees the green team make use of a fully-enabled GA102 GPU, a trimmed version of which is used in both the RTX 3080 and RTX 3090. This gives it a CUDA core count of 10,752, while Tensor cores and RT cores sit at 336 and 84, respectively. It also comes with an enormous 48GB of VRAM, though it’s of the GDDR6 variety running at 16Gbps, as opposed to the 19.5Gbps GDDR6X used on the RTX 3090, leaving it with 18 percent less total memory bandwidth but double the memory capacity. The card will come with a full quartet of DisplayPort 1.4 connectors as well as an NVLink3 connector for multi-GPU configurations.
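The "18 percent less" figure follows directly from the per-pin data rates. A minimal sketch of that arithmetic, assuming both cards use GA102's full 384-bit memory bus (a reasonable assumption, though the article does not state the bus width):

```python
# Sketch of the bandwidth arithmetic behind the "18 percent less" claim,
# assuming a 384-bit memory bus on both cards (not stated in the article).

def memory_bandwidth_gbs(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width in bytes."""
    return data_rate_gbps * bus_width_bits / 8

a6000 = memory_bandwidth_gbs(16.0, 384)    # GDDR6 on the RTX A6000
rtx3090 = memory_bandwidth_gbs(19.5, 384)  # GDDR6X on the RTX 3090

print(f"RTX A6000: {a6000:.0f} GB/s")           # 768 GB/s
print(f"RTX 3090:  {rtx3090:.0f} GB/s")         # 936 GB/s
print(f"Deficit:   {1 - a6000 / rtx3090:.0%}")  # 18%
```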
A second new RTX card with similar specs has also been confirmed: the RTX A40. Sporting the same fully-enabled GA102 GPU and 48GB of GDDR6 (this time at 14.5Gbps), the RTX A40 is passively cooled and designed for use in high-density servers. Despite that, Nvidia has opted to include display outputs for certain use cases, citing video trucks in the media and broadcast industries as an example of where a server-class card with display-driving capabilities has been requested.
Missing from the specs is the boost clock, and thus the associated performance figures measured in FLOPS. Both the RTX A6000 and RTX A40 have a 300W power envelope, so boost speeds are expected to be similar.
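Why does the missing boost clock block a FLOPS figure? Peak FP32 throughput is simply two operations per CUDA core per cycle (a fused multiply-add) times the clock. A sketch below; the 1.7GHz value is purely a placeholder for illustration, not an announced spec:

```python
# Peak FP32 throughput = 2 ops (FMA) x CUDA cores x boost clock.
# The 1.7 GHz figure is a placeholder assumption, not an announced spec.

CUDA_CORES = 10_752  # fully-enabled GA102

def fp32_tflops(cores: int, boost_ghz: float) -> float:
    """Theoretical peak FP32 TFLOPS at a given boost clock."""
    return 2 * cores * boost_ghz / 1_000

print(f"{fp32_tflops(CUDA_CORES, 1.7):.1f} TFLOPS")  # 36.6 at the placeholder clock
```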
The RTX A6000 is scheduled for a December 2020 launch, while the RTX A40 is set to arrive in Q1 next year. Nvidia hasn’t disclosed pricing, but given the $10,000 launch MSRP of the outgoing flagship, the Quadro RTX 8000, our wallets are already quivering in corners.