New benchmark data for Intel’s upcoming Arc A380 GPU has leaked, and it doesn’t paint the DG2 architecture in the best light. Keep in mind that this is a pre-release card tested with pre-release drivers in an application where driver optimizations can matter quite a bit.
The results come from SiSoft Sandra’s website. SiSoft Sandra, developed by Adrian Silasi, is a long-running PC diagnostic and benchmarking tool. It contains a wide range of tests for evaluating nearly every aspect of a CPU’s performance, and it offers a number of GPU tests as well. Sandra supports benchmarking via CUDA, OpenCL, and Microsoft’s DirectCompute.
According to SiSoft, Intel’s DG2 will be available in the A300, A500, and A700 families with up to 1024, 3072, and 4096 streaming processors. Full specs on the Arc A380 as compared to leading GPUs from Nvidia and AMD are shown below:
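Those family maximums line up neatly with Alchemist’s building blocks. Assuming the publicly described Xe core of 128 FP32 ALUs (16 vector engines of eight lanes each), the reported shader counts work out to 8, 24, and 32 Xe cores per family. A quick sketch of that arithmetic:

```python
# Alchemist groups shaders into "Xe cores" of 128 FP32 ALUs each
# (16 vector engines x 8 lanes). The family maximums SiSoft reports
# correspond to 8, 24, and 32 Xe cores respectively.
ALUS_PER_XE_CORE = 128

def shader_count(xe_cores: int) -> int:
    """Total FP32 streaming processors for a given Xe core count."""
    return xe_cores * ALUS_PER_XE_CORE

for family, xe_cores in [("A300", 8), ("A500", 24), ("A700", 32)]:
    print(family, shader_count(xe_cores))  # 1024, 3072, 4096
```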
There are some surprises here. A 96-bit memory bus is rather unusual and memory bandwidth is a bit lower than we might have expected. A 75W board TDP is excellent — the A300 might offer exemplary power efficiency if that figure holds true — but the complete lack of FP64 support is unexpected. AMD and Nvidia both support double-precision workloads on these GPUs, though they limit performance severely enough that it’s not worth much more than a mention on the spec sheet.
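The bandwidth math behind that concern is simple: peak memory bandwidth is the bus width in bytes multiplied by the effective data rate. A minimal sketch, where the 96-bit bus comes from SiSoft’s reported specs but the 15.5 GT/s GDDR6 data rate is a hypothetical figure for illustration (the RX 6500 XT’s 64-bit bus at 18 GT/s is included for comparison):

```python
def peak_bandwidth_gbs(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer times transfer rate."""
    return bus_width_bits / 8 * data_rate_gtps

# Reported 96-bit A380 bus; 15.5 GT/s GDDR6 is a hypothetical speed.
print(peak_bandwidth_gbs(96, 15.5))   # 186.0 GB/s
# Radeon RX 6500 XT: 64-bit bus, 18 GT/s GDDR6.
print(peak_bandwidth_gbs(64, 18.0))   # 144.0 GB/s
```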
On paper, the DG2 looks more like the Radeon RX 6500 XT than the GTX 1660 Ti or RTX 3050. Like RDNA2, DG2 pairs high GPU clocks with a narrow memory bus, 32 render outputs, and 64 texture mapping units. It also has relatively small caches compared to the competition.
Intel Arc A380 Benchmark Performance
Unfortunately, SiSoft plotted some of its charts on a logarithmic scale, which makes it difficult to compare performance between cards at a glance. We re-graphed the results of their GPGPU scientific computing test to make the comparison easier. “Single” in the legend means single-precision.
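To see why log-scale bars are hard to read at a glance, consider two hypothetical scores where one card is exactly twice as fast: on a log axis, the visual gap between the bars almost disappears. A quick illustration with made-up numbers:

```python
import math

# Two hypothetical benchmark scores: card B is twice as fast as card A.
score_a, score_b = 1000.0, 2000.0

# On a linear axis, bar lengths are proportional to the scores themselves.
linear_ratio = score_b / score_a                       # 2.0

# On a log axis, bar length grows with log10(score), so a 2x performance
# gap shrinks to roughly a 10 percent difference in bar length.
log_ratio = math.log10(score_b) / math.log10(score_a)  # ~1.10

print(f"linear: {linear_ratio:.2f}x, log-axis: {log_ratio:.2f}x")
```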
In scientific computing workloads the DG2 looks more like a 6500 XT than either Nvidia Ampere GPU. Single-precision floating point performance is still better than the 6500 XT, though not as fast as the more expensive RTX 3050.
AES encryption is not a friendly workload for DG2, at least not right now. The 6500 XT scores relatively well here, landing midway between the GTX 1660 Ti and the RTX 3050.
Performance in the overall score still favors AMD over DG2, but such conclusions are premature. Much depends on final driver support, and drivers are not final. The complete absence of double-precision floating point is a surprise, but again, it is potentially not very relevant for the markets where DG2 will sell.
Readers should keep in mind that OpenCL performance and gaming performance do not correlate in any consistent way. While it’s true that an RTX 3070 will tend to outperform an RTX 3060 in both compute and gaming, the gap between two cards in a given video game may be larger or smaller than the gap between the same two cards in a given compute workload. The 6500 XT is 57 percent as fast as the GTX 1660 Ti in the scientific analysis subtest, but 70 percent as fast as that card in the final overall score. The Arc A380’s benchmark scores are interesting, but judgments are premature.
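The percent-as-fast figures above are simple ratios of benchmark scores. A minimal sketch with hypothetical scores chosen to reproduce the 57/70 percent relationship (these are not SiSoft’s actual numbers):

```python
def percent_as_fast(card_score: float, reference_score: float) -> float:
    """Express one card's score as a percentage of a reference card's."""
    return 100.0 * card_score / reference_score

# Hypothetical scores: 570 vs. 1000 in the subtest, 700 vs. 1000 overall.
print(percent_as_fast(570, 1000))  # 57.0
print(percent_as_fast(700, 1000))  # 70.0
```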
SiSoft’s reported specifications imply interesting things about DG2’s positioning. A 6GB GPU with a 96-bit memory bus appears aimed at the Radeon RX 6500 XT, with its 4GB of RAM and 64-bit memory bus, at the same $199 price point. This assumes the reported $199 target is accurate, however, and that Intel will have any more luck than AMD at keeping GPUs selling at MSRP.
As of this writing, Intel doesn’t seem to be trying to aggressively target Nvidia’s positioning with any of its Alchemist GPUs. Instead, the company seems to be more concerned with making a value play. That could change as we get closer to launch and learn more about final specifications and pricing. Xe doesn’t need to be an absolute performance leader to be competitive, especially not in today’s market.