Review and testing of the NVIDIA TITAN X super video card: a demonstration of superiority

NVIDIA's first Pascal-architecture product to hit the market was the GeForce GTX 1080, based on the GP104 processor. Thanks to the new 16 nm FinFET process technology, as well as optimizations in the architecture and circuitry of the chip, the GTX 1080 delivers gaming performance roughly 30% higher than NVIDIA's previous-generation flagship, the GeForce GTX TITAN X. At the same time, the developers managed to cut the accelerator's power budget by 70 W relative to its predecessor's TDP, from 250 to 180 W. Meanwhile, a 250 W thermal package has been the standard target for NVIDIA's top gaming video cards over the last several generations, so the appearance of an even faster product occupying that niche in the Pascal line was only a matter of time.

Starting with the Kepler architecture, NVIDIA has followed the same strategy for releasing GPUs across performance categories. First the second-tier chip debuts: GK104 in the Kepler family, GM204 in Maxwell version 2, and now GP104 in Pascal. NVIDIA then fills in one or two tiers below, and after a significant gap a top-tier GPU appears, forming the basis of the most powerful accelerator NVIDIA can build while keeping power consumption within 250 W on the current process technology.

The current peak of the Pascal architecture is the GP100 processor, which features an unprecedented number of shader ALUs (3840 CUDA cores) and 16 GB of HBM2 memory co-packaged with the GPU on a silicon interposer. The GP100 is used in the Tesla P100 accelerator, whose use is confined to supercomputers because of its special form factor with the NVLink bus and a TDP of 300 W. The Tesla P100 is also expected to appear in the standard PCI Express expansion-card format at the end of the year.

In the dreams of industry enthusiasts, it was the GP100 chip that was eventually to crown the GeForce 10 line of gaming adapters, with NVIDIA possibly releasing a new TITAN beforehand: previous big GPUs reached gaming PCs with exactly such an intermediate stop at this position (GK110 in the original TITAN, GM200 in TITAN X).

However, this time the experts predicting a final split of the NVIDIA GPU line into two non-overlapping groups appear to have been right: chips for gaming and prosumer (from the words producer and consumer) use on the one hand, and chips for computing on the other. The differentiating factor is the GPU's speed in double-precision floating-point operations (FP64). In the Kepler line, the developers had already sacrificed this characteristic for all chips (1/24 of FP32) except the top one, GK110/GK210 (1/3 of FP32), in order to reduce GPU power consumption. In the next generation the trend intensified: all Maxwell processors run FP64 at 1/32 of the FP32 rate.
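To put those ratios in perspective, here is a minimal sketch of the implied peak (on-paper) FP64 rates; the FP32 figures are taken from the specification table later in this review:

```python
# Peak FP64 throughput implied by a chip's FP32 throughput and FP64:FP32 ratio.
# FP32 GFLOPS values are the "peak performance" entries from the spec table below.
chips = {
    "GK110 (GeForce GTX TITAN)": (4709, 1 / 3),    # Kepler top chip
    "GM200 (GeForce GTX TITAN X)": (6691, 1 / 32), # Maxwell
    "GP102 (TITAN X)": (10974, 1 / 32),            # Pascal gaming chip
}

for name, (fp32, ratio) in chips.items():
    print(f"{name}: {fp32} GFLOPS FP32 -> {fp32 * ratio:.0f} GFLOPS FP64 (peak)")
```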

Pascal showed that the economizing on FP64 performance was not a temporary measure born of the long life of the 28 nm process. NVIDIA still needs a GPU for servers, supercomputers and workstations capable of high FP64 performance. For gaming video adapters, however, this functionality, which inflates the transistor budget and GPU power consumption, is only a burden.

Thus, instead of porting the GP100 (an obviously expensive chip to manufacture, both because of its area and the integrated HBM2 memory) to gaming video cards, NVIDIA released an additional product, the GP102, focused on FP32 operations - the main number format used in 3D graphics rendering and in a number of computational tasks. The GP102's one distinctive functional feature is support for int8 integer operations. This matters to NVIDIA because int8 is widely used in machine learning, which the company has made one of its priority areas (more specifically, in one class of such tasks, deep learning). We plan to publish a separate article on this topic in the near future.

The new TITAN X, the first device based on the GP102 processor, is positioned primarily as a professional-grade accelerator for research and commercial applications related to deep learning; the absence of the GeForce brand in the card's name confirms this. Still, the newcomer's broad gaming capabilities are beyond doubt. All previously released Titans, in addition to their computing functions, were regarded as premium gaming graphics cards, offering quality and performance unavailable from the contemporary models of the main GeForce line.

NVIDIA GP102

This GPU is conceived as an alternative to the supercomputer-class GP100 that gives up nothing to the latter in 3D graphics rendering and FP32 computation. At the same time, the creators of the GP102 trimmed everything that does not serve the product's purpose.

For example, a single SM (Streaming Multiprocessor - a block combining CUDA cores together with texture mapping units, schedulers, dispatchers and local memory) in the GP100 contains 64 CUDA cores for FP32 operations, whereas the SM in the GP102 inherits the Maxwell configuration of 128 CUDA cores. The finer partitioning of CUDA cores in the GP100 lets the processor execute more instruction streams simultaneously (as well as more thread groups - warps - and blocks of warps), and the total capacity of the storage pools inside the SMs, such as shared memory and the register file, has grown relative to the Maxwell architecture when counted across the whole GPU.

NVIDIA GP102 block diagram

Furthermore, for every 64 FP32 CUDA cores the GP100 carries 32 FP64 cores, while the SM in the GP102 again inherits the Maxwell configuration: 128 CUDA cores for FP32 and 4 for FP64. Hence the truncated double-precision performance of the GP102.
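The quoted rates follow directly from the per-SM core counts; a quick sanity check:

```python
# FP64:FP32 rate implied by the per-SM core counts quoted above.
sm_configs = {"GP100": {"fp32": 64, "fp64": 32}, "GP102": {"fp32": 128, "fp64": 4}}

for gpu, cores in sm_configs.items():
    print(f"{gpu}: FP64 runs at 1/{cores['fp32'] // cores['fp64']} of the FP32 rate")
# GP100: 1/2, GP102: 1/32
```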

Finally, the GP100 carries a larger L2 cache: 4096 KB versus 3072 KB in the GP102. And, of course, the GP102 lacks the NVLink bus controller, while the HBM2 memory controllers (with a total bus width of 4096 bits) are replaced by GDDR5X SDRAM controllers. Twelve of these 32-bit controllers form a combined 384-bit memory bus.

In the other respects that interest us, the GP100 and GP102 are identical. Both dies contain 3840 FP32-capable CUDA cores, 240 texture units and 96 ROPs. Thus, in general terms, the structure of the GP102's computing units repeats that of the GP104, adjusted for quantity. Although we still do not know some parameters (L1 cache, shared memory and register file sizes), they are probably the same in these two GPUs.

The GP102 chip, manufactured on TSMC's 16 nm FinFET process, packs 12 billion transistors into an area of 471 mm². For comparison, the GP100 comes in at 15.3 billion transistors and 610 mm² - a very significant difference. Moreover, if TSMC has not enlarged the photomask for the 16 nm process relative to 28 nm, the GP100 is close to the limit, while the leaner design of the GP102 will let NVIDIA build an even larger core for the broad consumer market on the same production line in the future (which, however, is unlikely to happen unless the developers rethink their TDP standards for top models).

For the differences between the Pascal and Maxwell architectures, we recommend our GeForce GTX 1080 review. In this iteration, the developers built on the advantages of the previous generation and compensated for its inherent shortcomings.

We briefly note the following points:

  • improved color compression with ratios up to 8:1;
  • the PolyMorph Engine's Simultaneous Multi-Projection function, which allows you to create up to 16 projections of the scene geometry in one pass (for VR and systems with multiple displays in the NVIDIA Surround configuration);
  • the ability to interrupt (preemption) during the execution of a draw call (during rendering) and a command stream (during calculations), which, together with the dynamic distribution of GPU computing resources, provides full support for asynchronous computing (Async Compute) - an additional source of performance in games under the DirectX 12 API and reduced latency in VR;
  • display controller compatible with the DisplayPort 1.3/1.4 and HDMI 2.0b interfaces, with support for high dynamic range (HDR);
  • SLI bus with increased bandwidth.

Specifications, price

The TITAN X does not use a fully enabled version of the GP102 GPU: out of 30 SMs, two are disabled. In the number of CUDA cores and texture units, the Titan thus matches the Tesla P100, where the GP100 is also partially cut down (3584 CUDA cores and 224 texture units).

The graphics processor of the novelty operates at higher frequencies (1417/1531 MHz) than in the Tesla P100 (up to 1328/1480 MHz in the supercomputer version and up to 1300 MHz in the form factor of the PCI-Express board). Still, the frequencies of the "Titan" are quite conservative compared to the characteristics of the GeForce GTX 1080 (1607/1733 MHz). As we'll see in the overclocking experiments, the limiting factor was the device's power consumption, which NVIDIA set at its familiar 250W.

TITAN X is equipped with 12 GB of GDDR5X SDRAM running at 10 Gbps per pin. The 384-bit bus provides data transfer at 480 GB/s: by this measure TITAN X is only slightly behind the current record holder, the Radeon R9 Fury X, and other AMD products based on Fiji GPUs (512 GB/s).
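The 480 GB/s figure is simply the bus width times the per-pin data rate; a quick check of the arithmetic:

```python
# Bandwidth (GB/s) = bus width in bits / 8 bits-per-byte * per-pin rate in Gbit/s.
bus_bits = 384
pin_rate_gbps = 10  # GDDR5X on TITAN X

print(f"{bus_bits / 8 * pin_rate_gbps:.0f} GB/s")  # 480 GB/s
```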

Manufacturer: NVIDIA

| | GeForce GTX TITAN | GeForce GTX TITAN Black | GeForce GTX TITAN Z | GeForce GTX TITAN X | GeForce GTX 1080 | TITAN X |
|---|---|---|---|---|---|---|
| GPU | GK110 | GK110 | 2 × GK110 | GM200 | GP104 | GP102 |
| Microarchitecture | Kepler | Kepler | Kepler | Maxwell | Pascal | Pascal |
| Process technology | 28 nm | 28 nm | 28 nm | 28 nm | 16 nm FinFET | 16 nm FinFET |
| Transistors, million | 7,080 | 7,080 | 2 × 7,080 | 8,000 | 7,200 | 12,000 |
| Clock frequency, MHz (Base/Boost) | 837/876 | 889/980 | 705/876 | 1,000/1,089 | 1,607/1,733 | 1,417/1,531 |
| Shader ALUs | 2,688 | 2,880 | 2 × 2,880 | 3,072 | 2,560 | 3,584 |
| Texture units | 224 | 240 | 2 × 240 | 192 | 160 | 224 |
| ROPs | 48 | 48 | 2 × 48 | 96 | 64 | 96 |
| Memory bus width, bits | 384 | 384 | 2 × 384 | 384 | 256 | 384 |
| Memory type | GDDR5 SDRAM | GDDR5 SDRAM | GDDR5 SDRAM | GDDR5 SDRAM | GDDR5X SDRAM | GDDR5X SDRAM |
| Memory clock, MHz (per-pin rate, Mbps) | 1,502 (6,008) | 1,750 (7,000) | 1,750 (7,000) | 1,753 (7,012) | 1,250 (10,000) | 1,250 (10,000) |
| Memory size, MB | 6,144 | 6,144 | 2 × 6,144 | 12,288 | 8,192 | 12,288 |
| I/O bus | PCI Express 3.0 x16 | PCI Express 3.0 x16 | PCI Express 3.0 x16 | PCI Express 3.0 x16 | PCI Express 3.0 x16 | PCI Express 3.0 x16 |
| Peak FP32 performance, GFLOPS (at maximum specified frequency) | 4,709 | 5,645 | 10,092 | 6,691 | 8,873 | 10,974 |
| FP64/FP32 rate | 1/3 | 1/3 | 1/3 | 1/32 | 1/32 | 1/32 |
| Memory bandwidth, GB/s | 288 | 336 | 2 × 336 | 336 | 320 | 480 |
| TDP, W | 250 | 250 | 375 | 250 | 180 | 250 |
| Suggested retail price at launch (US, before tax), $ | 999 | 999 | 2,999 | 999 | 599/699 | 1,200 |
| Recommended retail price at launch (Russia), rub. | 34,990 | 35,990 | 114,990 | 74,900 | — / 54,990 | — |

Image output interfaces: the Kepler and Maxwell models combine DL DVI (DVI-I and/or DVI-D), DisplayPort 1.2 and HDMI 1.4a; the Pascal models (GeForce GTX 1080, TITAN X) offer DL DVI-D, DisplayPort 1.3/1.4 and HDMI 2.0b.

In theoretical performance, TITAN X is the first single-GPU graphics card to break 10 TFLOPS of FP32 throughput. Among previous NVIDIA products, only the TITAN Z, built on a pair of GK110 chips, was capable of that. On the other hand, unlike the Tesla P100 (and like the GeForce GTX 1060/1070/1080), TITAN X shows very modest performance in double-precision (1/32 of FP32) and half-precision (1/64 of FP32) calculations, but it can perform int8 operations four times faster than FP32. The other Pascal gaming GPUs - GP104 (GeForce GTX 1070/1080, Tesla P4) and GP106 (GTX 1060) - also support int8 at a 4:1 ratio to FP32; whether this capability is restricted in the gaming GeForce cards we do not yet know.
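The 10-plus TFLOPS figure follows from the ALU count, the boost clock and two operations (one fused multiply-add) per core per clock; the int8 rate then scales by the quoted 4:1 factor. A back-of-the-envelope check:

```python
# Peak FP32 = CUDA cores * boost clock (GHz) * 2 ops (FMA) per clock.
cores, boost_ghz = 3584, 1.531

fp32_tflops = cores * boost_ghz * 2 / 1000
print(f"FP32: {fp32_tflops:.2f} TFLOPS")    # ~10.97 TFLOPS
print(f"int8: {fp32_tflops * 4:.1f} TOPS")  # ~43.9 TOPS at the 4:1 int8 rate
```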

TITAN X is a very, very expensive purchase that only those who truly want such a consummate video card will make. NVIDIA raised the price by $200 relative to previous single-processor models under this brand, to $1,200. This time the device is not distributed through partners; it is sold exclusively on the NVIDIA website in a number of selected countries, and Russia is not yet among them.

Design

The card's shell follows the same style as the Founders Edition products of the GeForce 10 line. The cooling system with its radial fan (blower) is covered by a metal shroud, and the back surface of the printed circuit board is protected by a thick plate, part of which can be removed to give unhindered airflow to the cooler of an adjacent card in SLI mode. Amusingly, although the TITAN X formally no longer belongs to the GeForce family, it is precisely that inscription, illuminated by green LEDs, that still adorns the side of the card.

The cooler design is the same as in the GTX 1070/1080: the GPU sheds heat into a heatsink with a vapor chamber, while the RAM chips and voltage-converter transistors are covered by a massive aluminum frame carrying a separate block of small fins.

Incidentally, as one TITAN X owner found out, NVIDIA allows users to replace the card's cooling system with something more efficient (for example, a liquid cooler) without voiding the warranty.

Board

Like the reference versions of the GTX 1060/1070/1080, the TITAN X board has three DisplayPort connectors and one each of DVI and HDMI.

The power system follows a 6+1 scheme (six phases for the GPU plus one for the memory chips). Two auxiliary power connectors are used, a six-pin and an eight-pin, which together with the power lines of the PCI Express slot give the card a power budget of 300 W.
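The 300 W headroom is the sum of the standard limits of the three power sources defined by the PCI Express specification; a sketch of that arithmetic:

```python
# Power budget from the PCI Express slot plus the two auxiliary connectors.
limits_w = {"PCIe x16 slot": 75, "6-pin": 75, "8-pin": 150}

print(f"Total budget: {sum(limits_w.values())} W")  # 300 W against a 250 W TDP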

The GDDR5X SDRAM, as on the GeForce GTX 1080, consists of Micron D9TXS chips rated for a standard effective frequency of 10 GHz.

Test stand, testing methodology

Test stand configuration
CPU: Intel Core i7-5960X @ 4.0 GHz (100 × 40)
Motherboard: ASUS RAMPAGE V EXTREME
RAM: Corsair Vengeance LPX, 2133 MHz, 4 × 4 GB
Storage: Intel SSD 520 240 GB + Crucial M550 512 GB
Power supply: Corsair AX1200i, 1200 W
CPU cooling: Thermalright Archon
Chassis: CoolerMaster Test Bench V1.0
Monitor: NEC EA244UHD
Operating system: Windows 10 Pro x64
AMD GPU software (all cards): Radeon Software Crimson Edition 16.8.2 Non-WHQL
NVIDIA GPU software (all cards): GeForce Game Ready Driver 372.54 WHQL

The CPU runs at a fixed frequency. In the NVIDIA driver settings, the CPU is selected as the PhysX processor. In the AMD driver settings, the Tessellation setting is switched from AMD Optimized to Use application settings.

Benchmarks: games

| Game (in release order) | API | Settings | AA at 1920×1080 / 2560×1440 | AA at 3840×2160 |
|---|---|---|---|---|
| Crysis 3 + FRAPS | DirectX 11 | Max. quality, start of the Swamp mission | MSAA 4x | Off |
| Battlefield 4 + FRAPS | DirectX 11 | Max. quality, start of the Tashgar mission | MSAA 4x + FXAA High | — |
| Metro: Last Light Redux, built-in benchmark | DirectX 11 | Max. quality | SSAA 4x | — |
| GTA V, built-in benchmark | DirectX 11 | Max. quality | MSAA 4x + FXAA | — |
| DiRT Rally | DirectX 11 | Max. quality | MSAA 4x | — |
| Rise of the Tomb Raider, built-in benchmark | DirectX 12 | Max. quality, VXAO off | SSAA 4x | — |
| Tom Clancy's The Division, built-in benchmark | DirectX 11 | Max. quality, HFTS off | SMAA 1x Ultra | — |
| HITMAN, built-in benchmark | DirectX 12 | Max. quality | SSAA 4x | — |
| Ashes of the Singularity, built-in benchmark | DirectX 12 | Max. quality | MSAA 4x + Temporal AA 4x | Off |
| DOOM | Vulkan | Max. quality, Foundry mission | TSSAA 8TX | — |
| Total War: WARHAMMER, built-in benchmark | DirectX 12 | Max. quality | MSAA 4x | — |
Benchmarks: video decoding, computing

| Program | Settings |
|---|---|
| DXVA Checker, Decode Benchmark, H.264 | Files 1920×1080p (High Profile, L4.1) and 3840×2160p (High Profile, L5.1), Microsoft H264 Video Decoder |
| DXVA Checker, Decode Benchmark, H.265 | Files 1920×1080p (Main Profile, L4.0) and 3840×2160p (Main Profile, L5.0), Microsoft H265 Video Decoder |
| LuxMark 3.1 x64 | Hotel Lobby scene (Complex Benchmark) |
| Sony Vegas Pro 13 | Sony benchmark for Vegas Pro 11, 65 s long, rendered to XDCAM EX, 1920×1080p, 24 Hz |
| SiSoftware Sandra 2016 SP1 | GPGPU Scientific Analysis, OpenCL, FP32/FP64 |
| CompuBench CL Desktop Edition x64 | Ocean Surface Simulation |
| CompuBench CL Desktop Edition x64 | Particle Simulation - 64K |

Test participants

The following video cards took part in performance testing:

  • NVIDIA TITAN X (1417/10000 MHz, 12 GB);
  • NVIDIA GeForce GTX TITAN X (1000/7012 MHz, 12 GB);
  • NVIDIA GeForce GTX TITAN (837/6008 MHz, 6 GB);
  • NVIDIA GeForce GTX 1080 (1607/10008 MHz, 8 GB);
  • AMD Radeon R9 295X2 (1018/5000 MHz, 8 GB);
  • AMD Radeon R9 Fury X (1050/1000 MHz, 4 GB).

Performance: 3DMark

Synthetic tests show TITAN X ahead of the GeForce GTX 1080 by 25% on average. Against the previous generation of the TITAN brand and the Radeon R9 Fury X, the new flagship offers 61-63% better performance, and it more than doubles the performance of the first, Kepler-based TITAN. The Radeon R9 295X2 holds a rather high position relative to the NVIDIA accelerator: the new product is only 18% faster in 3DMark.

3DMark (Graphics Score)

| Test | Resolution | TITAN X | GTX TITAN | GTX TITAN X | GTX 1080 | R9 295X2 | R9 Fury X |
|---|---|---|---|---|---|---|---|
| Fire Strike | 1920×1080 | 26 341 | 10 449 | 17 074 | 21 648 | 23 962 | 16 279 |
| Fire Strike Extreme | 2560×1440 | 13 025 | 4 766 | 7 945 | 10 207 | 10 527 | 7 745 |
| Fire Strike Ultra | 3840×2160 | 6 488 | 2 299 | 4 011 | 4 994 | 5 399 | 3 942 |
| Time Spy | 2560×1440 | 8 295 | 2 614 | 4 935 | 6 955 | 7 186 | 5 084 |
| Max. (vs TITAN X) | | | −60% | −35% | −16% | −9% | −38% |
| Average (vs TITAN X) | | | −64% | −38% | −20% | −15% | −39% |
| Min. (vs TITAN X) | | | −68% | −41% | −23% | −19% | −41% |



Performance: gaming (1920×1080, 2560×1440)

In tests at relatively low resolutions for such a powerful GPU, the new TITAN X outperforms the GeForce GTX 1080 by 15-20% on average (at 1080p and 1440p, respectively). The new flagship looks even more impressive against the best accelerators of the 28 nm era: it is 47-56% faster than the GM200-based GeForce GTX TITAN X and 67-72% ahead of the Radeon R9 Fury X.

If we take the very first TITAN of the Kepler generation, then we are talking about a more than twofold increase in performance.

1920×1080

| Game | AA | TITAN X (1417/10000 MHz, 12 GB) | GTX TITAN (837/6008 MHz, 6 GB) | GTX TITAN X (1000/7012 MHz, 12 GB) | GTX 1080 (1607/10008 MHz, 8 GB) | R9 295X2 (1018/5000 MHz, 8 GB) | R9 Fury X (1050/1000 MHz, 4 GB) |
|---|---|---|---|---|---|---|---|
| Ashes of the Singularity | MSAA 4x | 47 | 20 | 31 | 42 | 34 | 26 |
| Battlefield 4 | MSAA 4x + FXAA High | 162 | 71 | 118 | 149 | 134 | 94 |
| Crysis 3 | MSAA 4x | 99 | 45 | 65 | 79 | 90 | 60 |
| DiRT Rally | MSAA 4x | 126 | 57 | 83 | 101 | 97 | 65 |
| DOOM | TSSAA 8TX | 200 | 69 | 151 | 185 | 122 | 156 |
| GTA V | MSAA 4x + FXAA | 85 | 44 | 68 | 84 | 76 | 52 |
| HITMAN | SSAA 4x | 68 | 21 | 39 | 52 | 24 | 33 |
| Metro: Last Light Redux | SSAA 4x | 124 | 47 | 73 | 92 | 94 | 70 |
| Rise of the Tomb Raider | SSAA 4x | 70 | 28 | 47 | 62 | 55 | 41 |
| Tom Clancy's The Division | SMAA 1x Ultra | 87 | 35 | 59 | 80 | 57 | 58 |
| Total War: WARHAMMER | MSAA 4x | 76 | 38 | 56 | 73 | 37 | 49 |
| Max. (vs TITAN X) | | | −48% | −20% | −0% | −9% | −22% |
| Average (vs TITAN X) | | | −58% | −32% | −13% | −29% | −40% |
| Min. (vs TITAN X) | | | −69% | −43% | −26% | −65% | −51% |

2560×1440

| Game | AA | TITAN X (1417/10000 MHz, 12 GB) | GTX TITAN (837/6008 MHz, 6 GB) | GTX TITAN X (1000/7012 MHz, 12 GB) | GTX 1080 (1607/10008 MHz, 8 GB) | R9 295X2 (1018/5000 MHz, 8 GB) | R9 Fury X (1050/1000 MHz, 4 GB) |
|---|---|---|---|---|---|---|---|
| Ashes of the Singularity | MSAA 4x | 39 | 16 | 24 | 33 | 27 | 21 |
| Battlefield 4 | MSAA 4x + FXAA High | 109 | 47 | 75 | 98 | 95 | 65 |
| Crysis 3 | MSAA 4x | 63 | 27 | 40 | 53 | 59 | 39 |
| DiRT Rally | MSAA 4x | 93 | 40 | 60 | 74 | 71 | 48 |
| DOOM | TSSAA 8TX | 166 | 45 | 95 | 126 | 82 | 107 |
| GTA V | SMAA | 67 | 31 | 48 | 63 | 61 | 39 |
| HITMAN | MSAA 4x + FXAA | 43 | 13 | 24 | 33 | 12 | 17 |
| Metro: Last Light Redux | SSAA 4x | 71 | 26 | 43 | 52 | 54 | 43 |
| Rise of the Tomb Raider | Not supported | 44 | 16 | 28 | 38 | 23 | 27 |
| Tom Clancy's The Division | SSAA 4x | 63 | 24 | 43 | 58 | 45 | 44 |
| Total War: WARHAMMER | SMAA 1x High | 57 | 26 | 39 | 50 | 25 | 34 |
| Max. (vs TITAN X) | | | −53% | −29% | −6% | −6% | −30% |
| Average (vs TITAN X) | | | −61% | −36% | −16% | −33% | −42% |
| Min. (vs TITAN X) | | | −73% | −44% | −27% | −72% | −60% |

Performance: gaming (3840×2160)

When moving from 1440p to 4K, the ratio between NVIDIA graphics cards remains the same. TITAN X is 20% faster than GeForce GTX 1080 and 56% faster than Maxwell based TITAN X.

The Radeon R9 Fury X, as is typical of this model, copes with 4K tests more effectively, which trims the Titan's advantage over it to 56%.

3840×2160

| Game | AA | TITAN X (1417/10000 MHz, 12 GB) | GTX TITAN (837/6008 MHz, 6 GB) | GTX TITAN X (1000/7012 MHz, 12 GB) | GTX 1080 (1607/10008 MHz, 8 GB) | R9 295X2 (1018/5000 MHz, 8 GB) | R9 Fury X (1050/1000 MHz, 4 GB) |
|---|---|---|---|---|---|---|---|
| Ashes of the Singularity | Off | 45 | 20 | 29 | 41 | 38 | 37 |
| Battlefield 4 | — | 84 | 35 | 57 | 74 | 72 | 52 |
| Crysis 3 | — | 42 | 18 | 28 | 36 | 40 | 29 |
| DiRT Rally | — | 65 | 26 | 41 | 50 | 48 | 33 |
| DOOM | — | 92 | 24 | 51 | 68 | 45 | 57 |
| GTA V | — | 55 | 25 | 39 | 51 | 49 | 34 |
| HITMAN | — | 67 | 21 | 38 | 53 | 24 | 33 |
| Metro: Last Light Redux | — | 64 | 23 | 38 | 47 | 47 | 38 |
| Rise of the Tomb Raider | — | 50 | 19 | 33 | 44 | 37 | 31 |
| Tom Clancy's The Division | — | 38 | 15 | 25 | 33 | 26 | 28 |
| Total War: WARHAMMER | — | 43 | 20 | 30 | 38 | 20 | 32 |
| Max. (vs TITAN X) | | | −53% | −29% | −7% | −5% | −18% |
| Average (vs TITAN X) | | | −61% | −36% | −16% | −29% | −36% |
| Min. (vs TITAN X) | | | −74% | −45% | −27% | −64% | −51% |

Note: Total War: WARHAMMER does not support DirectX 12 for GeForce GTX TITAN.

Performance: video decoding

The GP102 integrates the same hardware codec as the two junior Pascal GPUs, so TITAN X decodes H.264 and HEVC on par with the GeForce GTX 1080, adjusted for its lower GPU clocks. Pascal's performance here is unmatched by both NVIDIA's Maxwell-era codecs and those in AMD's Polaris chips.

Note: since decoders usually do not differ within the same GPU line, the diagrams show one device from each family (or more if this rule is violated).

Note 2: the GeForce GTX TITAN X, like other devices based on the Maxwell architecture except the GM206 (GeForce GTX 950/960), performs only partial hardware decoding of H.265, backed up by CPU resources.

Performance: Computing

How the architectures compare in GPGPU tasks depends on the specifics of each application. TITAN X mostly delivers predictable gains over the GeForce GTX 1080, but there are exceptions where the task hinges on GPU frequency (such as the particle-physics test in CompuBench CL and rendering in Sony Vegas): there the advantage goes to the GTX 1080. Conversely, the new TITAN X wins back ground where the GeForce GTX 1080 trails the Maxwell-based TITAN X and the Radeon R9 Fury X (ray tracing in LuxMark).

In SiSoftware Sandra's Matrix Multiplication and Fast Fourier Transform tests, TITAN X leads in FP32 mode. As for FP64, through sheer brute force (a large number of CUDA cores and high clock speeds) the accelerator outscores the original Kepler-generation TITAN and the Radeon R9 Fury X, cards with a more favorable FP64-to-FP32 ratio. So TITAN X cannot be written off entirely as an accelerator for double-precision tasks, although the Radeon R9 295X2 suits that purpose best. AMD cards also hold strong positions in some other tests: the water-surface calculation in CompuBench CL and Sony Vegas.

Clock speeds, power consumption, temperature, overclocking

Under gaming load, the TITAN X GPU periodically reaches the same high clock speeds as the GP104 in the GTX 1080 (1848 vs. 1860 MHz), but most of the time it stays in a significantly lower range (1557-1671 MHz). At the same time, the maximum GPU supply voltage is 1.062 V (1.05 V in the GTX 1080).

The cooling-system fan spins at up to 2472 rpm. The card needs stronger cooling than the GTX 1080, and since the cooler design is unchanged, it makes more noise. To compensate, NVIDIA set the TITAN X's target GPU temperature 3°C higher.

Although the Pascal-based TITAN X formally has the same TDP as the previous-generation TITAN X, in practice a system with the new card draws significantly more power (49 W more). The increased load on the CPU, which feeds a more efficient graphics processor, may play a role here. In FurMark, on the other hand, all the 250 W accelerators (along with the 275 W Fury X) land on the same level.

To overclock the Titan, we used the stock options: we raised the card's power limit by 20%, ran the cooling turbine at full speed (4837 rpm) and increased the maximum GPU voltage to 1.093 V (the same value as on the GTX 1080). As a result, we raised the base GPU frequency by 200 MHz, to 1617 MHz, and the effective memory frequency to 11100 MHz.

This alone is not bad for such a large chip, but the increased power limit is no less important. The overclocked GPU supports frequencies in the 1974-1987 MHz range, peaking at 2063 MHz, which is nothing less than an amazing achievement. For comparison: the peak frequency of the GPU in our GTX 1080 instance during overclocking was 2126 MHz.

A system with the overclocked TITAN X draws 46 W more than with the stock card. Running the fan at maximum speed brought the GPU temperature down by 17-20°C, so equally effective overclocking can be expected at lower RPM and a relatively comfortable noise level.

Performance: overclocking

Overclocking TITAN X allows for a very significant increase in performance - by 14% in 3DMark and by 18-23% in gaming benchmarks at 1080p and 1440p resolutions. In games at 4K resolution, the bonus reaches 26%.

The gap between an overclocked TITAN X and a GeForce GTX 1080 at reference frequencies reaches a shocking 36%, 47% and 50% at the three resolutions we used. The GTX 1080 can, of course, be overclocked too, but as our review of the reference card showed, that adds only 9%, 13% and 12% to its results. Thus, comparing the overclocked flagship of the GeForce 10 line with the overclocked TITAN X, the latter's advantage works out to 25%, 30% and 34%.
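The "both overclocked" percentages are obtained by composing the two speedups; a check of that arithmetic using the figures quoted above:

```python
# OC TITAN X vs OC GTX 1080 = (1 + lead_over_stock_1080) / (1 + gtx1080_oc_gain) - 1.
leads   = {"1080p": 0.36, "1440p": 0.47, "2160p": 0.50}  # OC TITAN X vs stock GTX 1080
oc_gain = {"1080p": 0.09, "1440p": 0.13, "2160p": 0.12}  # GTX 1080's own OC gain

for res in leads:
    rel = (1 + leads[res]) / (1 + oc_gain[res]) - 1
    print(f"{res}: +{rel * 100:.0f}%")  # +25%, +30%, +34%
```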

Using our earlier data for the overclocked GM200-based GeForce GTX TITAN X, we can run the same calculation for the two generations of Titans. The overclocked Pascal TITAN X is ahead of its predecessor by 75%, 93% and 97%. With both accelerators overclocked, the newcomer keeps a lead of 74% and 70% at 1440p and 2160p; as readers who criticized that decision will remember, we did not test at 1080p in the GeForce GTX TITAN X review.

3DMark (Graphics Score)

| Test | Resolution | GTX 1080 (1607/10008 MHz) | TITAN X (1417/10000 MHz) | TITAN X OC (1617/11110 MHz) |
|---|---|---|---|---|
| Fire Strike | 1920×1080 | 21 648 | 26 341 | 31 038 |
| Fire Strike Extreme | 2560×1440 | 10 207 | 13 025 | 15 191 |
| Fire Strike Ultra | 3840×2160 | 4 994 | 6 488 | 7 552 |
| Time Spy | 2560×1440 | 6 955 | 8 295 | 8 644 |
| Max. (vs GTX 1080) | | | +30% | +51% |
| Average (vs GTX 1080) | | | +25% | +42% |
| Min. (vs GTX 1080) | | | +19% | +24% |

1920×1080

| Game | AA | GTX 1080 (1607/10008 MHz) | TITAN X (1417/10000 MHz) | TITAN X OC (1617/11110 MHz) |
|---|---|---|---|---|
| DiRT Rally | MSAA 4x | 101 | 126 | 126 |
| DOOM | TSSAA 8TX | 185 | 200 | 200 |
| GTA V | MSAA 4x + FXAA | 84 | 85 | 96 |
| HITMAN | SSAA 4x | 52 | 68 | 77 |
| Metro: Last Light Redux | SSAA 4x | 92 | 124 | 140 |
| Rise of the Tomb Raider | SSAA 4x | 62 | 70 | 94 |
| Tom Clancy's The Division | SMAA 1x Ultra | 80 | 87 | 117 |
| Total War: WARHAMMER | MSAA 4x | 73 | 76 | 88 |
| Max. (vs GTX 1080) | | | +35% | +57% |
| Average (vs GTX 1080) | | | +16% | +36% |
| Min. (vs GTX 1080) | | | +0% | +8% |

TITAN X is positioned above all as an accelerator for GPGPU tasks, with machine learning taking priority thanks to the GP102's support for the int8 format at a 4:1 rate relative to FP32 operations. In most FP32-based computing applications, TITAN X also leads every previously released gaming and prosumer accelerator.

Nor should double-precision work be dismissed. Although cards based on GPUs such as NVIDIA's GK110/GK210 and AMD's Tahiti and Hawaii have a better FP32-to-FP64 ratio, TITAN X achieves at least competitive results in this category thanks to its advanced manufacturing process, which gives the card high clock speeds and a huge array of CUDA cores.

For our site, the new TITAN X is interesting above all as a gaming video card, and in that capacity it leaves a mixed impression. On the one hand, a 15-20% lead over the GeForce GTX 1080 in gaming benchmarks does not justify the model's steep $1,200 price from a buyer's standpoint, and the card still cannot run many modern games at 4K with maximum graphics settings at a comfortable frame rate (60 FPS).

On the other hand, NVIDIA's 250 W TDP limit clearly does not match the GPU's capabilities. Overclocked with stock tools alone, the TITAN X easily exceeds 2 GHz, ultimately delivering 34% more performance at 4K than a (likewise overclocked) GeForce GTX 1080. Overclocking, in effect, makes TITAN X the first gaming card unconditionally suitable for such settings.

2560×1440

| Game | AA | GTX 1080 (1607/10008 MHz) | TITAN X (1417/10000 MHz) | TITAN X OC (1617/11110 MHz) |
|---|---|---|---|---|
| Ashes of the Singularity | MSAA 4x | 33 | 39 | 48 |
| Battlefield 4 | MSAA 4x + FXAA High | 98 | 109 | 146 |
| Crysis 3 | MSAA 4x | 53 | 63 | 81 |
| DiRT Rally | MSAA 4x | 74 | 93 | 93 |
| DOOM | TSSAA 8TX | 126 | 166 | 183 |
| GTA V | SMAA | 63 | 67 | 86 |
| HITMAN | MSAA 4x + FXAA | 33 | 43 | 49 |
| Metro: Last Light Redux | SSAA 4x | 52 | 71 | 82 |
| Rise of the Tomb Raider | Not supported | 38 | 44 | 59 |
| Tom Clancy's The Division | SSAA 4x | 58 | 63 | 86 |
| Total War: WARHAMMER | SMAA 1x High | 50 | 57 | 74 |
| Max. (vs GTX 1080) | | | +36% | +58% |
| Average (vs GTX 1080) | | | +20% | +47% |
| Min. (vs GTX 1080) | | | | |

3840×2160

| Game | GTX 1080 (1607/10008 MHz) | TITAN X (1417/10000 MHz) | TITAN X OC (1617/11110 MHz) |
|---|---|---|---|
| DOOM | 68 | 92 | 104 |
| GTA V | 51 | 55 | 75 |
| HITMAN | 53 | 67 | 77 |
| Metro: Last Light Redux | 47 | 64 | 74 |
| Rise of the Tomb Raider | 44 | 50 | 69 |
| Tom Clancy's The Division | 33 | 38 | 52 |
| Total War: WARHAMMER | 38 | 43 | 58 |
| Max. (vs GTX 1080) | | +37% | +59% |
| Average (vs GTX 1080) | | | |
The previous elite graphics card, the NVIDIA GeForce GTX TITAN X 12 GB, was released in March 2015 and was based on the GM200 GPU of the Maxwell 2.0 architecture. At the time, the newcomer stood out with a colossal amount of video memory for a gaming card, very high performance, and a matching price ($999). The GeForce GTX TITAN X's dashing prowess faded after only three months, however, when the equally fast (in games) GeForce GTX 980 Ti was presented to the public at a far more palatable price ($649).

It appears NVIDIA has decided to repeat this announcement sequence in its line of top graphics solutions - "GeForce GTX 980 -> GeForce TITAN X -> GeForce GTX 980 Ti" - only now the cards are based on GP104/GP102 cores of the Pascal architecture, produced on a 16 nm process. We have already met the first card, the NVIDIA GeForce GTX 1080, and its original (non-reference) versions. Now it is time to examine NVIDIA's newest graphics card of phenomenal performance, the TITAN X.

The newcomer costs $200 more than its predecessor, at $1,200, and is, of course, still positioned as a professional card for research and deep learning. But, as you probably understand, we are primarily interested in its performance in games and graphics benchmarks, since all gamers are eagerly awaiting the GeForce GTX 1080 Ti, the latest hints of which have already cost the company's most devoted fans their sleep. Nevertheless, we will also run the NVIDIA TITAN X through selected computing benchmarks to verify its credentials as a professional graphics card.

1. NVIDIA TITAN X 12 GB super video card review

video card specifications and recommended cost

The specifications and cost of the NVIDIA TITAN X are shown in the table in comparison with the reference NVIDIA GeForce GTX 1080 and the older GeForce GTX TITAN X.




packaging and equipment

NVIDIA reserved the release of TITAN X strictly for itself, so the packaging is standard: a compact box that opens upward, with the card seated in its center in an antistatic bag.



There are no accessories in the box, although it has one extra compartment inside. Recall that the recommended price of the NVIDIA TITAN X is 1200 US dollars.

PCB design and features

The design of the new NVIDIA TITAN X is bolder, even aggressive, compared with the GeForce GTX TITAN X. The cooling-system shroud on the front of the card gained extra facets that glint in the light, and the back of the PCB is covered with a corrugated metal plate.




Together with the chrome-plated fan rotor and the matching inscription on the front, the card looks genuinely stylish and attractive. Note that the glowing "GEFORCE GTX" lettering remains on the top edge of the NVIDIA TITAN X, even though those words are no longer part of the card's name.




The reference video card is 268 mm long, 102 mm high and 37 mm thick.

The video outputs, on a panel additionally perforated with triangular vents, are as follows: a DVI-D, three DisplayPort version 1.4 and one HDMI version 2.0b.




In this regard, the novelty has no changes in comparison with the GeForce GTX 1080.

The card has two connectors for building SLI configurations: 2-way, 3-way and 4-way setups are supported, using both the new rigid bridges and the old flexible ones.




While the reference GeForce GTX 1080 makes do with a single eight-pin auxiliary power connector, the TITAN X adds a six-pin connector as well, which is no surprise: the card's declared power consumption is 250 W, like the GeForce GTX TITAN X before it. The recommended power supply for a system with one such card is at least 600 W.

The NVIDIA TITAN X reference PCB is considerably more complex than the GeForce GTX 1080 board, which is logical given the higher power requirements, larger video memory and wider memory bus.




The GPU power system is five-phase using Dr.MOS power elements and tantalum-polymer capacitors. Two more power phases are dedicated to video memory.



The uP9511P controller from uPI Semiconductor is responsible for GPU power management.



Monitoring functions are provided by the INA3221 controller manufactured by Texas Instruments.



Manufactured on the 16 nm process, the 471 mm² GP102 die was produced in week 21 of 2016 (the end of May) and belongs to revision A1.


Beyond the architectural improvements of the Pascal GPU line, the new GP102 contains 16.7% more unified shader processors than the GM200 of the NVIDIA GeForce GTX TITAN X, 3584 in total, and its advantage over the GP104 of the GeForce GTX 1080 is an impressive 40%. The same proportions hold for texture units, of which the new TITAN X has 224. The GP102's numbers are rounded out by 96 raster operations units (ROPs).

GPU frequencies have grown as well. Where the GeForce GTX TITAN X had a base 3D frequency of 1000 MHz with boost up to 1076 MHz, the new TITAN X has a base frequency of 1418 MHz (+41.8%) and a declared boost frequency of 1531 MHz. In fact, monitoring showed the GPU briefly climbing to 1848 MHz and averaging 1823 MHz - a very substantial increase over the predecessor. In 2D mode, the GPU frequency drops to 139 MHz, with the voltage falling from 1.050 V to 0.781 V.
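The percentages above are easy to re-derive from the raw unit counts and clocks:

```python
# Relative increases quoted in the text, recomputed from the raw numbers.
titan_x = {"cores": 3584, "base_mhz": 1418}
gm200   = {"cores": 3072, "base_mhz": 1000}  # GeForce GTX TITAN X
gp104   = {"cores": 2560}                    # GeForce GTX 1080

print(f"Cores vs GM200: +{(titan_x['cores'] / gm200['cores'] - 1) * 100:.1f}%")  # +16.7%
print(f"Cores vs GP104: +{(titan_x['cores'] / gp104['cores'] - 1) * 100:.1f}%")  # +40.0%
print(f"Base clock vs GM200: "
      f"+{(titan_x['base_mhz'] / gm200['base_mhz'] - 1) * 100:.1f}%")            # +41.8%
```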

NVIDIA TITAN X carries 12 GB of GDDR5X memory in twelve Micron chips (marked 6KA77 D9TXS), soldered only on the front side of the printed circuit board.



Compared with the previous GeForce GTX TITAN X on the GM200, the memory frequency of the new GP102-based TITAN X is 10008 MHz, 42.7% higher. With the memory bus width unchanged at 384 bits, TITAN X's memory bandwidth thus reaches an impressive 480.4 GB/s, only slightly short of the current record holder, the AMD Radeon R9 Fury X with its high-speed HBM and 512 GB/s. In 2D mode, the memory frequency is lowered to an effective 810 MHz.

We will round off the hardware overview with data from the GPU-Z utility.


We also post the BIOS of the video card, read and saved using the same utility.

cooling system - efficiency and noise level

The NVIDIA TITAN X cooling system is identical to the NVIDIA GeForce GTX 1080 Founders Edition cooler.



It is based on a nickel-plated aluminum heatsink with a copper vapor chamber at its base, responsible for cooling the GPU.



The heatsink has a small surface area, and the fin spacing does not exceed two millimeters.



It is therefore easy to predict that GPU cooling efficiency with this heatsink will depend heavily on fan speed (which, in fact, was confirmed later).

A metal plate with thermal pads is provided for cooling the memory chips and elements of the power circuits.



To load the card for thermal testing, we used nineteen loops of the Fire Strike Ultra stress test from the 3DMark suite.



Temperatures and all other parameters were monitored with MSI Afterburner 4.3.0 Beta 14 and the GPU-Z utility version 1.12.0. The tests were run inside a closed system case (its configuration appears in the next section of the article) at a room temperature of 23.5-23.9 degrees Celsius.

First of all, we tested the NVIDIA TITAN X's cooling efficiency and thermal performance with fully automatic fan speed control.



Auto mode (1500~3640 rpm)


As the monitoring graph shows, the GPU temperature of the NVIDIA TITAN X very quickly reached 88-89 degrees Celsius and then, thanks to a relatively sharp rise in fan speed from 1500 to 3500 rpm, stabilized around 86 degrees. Later in the test the fan climbed further, to 3640 rpm. Hardly any of us expected different temperatures from a reference card with a 250 W thermal package; they are practically indistinguishable from those of the GeForce GTX TITAN X.

At maximum fan speed, the GPU temperature of the NVIDIA TITAN X drops by 12-13 degrees Celsius compared with automatic fan control.



Maximum speed (~4830 rpm)


In both fan modes the NVIDIA TITAN X is a very noisy graphics card. Incidentally, NVIDIA does not void this model's warranty if the reference cooler is replaced with an alternative.

overclocking potential

When probing the NVIDIA TITAN X's overclocking potential, we raised the power limit to the maximum available 120%, lifted the temperature limit to 90 degrees Celsius and manually fixed the fan at 88%, or 4260 rpm. After several hours of testing we found that, without losing stability or producing image defects, the GPU's base frequency could be raised by 225 MHz (+15.9%) and the effective video memory frequency by 1240 MHz (+12.4%).



As a result, the frequencies of the overclocked NVIDIA TITAN X in 3D mode amounted to 1643-1756/11248 MHz.
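Expressed as percentages, the overclock is straightforward to recompute:

```python
# Overclocking gains relative to the stock TITAN X frequencies.
gpu_stock, gpu_oc = 1418, 1643    # base GPU clock, MHz
mem_stock, mem_oc = 10008, 11248  # effective memory clock, MHz

print(f"GPU:    +{(gpu_oc / gpu_stock - 1) * 100:.1f}%")  # +15.9%
print(f"Memory: +{(mem_oc / mem_stock - 1) * 100:.1f}%")  # +12.4%
```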


Because of the considerable spread of GPU frequencies during the thermal test of the overclocked card, the 3DMark test once again reported TITAN X as unstable.



Despite this, all 19 loops of the test and every game in the test set passed successfully, and according to the monitoring data the overclocked card's core frequency rose as high as 1987 MHz.



88% power (~4260 rpm)


Given the overclocking of the reference NVIDIA TITAN X, we can assume the original (non-reference) GeForce GTX 1080 Ti will overclock even better - but time will tell.

2. Test configuration, tools and testing methodology

Video cards were tested on a system with the following configuration:

motherboard: ASUS X99-A II (Intel X99 Express, LGA2011-v3, BIOS 1201 from 10/11/2016);
CPU: Intel Core i7-6900K (14 nm, Broadwell-E, R0, 3.2 GHz, 1.1 V, 8 x 256 KB L2, 20 MB L3);
CPU cooling system: Phanteks PH-TC14PE (2 Corsair AF140, ~900 rpm);
thermal interface: ARCTIC MX-4 (8.5 W/(m*K));
RAM: DDR4 4 × 4 GB Corsair Vengeance LPX 2800 MHz (CMK16GX4M4A2800C16) (XMP 2800 MHz / 16-18-18-36_2T / 1.2 V or 3000 MHz / 16-18-18-36_2T / 1.35 V);
video cards:

NVIDIA TITAN X 12GB 1418-1531(1848)/10008MHz and overclocked to 1643-1756(1987)/11248MHz;
Gigabyte GeForce GTX 1080 G1 Gaming 8 GB 1607-1746(1898)/10008 MHz and overclocked to 1791-1930(2050)/11312 MHz;
NVIDIA GeForce GTX 980 Ti 6 GB 1000-1076(1189)/7012 MHz and overclocked to 1250-1326(1437)/8112 MHz;

disk for system and games: Intel SSD 730 480GB (SATA-III, BIOS vL2010400);
benchmark drive: Western Digital VelociRaptor (SATA-II, 300 GB, 10,000 rpm, 16 MB, NCQ);
backup disk: Samsung Ecogreen F4 HD204UI (SATA-II, 2 TB, 5400 rpm, 32 MB, NCQ);
sound card: Auzen X-Fi HomeTheater HD;
case: Thermaltake Core X71 (four be quiet! Silent Wings 2 (BL063) at 900 rpm);
control and monitoring panel: Zalman ZM-MFC3;
PSU: Corsair AX1500i Digital ATX (1500W, 80 Plus Titanium), 140mm fan;
Monitor: 27" Samsung S27A850D (DVI, 2560 x 1440, 60Hz)

Naturally, we no longer had earlier TITAN X cards on hand, so the new product is compared with two other, by no means slow, video cards. The first is the original Gigabyte GeForce GTX 1080 G1 Gaming, which we tested both at the frequencies of the reference NVIDIA GeForce GTX 1080 and overclocked to 1791-1930/11312 MHz.





Note that the peak frequency of the graphics processor of this video card during overclocking reached 2050 MHz.

The second test card is the reference NVIDIA GeForce GTX 980 Ti, whose performance we tested both at nominal frequencies and when overclocked to 1250-1326(1437)/8112 MHz.





Since the GeForce GTX 980 Ti at its release matched the previous GeForce GTX TITAN X in gaming performance, this comparison can be viewed as a comparison of two different TITAN Xs. We add that the power and temperature limits on all cards were raised to their maximums, and the GeForce drivers were set to prefer maximum performance.

To reduce the dependence of the video cards' performance on platform speed, the 14 nm eight-core CPU was overclocked to 4.0 GHz (multiplier 40, reference clock 100 MHz, Load-Line Calibration at level 3) with its voltage raised in the motherboard BIOS to 1.2095 V.



The 16 gigabytes of RAM meanwhile ran at 3.2 GHz with 16-16-16-28 CR1 timings at 1.35 V.

Testing began on October 20, 2016 under Microsoft Windows 10 Professional with all current updates and the following drivers installed:

Intel Chipset Drivers 10.1.1.38 WHQL dated 10/12/2016;
Intel Management Engine Interface (MEI) - 11.6.0.1025 WHQL dated 10/14/2016;
NVIDIA graphics card drivers - GeForce 375.57 WHQL from 10/20/2016.

Since the video cards in today's tests are very fast, we dropped the 1920 × 1080 resolution and tested only at 2560 × 1440; higher resolutions, unfortunately, our monitor does not support, and given the results in the latest titles there is no reason to regret that. Two graphics-quality modes were used: Quality + AF16x (default driver texture quality with 16x anisotropic filtering) and Quality + AF16x + MSAA 4x (8x), adding 4x or 8x full-screen anti-aliasing in cases where the average frame rate remained high enough for comfortable play. In some games, owing to the specifics of their engines, other anti-aliasing algorithms were used, as indicated in the methodology and in the charts. Anisotropic filtering and full-screen anti-aliasing were enabled directly in the game settings; where a game lacked such settings, the parameters were changed in the GeForce driver control panel, where vertical synchronization (V-Sync) was also forcibly disabled. Beyond that, no changes were made to the driver settings.

The graphics cards were tested in one graphics test, one VR test and fifteen games, updated to their latest versions as of the start of work on this material. Compared with our previous video card test, the old and resource-hungry Thief and Sniper Elite III were dropped from the set, while the new Total War: WARHAMMER and Gears of War 4 with DirectX 12 support were added (the set now includes five such games), and another new DirectX 12 title will join the list in upcoming video card articles. The list of test applications now looks as follows (games, and the results in them, are ordered by official release date):

3DMark (DirectX 9/11) - version 2.1.2973, tested in the Fire Strike, Fire Strike Extreme, Fire Strike Ultra and Time Spy scenes (the graphics score is shown in the charts);
SteamVR - test of "virtual reality" support; the result is the number of frames tested during the run;
Crysis 3 (DirectX 11) - version 1.3.0.0, all graphics quality settings at maximum, blur level medium, glare enabled, modes with FXAA and with MSAA 4x, double sequential pass of the scripted scene from the start of the Swamp mission, 105 seconds long;
Metro: Last Light (DirectX 11) - version 1.0.0.15, built-in test, graphics quality and tessellation at Very High, Advanced PhysX in two test modes, tests with SSAA and without anti-aliasing, double sequential run of the D6 scene;
Battlefield 4 (DirectX 11) - version 1.2.0.1, all graphics quality settings at Ultra, double sequential run of the scripted scene from the start of the TASHGAR mission, 110 seconds long;
Grand Theft Auto V (DirectX 11) - build 877, Very High quality settings, Ignore Suggested Limits enabled, V-Sync disabled, FXAA enabled, NVIDIA TXAA disabled, MSAA for reflections disabled, NVIDIA soft shadows;
DiRT Rally (DirectX 11) - version 1.22, the built-in test on the Okutama track, graphics quality settings at maximum for all items, Advanced Blending - On; tests with MSAA 8x and without anti-aliasing;
Batman: Arkham Knight (DirectX 11) - version 1.6.2.0, quality settings at High, Texture Resolution normal, Anti-Aliasing on, V-Sync disabled, tests in two modes - with and without the last two NVIDIA GameWorks options, double sequential run of the built-in test;
Tom Clancy's Rainbow Six: Siege (DirectX 11) - version 4.3, texture quality at Very High, Texture Filtering - Anisotropic 16X and other maximum quality settings, tests with MSAA 4x and without anti-aliasing, double sequential run of the built-in test;
Rise of the Tomb Raider (DirectX 12) - version 1.0 build 753.2_64, all parameters at Very High, Dynamic Foliage - High, Ambient Occlusion - HBAO+, tessellation and other quality-enhancing techniques enabled, two cycles of the built-in benchmark (Geothermal Valley scene) without anti-aliasing and with SSAA 4.0;
Far Cry Primal (DirectX 11) - version 1.3.3, maximum quality level, high-resolution textures, volumetric fog and shadows at maximum, built-in performance test without anti-aliasing and with SMAA enabled;
Tom Clancy's The Division (DirectX 11) - version 1.4, maximum quality level, all image-enhancement options enabled, Temporal AA - Supersampling, test modes without anti-aliasing and with SMAA 1X Ultra, built-in performance test with results recorded by FRAPS;
Hitman (DirectX 12) - version 1.5.3, built-in test with graphics quality at Ultra, SSAO enabled, shadow quality Ultra, memory protection disabled;
Deus Ex: Mankind Divided (DirectX 12) - version 1.10 build 592.1, all quality settings manually set to maximum, tessellation and depth of field enabled, at least two consecutive runs of the built-in benchmark;
Total War: WARHAMMER (DirectX 12) - version 1.4.0 build 11973.949822, all graphics quality settings at maximum, reflections enabled, unlimited video memory and SSAO enabled, double sequential run of the built-in benchmark;
Gears of War 4 (DirectX 12) - version 9.3.2.2, quality settings at Ultra, V-Sync disabled, all effects enabled, 150% resolution scaling (up to 3840 × 2160) in place of the unsupported anti-aliasing, double sequential run of the built-in benchmark.

Where a game could record the minimum frame rate, it too is shown in the charts. Each test was run twice, and the better of the two results was taken as final, but only if the difference between them did not exceed 1%. If it did, the test was repeated at least once more to obtain a reliable result.
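In rough pseudocode form the retry rule looks like this; run_benchmark here is a hypothetical stand-in for launching a game's built-in test and reading back the average fps:

```python
# Sketch of the protocol described above: accept the better of two runs when
# they agree within 1%, otherwise keep retesting until two runs do agree.
def reliable_fps(run_benchmark, tolerance=0.01, max_extra_runs=5):
    results = [run_benchmark(), run_benchmark()]
    for _ in range(max_extra_runs):
        best, runner_up = sorted(results, reverse=True)[:2]
        if (best - runner_up) / best <= tolerance:
            return best                  # runs agree within 1%: accept the best
        results.append(run_benchmark())  # deviation above 1%: repeat the test
    return max(results)
```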

3. Performance test results

In the diagrams, the results of testing video cards without overclocking are highlighted in green, and with overclocking - in dark turquoise. Since all the results on the diagrams have a common pattern, we will not comment on each of them separately, but will analyze the summary diagrams in the next section of the article.

3DMark




SteamVR




Crysis 3




Metro: Last Light







Battlefield 4




Grand Theft Auto V




DiRT Rally




Batman: Arkham Knight




Tom Clancy's Rainbow Six: Siege




Rise of the Tomb Raider




Far Cry Primal




Tom Clancy's The Division




Hitman




Deus Ex: Mankind Divided




Total War: WARHAMMER

Since we are testing Total War: WARHAMMER for the first time, here are the settings at which this game will be tested today and in our subsequent articles about video cards.



And then the results.




Gears of War 4

We will likewise show the settings for Gears of War 4, a new game included in the test set for the first time.








The results are as follows.



The charts are supplemented by a final table of test results with the average and minimum frame rates for each video card.



Next in line are summary charts and analysis of the results.

4. Summary charts and analysis of results

In the first pair of summary charts we compare the new NVIDIA TITAN X 12 GB at stock frequencies with the reference NVIDIA GeForce GTX 980 Ti 6 GB, also at stock. The latter's results serve as the baseline, and the NVIDIA TITAN X's average FPS is plotted as a percentage of them. The new card's advantage is, without doubt, impressive.



Under our test conditions and settings, the NVIDIA TITAN X is at least 48% faster than the NVIDIA GeForce GTX 980 Ti, peaking at a staggering 85%! Considering that in games the GeForce GTX 980 Ti was effectively equal to the previous GeForce TITAN X, the NVIDIA TITAN X is just as far ahead of its own predecessor. The progress of the full-fledged Pascal GPU is incredible; it is a pity all of this is so expensive for now, but the GeForce GTX 1080 Ti glimmering on the horizon will be noticeably more affordable (the only question being what exactly gets cut). On average across all games at 2560 × 1440, the NVIDIA TITAN X outpaces the NVIDIA GeForce GTX 980 Ti by 64.7% without anti-aliasing and by 70.4% with the various anti-aliasing algorithms enabled.

Now let us see by how much the NVIDIA TITAN X leads the Gigabyte GeForce GTX 1080 G1 Gaming at stock, with the latter's frequencies brought down to those of the reference GeForce GTX 1080.



Again a very decent performance gain! The newcomer is at least 19% faster than the GeForce GTX 1080, and in Rise of the Tomb Raider its lead reaches an impressive 45.5%. On average across all games, the NVIDIA TITAN X is 27.0% faster without anti-aliasing and 32.7% faster with anti-aliasing enabled.

Now let us dream that, in releasing the GeForce GTX 1080 Ti, NVIDIA will not cut down the top Pascal chip's block and shader-processor counts, while its partners release original versions with raised frequencies. By how much would the flagship's performance grow then? The answer is in the following summary chart.



Overclocking the NVIDIA TITAN X by 15.9% on the core and 12.4% on the memory speeds up the already breathtakingly fast card by another 12.9% without anti-aliasing and 13.4% with it. Returning to the first summary chart, it is easy to suppose that an original GeForce GTX 1080 Ti could be twice as fast as the reference GeForce GTX 980 Ti or GeForce GTX TITAN X. The comparison is not objective, of course: everyone knows the original GeForce GTX 980 Ti often overclocks to 1.45-1.50 GHz on the core, so the lead of a prospective GeForce GTX 1080 Ti would be lower. Even so, a 60-70% gain over the previous-generation flagship cannot fail to impress. Where do we see a comparable increase in CPUs or RAM? Nowhere, even in the top segment. Yet NVIDIA already has such capabilities!
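Composing the stock lead over the GeForce GTX 980 Ti with the overclocking gain gives a feel for the headroom such a hypothetical higher-clocked GP102 flagship would have (figures taken from the summary charts above):

```python
# Stock TITAN X lead over the GTX 980 Ti, compounded with the TITAN X OC gain.
stock_lead = {"no AA": 0.647, "with AA": 0.704}
oc_gain    = {"no AA": 0.129, "with AA": 0.134}

for mode in stock_lead:
    total = (1 + stock_lead[mode]) * (1 + oc_gain[mode]) - 1
    print(f"{mode}: +{total * 100:.0f}% vs reference GTX 980 Ti")  # +86%, +93%
```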

5. GPU computing

First we test the new NVIDIA TITAN X in CompuBench CL version 1.5.8. The first two tests are face recognition based on the Viola-Jones algorithm and computation of the TV-L1 Optical Flow motion vectors.



Once again NVIDIA TITAN X's performance impresses. At stock, the newcomer leads the reference GeForce GTX 980 Ti by 66.6% in the Face Detection test and by 90.4% in the TV-L1 Optical Flow benchmark. Its advantage over the GeForce GTX 1080 is also quite noticeable, and overclocking the new Titan adds a further 8.1-12.1%. Roughly the same gain from raised frequencies, however, was seen in the other two cards tested.

Next come the Ocean Surface Simulation test, which renders the motion of water-surface waves using a fast discrete Fourier transform, and the Particle Simulation test of particle physics.



A distinctive feature of this pair of tests is the closeness of the GeForce GTX 980 Ti and GeForce GTX 1080 results - the Maxwell core, it seems, does not intend to give up easily. Before the new TITAN X, though, both cards yield, losing by 42.6% to 54.4%.

The results are much closer together in the Video Composition test.



Here the overclocked Gigabyte GeForce GTX 1080 G1 Gaming even manages to catch the stock NVIDIA TITAN X, though the latter holds a twenty-percent lead over the GeForce GTX 980 Ti.

But in the Bitcoin mining simulation, we again see the tremendous advantage of NVIDIA TITAN X.



The new product is almost twice as fast as the GeForce GTX 980 Ti and 30.4% faster than the Gigabyte GeForce GTX 1080 G1 Gaming at reference GeForce GTX 1080 frequencies. At this rate of performance growth, video cards based on AMD GPUs will soon have very little to answer with.

Next comes the GPGPU test from the AIDA64 Extreme utility, version 5.75.3981 Beta. From the results we plotted charts for single- and double-precision floating-point operations.



Where the NVIDIA GeForce GTX TITAN X once led the first GeForce GTX TITAN by 62% in these tests, the new Pascal-based TITAN X outperforms its predecessor by a full 97.5%! Any other results of the AIDA64 GPGPU test can be found in the discussion thread of this article on our forum.

In conclusion, we test the most complex scene of the latest LuxMark 3.1, Hotel Lobby.



Note that the old GeForce GTX 980 Ti refuses to yield to the Gigabyte GeForce GTX 1080 G1 Gaming in this test, yet TITAN X beats it outright by 58.5%. Phenomenal performance! Still, it is a pity NVIDIA keeps delaying the GeForce GTX 1080 Ti, and especially a pity that no one is pressing it yet.

6. Power consumption

Power consumption was measured with the Corsair AX1500i power supply via the Corsair Link interface and the application of the same name, version 4.3.0.154. We measured the power draw of the system as a whole, excluding the monitor, in 2D mode (ordinary work in Microsoft Word plus web browsing) and in 3D mode, where the load consisted of four consecutive loops of the intro scene of the Swamp level in Crysis 3 at 2560 × 1440 with maximum graphics quality settings and MSAA 4X. CPU power-saving technologies were disabled.

Let's compare the power consumption of systems with the video cards tested today in the diagram.



Despite the huge increase in performance across the board, NVIDIA managed to keep the thermal package of the new Pascal-based TITAN X within the same limits as the previous TITAN X: 250 watts. Accordingly, the power consumption of systems with these cards does not differ significantly. At stock settings, the configuration with the NVIDIA TITAN X draws 41 watts more than with the NVIDIA GeForce GTX 980 Ti, and with both cards overclocked this difference shrinks to 23 watts. Note also that a system with the Gigabyte GeForce GTX 1080 G1 Gaming is more economical than either version of TITAN X, and at reference GeForce GTX 1080 frequencies it almost fits within a 400-watt limit, even with a decently overclocked eight-core processor in the configuration. The newcomer is also more economical in 2D mode.

Conclusion

Since NVIDIA's GeForce GTX 1080 and GTX 1070 currently hold sole performance leadership in the upper price segment, the release of an even more powerful TITAN X can be considered a pointed demonstration of the company's technological superiority over its only competitor. And the demonstration succeeded in full: within the same thermal envelope, the newcomer's advantage over NVIDIA's previous-generation flagship in gaming tests sometimes reaches 85%, averaging about 70%! No less impressive is the growth in compute performance, which, as we know, is paramount for the NVIDIA TITAN series.

The performance difference with the GeForce GTX 1080 is more modest, at 27-33%, but the gain from overclocking is higher for the TITAN X (about 13% versus 10% for the GeForce GTX 1080), which means that when a GeForce GTX 1080 Ti based on the same GP102 appears, we have the right to count on even higher frequencies and, as a result, higher performance. The negative point of the TITAN X announcement is the two-hundred-dollar increase in recommended price; in our view, though, a 20% price hike will not pose serious problems for the potential buyers of such cards. More modest gamers, meanwhile, await the GeForce GTX 1080 Ti, as well as its "red" competitor.

In addition, despite the stunning gaming performance, NVIDIA itself positions the TITAN X first of all as an effective tool for training neural networks and solving Deep Learning problems. These algorithms are now actively used in speech, image and video recognition, weather forecasting, more accurate medical diagnosis, high-precision mapping, robotics, self-driving cars and so on. So it can be said that the capabilities of the new NVIDIA TITAN X will satisfy practically any user.

We thank NVIDIA, and personally Irina Shekhovtsova, for the video card provided for testing.


In the summer of 2016, the public was presented with a new flagship graphics card from NVIDIA. The Nvidia Titan X gaming video card is single-chip, and its architecture is the manufacturer's own Pascal design (the GP102 GPU). At the time of its presentation, the Geforce GTX Titan X was rightfully considered the most powerful gaming video adapter.

GPU. The processor has 3584 CUDA cores with a base frequency of 1417 MHz and a boost clock of 1531 MHz.

Memory. The flagship shipped with 12 GB, though a version with half that amount was released later. The memory speed reaches 10 Gb/s per pin. The memory bus is 384 bits wide, which yields a bandwidth of 480 GB/s. GDDR5X memory chips are used, so even in a 6 GB configuration performance remains high.
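The quoted 480 GB/s follows directly from the bus width and the per-pin data rate; a one-line check in Python:

```python
# Peak memory bandwidth: (bus width in bits / 8 bits per byte) x per-pin rate.
def memory_bandwidth_gbs(bus_width_bits: int, rate_gbps_per_pin: float) -> float:
    return bus_width_bits / 8 * rate_gbps_per_pin

print(memory_bandwidth_gbs(384, 10.0))   # 480.0 GB/s, matching the spec above
```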

Other characteristics of the Titan X. The number of ALUs is 3584, there are 96 ROPs and 192 texture units. The card supports resolutions up to 7680x4320 and carries the new DisplayPort 1.4, HDMI 2.0b and DL-DVI outputs, with support for HDCP 2.2.

The video card uses a PCIe 3.0 slot (bus). For full power delivery, the power supply must offer one 8-pin and one 6-pin connector. The card occupies two slots in the case (SLI configurations of 2, 3 and 4 cards are possible).

The card is 4.376 inches tall and 10.5 inches long. A power supply rated at 600 W or more is recommended.

Video card overview

The manufacturer's main emphasis was on improving graphics for VR and on full DirectX 12 support. Gaming performance can be increased slightly by overclocking the GTX Titan X 12 GB.


Pascal technology is aimed at VR gaming. Thanks to the ultra-fast FinFET process, maximum smoothness is achieved when using a VR headset. The Geforce Titan X Pascal model is fully compatible with VRWorks, enabling a sense of complete immersion in the physics and sensations of the game.

Instead of copper heat pipes, a vapor chamber is used here. The maximum temperature is 94 degrees (per the manufacturer's website), although in tests the average temperature is 83-85 degrees. At that temperature the cooler's blower speeds up; if that is not enough, the clock frequency of the graphics chip is reduced. The blower noise is quite noticeable, so if this matters to the user, water cooling is the better option, and solutions for this model already exist.

Mining performance

The company has focused on gaming performance. Compared with its predecessor, the Geforce GTX Titan X 12 GB does not improve mining results, while consuming more power. Titan-series cards stand out for their FP32 and INT8 compute performance, which lets the series be viewed as professional-class accelerators. The GM200-based model, however, is an exception, as many tests show degraded performance in hash calculations and similar operations. Its cryptocurrency mining performance is only 37.45 Mhash/s.

We do not recommend the X model for mining cryptocurrencies. Even tuning the Nvidia Titan X for performance will not deliver the results of a Radeon Vega (taken in the same price bracket), let alone a Tesla.

A newer card from the same manufacturer delivers more than 2.5 times the rate: overclocked, the Titan V reaches 82.07 Mhash/s.

Test results in games

Compared with other cards, the Titan X Pascal is 20-25% faster than the next-fastest card from the same manufacturer, and it outperforms its competitor, the likewise single-chip Radeon R9 FuryX, by nearly a factor of two.

In all games at 4K and UltraHD the picture is smooth. Good results were also achieved in tests using SLI mode.

Comparison of video cards from different manufacturers

The price of a Titan X 12 Gb video card starts at $1200 and depends on the manufacturer and the amount of memory.

Here are the comparative characteristics of cards from different manufacturers (* means the value is the same for all three):

Product: Palit GeForce GTX TITAN X / MSI GeForce GTX TITAN X / ASUS GeForce GTX TITAN X

Primary features:
  • Video card type: gaming *
  • GPU name: NVIDIA GeForce GTX TITAN X *
  • Manufacturer code: NE5XTIX015KB-PG600F *
  • GPU codename: GM200 *
  • Process technology: 28 nm *
  • Supported monitors: four *
  • Maximum resolution: 5120x3200 *

Specifications:
  • GPU frequency: 1000 MHz *
  • Memory size: 12288 MB *
  • Memory type: GDDR5 *
  • Memory frequency: 7000 MHz / 7010 MHz / 7010 MHz
  • Memory bus width: 384-bit *
  • RAMDAC frequency: 400 MHz *
  • CrossFire/SLI support: possible *
  • Quad SLI support: possible *

Connectivity:
  • Connectors: HDMI, DisplayPort x3, HDCP support *
  • HDMI version: 2.0 *

Math block:
  • Universal processors: 3072 *
  • Shader model: 5.0 *
  • Texture units: 192 *
  • Rasterization units (ROPs): 96 *

Additional features:
  • Dimensions: 267x112 mm / 280x111 mm / 267x111 mm
  • Occupied slots: 2 *
  • Price: 74,300 rubles / 75,000 rubles / 75,400 rubles

The comparison table shows that the various manufacturers stick to the common specification; the differences are insignificant: slightly different memory frequencies and adapter dimensions.

This model is no longer on sale from any manufacturer. In January 2018, a successor was introduced that outperforms it several times over, both in gaming performance and in cryptocurrency mining.

Spring is not only the season of nature's awakening, but also the traditional time for announcing the flagship single-chip video card of the Titan line. And although the first demonstration of the NVIDIA GeForce GTX Titan X was unexpected, the official announcement was surrounded by rumors. The card was officially presented a couple of days ago, and we have had several days to study it in detail. Let's see what it has to offer.

The NVIDIA GeForce GTX Titan X became the fourth card in the line and the third single-GPU "titan"; the dual-GPU GeForce GTX Titan Z stands apart. Of course, we understand that such cards cannot be called "mass market", but even if you do not have the $1,000-1,300 to spare, this review may still be interesting as a look at the Maxwell architecture in its fullest implementation to date. As expected, the card is based on the GM200 core, the second generation of this architecture. It comes with PolyMorph Engine 3.0, support for Direct3D 12 feature levels and hardware-accelerated global illumination. Perhaps this will be the breakthrough in realism and graphics that the gaming industry has been waiting for so long?

The GM200 contains 8 billion transistors, 96 ROPs, 3072 CUDA cores and a 384-bit memory bus. All this arsenal is aimed at 4K resolution and improved 3D performance. The base core frequency is 1000 MHz, with a Boost Clock of 1076 MHz. The memory operates at an effective 7012 MHz. On board are 12 GB of graphics memory, a capacity unavailable to gamer-oriented cards before the Titan X's release.
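From these figures the usual paper rates can be derived. A back-of-the-envelope Python sketch (the 192 TMU count is taken from the specification table above):

```python
# Paper rates for GM200 from the figures quoted in this review.
boost_ghz = 1.076
rops, tmus = 96, 192
bus_bits, mem_mhz = 384, 7012               # effective memory clock

pixel_fill = rops * boost_ghz               # Gpixels/s
texture_fill = tmus * boost_ghz             # Gtexels/s
bandwidth = bus_bits / 8 * mem_mhz / 1000   # GB/s

print(f"{pixel_fill:.0f} Gpix/s, {texture_fill:.0f} Gtex/s, {bandwidth:.1f} GB/s")
# -> 103 Gpix/s, 207 Gtex/s, 336.6 GB/s
```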

Video review NVIDIA GeForce GTX Titan X

Appearance

The NVIDIA GeForce GTX Titan X did not revolutionize top-end graphics card design; the appearance has changed little. An air cooling system is used that externally repeats what we have seen before.

The changes are minor and limited to the color of the shroud: it is now almost completely black.

The card has no reinforcing plate on the back side of the printed circuit board. Recall that the reference GeForce GTX 980 had one.

The rear panel carries three DisplayPort outputs, HDMI and DVI. Three connectors can drive a combined desktop at once, but all five can be occupied at the same time. The same approach is used throughout the 900 series.

A luminous NVIDIA GeForce GTX logo sits on the side. Unfortunately, the spectacular images of a glowing fan are just photographs.

Cooling

The design of the installed cooling system repeats the one used in the GeForce GTX 780 Ti.

A vapor chamber is used, which has proven its worth in carrying large amounts of heat to the heatsink.

The cooler can be disassembled, so the heat spreader can be removed entirely, which comes in handy when installing a water cooling system.

Internals

The power subsystem has also carried over: despite mention of changes, inspection reveals the same capacitors and chokes. Even the PWM controller is an old acquaintance, the NCP4206.

But I will not dramatize: we could not reproduce the noise and squeal mentioned in the comments on a number of video cards, even under prolonged load.

The headroom for raising the power limit has also been preserved: on the NVIDIA GeForce GTX Titan X it can be increased by 25 W (TDP 250 W / 275 W).

12 GB of SK hynix memory chips clocked at 1750 MHz are soldered on board, 24 chips in total.

Testing NVIDIA GeForce GTX Titan X

The following test bench was used:

  • Case: Aerocool Strike-X Air
  • Motherboard: Biostar Hi-Fi Z87X 3D
  • CPU: Intel Core i5-4670K (Haswell)
  • CPU cooler: Deep Cool Ice Blade Pro v2.0
  • Video card: Inno3D iChill GeForce GTX 780Ti HerculeZ X3 Ultra
  • RAM: Corsair CMX16GX3M2A1600C11 DDR3-1600 16 GB Kit CL11
  • SSD: ADATA XPG SX900 256 GB
  • Hard disk 2: WD Red WD20EFRX
  • Power supply: Aerocool Templarius 750W
  • Wi-Fi adapter: TP-LINK TL-WDN4800
  • Audio: Creative Sound Blaster EVO Wireless
  • Monitor: iiyama ProLite E2773HDS
  • Monitor 2: Philips 242G5DJEB
  • Mouse: ROCCAT Kone XTD
  • Keyboard: Razer BlackWidow Chroma
  • Stabilizer: Sven AVR PRO LCD 10000
  • Operating system: Microsoft Windows 8 Ultimate 64-bit

In all the tables below, data is given at factory settings, with no proprietary manufacturer software installed. Memory and core frequencies are likewise untouched, to exclude the influence of extraneous factors.

1. Video card temperatures, idle/load (°C)

  • NVIDIA GeForce GTX Titan X - 31/83
  • Inno3D iChill GeForce GTX 960 Ultra - 29/44
  • GeForce GTX 980 - 34/79
  • GeForce GTX 770 - 35/80
  • GeForce GTX 780 - 35/77
  • GeForce GTX 760 - 35/84

2. Noise, idle/load

  • NVIDIA GeForce GTX Titan X - 36/42
  • GeForce GTX 980 - 34/79

3. Power consumption, total system (W)

  • NVIDIA GeForce GTX Titan X - 405
  • Inno3D iChill GeForce GTX 960 Ultra - 260
  • GeForce GTX 980 - 295
  • Inno3D iChill GeForce GTX 780Ti HerculeZ X3 Ultra - 340
Traditionally, we start evaluating performance with synthetic tests.

  • NVIDIA GeForce GTX Titan X - 7133
  • Inno3D iChill GeForce GTX 960 Ultra - 3522
  • GeForce GTX 980 - 6050
  • Inno3D iChill GeForce GTX 780Ti HerculeZ X3 Ultra - 6190
The rest of the test suite:

Now it is time for the most spectacular tests: FPS measurements in resource-hungry games. A number of the tables are accompanied by videos recorded during gameplay. Data is captured in Full HD at Ultra settings. Bear in mind that at some points in the videos the real FPS is lower than in the test runs; this is the cost of recording the video. For this card we separately tested operation at 3840x2160 on Ultra settings.
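As a reminder of what these FPS figures are: averages over per-frame render times, which is also why simultaneous video recording (extra per-frame work) lowers the on-screen number. The arithmetic, with made-up frame times:

```python
# Average FPS from a log of frame times in milliseconds (hypothetical data).
frame_times_ms = [16.5, 17.1, 18.0, 16.2, 40.3, 16.8]   # one stutter spike

avg_fps = 1000 * len(frame_times_ms) / sum(frame_times_ms)
worst_fps = 1000 / max(frame_times_ms)    # the "worst frame" view

print(f"average: {avg_fps:.1f} FPS, worst frame: {worst_fps:.1f} FPS")
```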

6. Crysis 3
Crysis 3 - 3840x2160 - Very High, 4x AA - 22

  • Inno3D iChill GeForce GTX 960 Ultra - 45
  • GeForce GTX 980 - 69
  • Inno3D iChill GeForce GTX 780Ti HerculeZ X3 Ultra - 61
  • GeForce GTX 770 - 43
  • GeForce GTX 780 - 47

7. Battlefield 4
Battlefield 4 - 3840x2160 - Ultra - 39

  • NVIDIA GeForce GTX Titan X - 75
  • Inno3D iChill GeForce GTX 960 Ultra - 52
  • GeForce GTX 980 - 91
  • Inno3D iChill GeForce GTX 780Ti HerculeZ X3 Ultra - 82

8. Hitman: Absolution
A very demanding game based on the Glacier 2 engine; its appetite is the envy of the year's other new releases.
Hitman: Absolution - 3840x2160 - High, 2x MSAA, 16x AF - 46

  • NVIDIA GeForce GTX Titan X - 95
  • Inno3D iChill GeForce GTX 960 Ultra - 44
  • GeForce GTX 980 - 70
  • Inno3D iChill GeForce GTX 780Ti HerculeZ X3 Ultra - 62
  • GeForce GTX 770 - 43
  • GeForce GTX 780 - 55
  • GeForce GTX 760 - 41

9. Metro Last Light
Another hardware-demanding game that uses DirectX 11 and tessellation.
Metro Last Light - 3840x2160 - Very high - 35

10. Middle Earth: Shadow of Mordor

  • Inno3D iChill GeForce GTX 960 Ultra - 51
  • GeForce GTX 980 - 111
Middle Earth: Shadow of Mordor - 3840x2160 - Ultra - 49

11. Tomb Raider

  • NVIDIA GeForce GTX Titan X - 156
  • Palit GeForce GTX 960 Super JetStream - 64
  • Inno3D iChill GeForce GTX 960 Ultra - 68
  • GeForce GTX 980 - 100
Tomb Raider - 3840x2160 - Ultra - 49

12. Watch Dogs Ultra 4x AA

  • NVIDIA GeForce GTX Titan X - 80
  • Inno3D iChill GeForce GTX 960 Ultra - 49
  • GeForce GTX 980 - 62
Watch Dogs - 3840x2160 - Ultra - 27

13. Total War: Rome II Extreme

  • NVIDIA GeForce GTX Titan X - 79
  • Inno3D iChill GeForce GTX 960 Ultra - 41
  • GeForce GTX 980 - 70
Total War: Rome II - 3840x2160 - Ultra - 30

14. GRID Autosport Ultra 4x MSAA

  • NVIDIA GeForce GTX Titan X - 154
  • Inno3D iChill GeForce GTX 960 Ultra - 80
  • GeForce GTX 980 - 128
GRID Autosport - 3840x2160 - Ultra - 69

15. World of Tanks

  • NVIDIA GeForce GTX Titan X - 124
  • Palit GeForce GTX 960 Super JetStream - 71
  • Inno3D iChill GeForce GTX 960 Ultra - 75
  • GeForce GTX 980 - 116

16. World of Warships

This is a new section of our tests; the number of cards covered is still limited, and comprehensive material will be ready by the end of March. World of Warships is difficult to evaluate in terms of graphics efficiency, but in general this data can be useful when building a system specifically for Wargaming titles.

  • NVIDIA GeForce GTX Titan X - 72
  • Inno3D iChill GeForce GTX 780Ti - 72
  • Palit GeForce GTX 960 Super JetStream - 59
  • Radeon R9 280x - 70
At the moment, the game engine limits the maximum FPS mark at around 72.

Overclocking

By tradition, we do not limit ourselves to testing at stock frequencies. For overclocking, the latest version of MSI Afterburner available at the time of testing was used. On the NVIDIA GeForce GTX Titan X we achieved the following results without raising the core voltage:

To compare the performance gains, the synthetic 3DMark Fire Strike test is used:

There is potential for further overclocking with the voltage raised to its maximum. The core frequency can be pushed to 1202 MHz and the memory to 7806 MHz; the maximum temperature then rises to 88 degrees.
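When probing limits like this, it is convenient to log clocks, temperature and power draw while the stress test runs. A small sketch that polls nvidia-smi, the standard utility shipped with the NVIDIA driver (the query fields are documented ones, but treat the exact output formatting as an assumption):

```python
import subprocess
import time

# Poll GPU clocks, temperature and power once a second via nvidia-smi.
QUERY = "clocks.sm,clocks.mem,temperature.gpu,power.draw"

for _ in range(10):
    out = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())   # e.g. "1202 MHz, 3903 MHz, 88, 250.00 W"
    time.sleep(1)
```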

NVIDIA GeForce GTX Titan X Results

The NVIDIA GeForce GTX Titan X delivered a performance increase against a backdrop of lower power consumption. In the current balance of power, this is the maximum performance available from a single GPU. No adequate response from AMD Radeon has been announced yet. As alternatives one can consider the GTX 780 Ti, a pair of GTX 980s in SLI, and the still-relevant Radeon R9 290X. The card will also be interesting for video rendering.

The NVIDIA GeForce GTX Titan X earns a well-deserved Gold award.

NVIDIA rarely deviates from its long-standing traditions, and so in spring 2015, by established custom, the "greens" presented a new single-chip flagship, the GeForce GTX TITAN X. As of early summer 2015, this is the most powerful single-GPU video card in the world.

The new graphics adapter is the fourth in the Titan lineup and logically replaces its predecessor. At the heart of the newcomer lies the GM200 graphics core, built on the second generation of the Maxwell microarchitecture. In basic specs, the GM200 is the GM204 core behind the recent single-chip flagship "expanded" by half: the number of CUDA cores, ROPs and TMUs, as well as the cache size, have all been increased 1.5 times. Let's take a closer look at the characteristics of the two cards.

The new flagship's power consumption turned out noticeably higher than the GTX 980's. Naturally, this comes with the TITAN X's higher performance, which can reach 30% over the 980. The manufacturer recommends a power supply of at least 600 watts for the system.

Attention should also be paid to the newcomer's cooling system: the GeForce GTX TITAN X will officially ship exclusively with a reference cooler, which should provide high performance at a low noise level.

Games/settings, 1920x1080*:
  • The Witcher 3: Wild Hunt - 79 fps
  • GTA V - 66 fps
  • Battlefield Hardline - 126 fps
  • Metro: Last Light - 67 fps
  • Crysis 3 - 65 fps

*Highest possible graphics quality

The new product naturally supports all current NVIDIA technologies: SLI®, G-Sync™, GameStream™, ShadowPlay™, GPU Boost™ 2.0, Dynamic Super Resolution, MFAA, GameWorks™ and OpenGL 4.5. The Microsoft DirectX 12 API is also supported, with a subsequent upgrade to feature level 12.1.

At the start of sales, the manufacturer announced a price of $999 for the model in question, the same as the "black Titan" before it. But considering how enormously the new card's performance has grown over its predecessor, NVIDIA has once again made a big and timely step forward.