Test – Asus ROG Strix RTX 2080 Ti | Specs | Price

Good morning all!

Nvidia's new RTX cards came out a few days ago, but since our review sample arrived very late in the writing process, it was wiser to delay the test rather than rush it. Today we are therefore bringing you the review of the very latest Asus card built around Nvidia's RTX 2080 Ti GPU: the ROG Strix in its OC version (the most powerful in the range)!
RTX marks a major change at Nvidia, with an architecture designed to take advantage of upcoming technologies such as real-time ray tracing and DLSS, which are not supported in hardware by the current GTX cards. We will therefore walk you through these new technologies, in addition of course to the custom benchmarks and the Asus card itself!

Presentation

Turing Architecture

Nvidia presents its new GPUs as the biggest step forward of any generation "for more than 10 years". Before going any further, note that Turing currently comes as three GPUs: the TU102, TU104 and TU106. The RTX 2080 Ti uses the TU102, albeit in a somewhat cut-down version. A full TU102 has up to 72 RT cores, 4608 CUDA cores, 576 Tensor cores and 12 GB of memory, while the 2080 Ti gets 68, 4352, 544 and 11 GB respectively; enough to expect a Titan in the near future? 😉
The GPU used for the 2080 Ti packs no fewer than 18.9 billion transistors, which makes it an extremely large chip even though it is manufactured on a 12 nm process. Some even call it a feat to fit so much onto a single die.

Turing TU102, the GPU used for the RTX 2080 Ti

The Turing architecture has a new design that borrows heavily from what was introduced with the Volta GV100 (a GPU dedicated to compute): each TPC (Texture Processor Cluster) contains two SMs (Streaming Multiprocessors), and each SM holds a total of 64 FP32/INT32 cores. For comparison, Pascal GP10x GPUs have only one SM per TPC.
Each SM is also equipped with eight Tensor cores (used for deep learning) and one RT (ray tracing) core; these are the units that handle DLSS and real-time ray tracing respectively, the major advances in terms of visual rendering.
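
To tie these per-SM figures back to the totals quoted earlier, here is a minimal sketch; the numbers are the ones given above, and the code itself is just arithmetic, not anything Nvidia provides.

```cpp
#include <cstdio>

int main() {
    // Per-SM resources on Turing, as described above.
    const int fp32_per_sm   = 64;  // FP32/INT32 CUDA cores per SM
    const int tensor_per_sm = 8;   // Tensor cores per SM
    const int rt_per_sm     = 1;   // RT core per SM

    const int sm_full_tu102 = 72;  // full TU102
    const int sm_rtx2080ti  = 68;  // cut-down TU102 used by the RTX 2080 Ti

    printf("Full TU102 : %d CUDA, %d Tensor, %d RT cores\n",
           sm_full_tu102 * fp32_per_sm, sm_full_tu102 * tensor_per_sm, sm_full_tu102 * rt_per_sm);
    printf("RTX 2080 Ti: %d CUDA, %d Tensor, %d RT cores\n",
           sm_rtx2080ti * fp32_per_sm, sm_rtx2080ti * tensor_per_sm, sm_rtx2080ti * rt_per_sm);
    return 0;
}
```

This prints 4608/576/72 for the full chip and 4352/544/68 for the 2080 Ti, matching the spec sheet above.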


The new RTX cards also use GDDR6 memory, which allows a much higher throughput than the GDDR5X it replaces, going from 11 Gbps per pin on the old GTX 1080 Ti to 14 Gbps on the new RTX 2080 Ti, which is far from negligible. On top of that, power consumption is also reduced, which is welcome even if memory is not the most power-hungry component on a graphics card.
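
To see what those per-pin rates mean in practice, here is a small sketch that converts them into total memory bandwidth, assuming the 352-bit memory bus that both the GTX 1080 Ti and the RTX 2080 Ti use.

```cpp
#include <cstdio>

// Total bandwidth (GB/s) = per-pin rate (Gbps) * bus width (bits) / 8 bits per byte.
double bandwidth_gbs(double gbps_per_pin, int bus_width_bits) {
    return gbps_per_pin * bus_width_bits / 8.0;
}

int main() {
    const int bus_width = 352;  // bits, same on both cards
    printf("GTX 1080 Ti (GDDR5X, 11 Gbps): %.0f GB/s\n", bandwidth_gbs(11.0, bus_width));
    printf("RTX 2080 Ti (GDDR6,  14 Gbps): %.0f GB/s\n", bandwidth_gbs(14.0, bus_width));
    return 0;  // ~484 GB/s vs ~616 GB/s, roughly a 27% jump
}
```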

Deep Learning Super-Sampling (DLSS)

Tensor cores are compute units specialized in matrix operations, which are the basic building blocks of deep learning. They are what run DLSS (Deep Learning Super Sampling). DLSS leverages a deep neural network to extract features from the rendered scene and intelligently combine details from multiple frames to build a high-quality final image. This allows Turing GPUs to render with half the samples and let the network fill in the missing information to produce the final image. The result is a crisp, clean image with quality similar to traditional rendering (which typically relies on temporal anti-aliasing, or TAA, in most of today's latest games), but with superior performance.

Pascal vs Turing, a drastic increase in the number of Tensor cores

For each game, the neural network first has to be trained on NVIDIA's supercomputers. Once trained, the resulting model (used for inference) is distributed to the player via GeForce Experience or, in the future, via the drivers. The Tensor cores of the graphics card then run this neural network to render detailed images while outperforming TAA. Since DLSS runs on the Tensor cores rather than on the CUDA cores that TAA uses, the displayed FPS is higher than with the old method while the visual quality is superior, which is a real plus! For comparison, DLSS is said to approach the quality of a game running with 32x supersampling, something you can immediately forget about on a "classic" GPU.

TAA
DLSS

As you can see above, all the small details are much sharper with DLSS enabled, with no performance loss. For the moment, 25 games have been announced as DLSS-compatible, which is a good start, but we hope the number will grow drastically, otherwise it will be hard to justify changing cards 😉 Note that Nvidia offers developers the training computation for free on its own servers, which is always good to know.


Real-time Ray Tracing

NVIDIA GeForce RTX 2080 Ti on Battlefield V in video, with Real-Time Ray Tracing!

Real-time ray tracing is the big deal and the topic you see discussed on forums all over the world. It aims to make lighting effects in a scene as realistic as possible, instead of relying on tricks to fake reflections and the like.
Ray tracing has been used in the past, but only to render pre-calculated scenes or still images; it was never possible to run a game with real-time ray tracing.
Until now, games have used rasterization (see Wikipedia) to display lighting effects. While the rendering can be very good, it imposes certain limitations, and artifacts can appear on top of the aliasing, which is neither ideal nor particularly realistic.

Ray tracing in action in a Star Wars demo

Nvidia started working on ray tracing back in the days of the GeForce 8800, which is to say it goes back a while (10 years!). Fully ray tracing a game in real time would require far more computing power than any graphics card can deliver today. The firm therefore opted for a "hybrid" approach that mixes rasterization and ray tracing, using the former whenever the latter is not needed. Ray tracing is thus reserved for reflections, refractions and shadows, as sketched below. This combines the best of both worlds while staying less "greedy".
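
Purely as an illustration of that split (this is not an actual engine or Nvidia API, and every name here is a placeholder), a hybrid renderer's routing decision could be sketched like this:

```cpp
#include <cstdio>

// Placeholder effect categories for the sketch.
enum class Effect { PrimaryVisibility, Reflections, Refractions, Shadows };

// Hypothetical policy matching the article: rasterize primary visibility,
// hand reflections, refractions and shadows to the RT cores.
bool useRayTracing(Effect e) {
    switch (e) {
        case Effect::Reflections:
        case Effect::Refractions:
        case Effect::Shadows:
            return true;   // secondary rays go to the RT cores
        default:
            return false;  // everything else stays on the raster pipeline
    }
}

int main() {
    struct { const char* name; Effect effect; } passes[] = {
        {"primary visibility", Effect::PrimaryVisibility},
        {"reflections",        Effect::Reflections},
        {"refractions",        Effect::Refractions},
        {"shadows",            Effect::Shadows},
    };
    for (const auto& p : passes)
        printf("%-18s -> %s\n", p.name,
               useRayTracing(p.effect) ? "ray traced (RT cores)" : "rasterized");
    return 0;
}
```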


Cards based on the Turing architecture are therefore equipped with RT cores, one per Streaming Multiprocessor, i.e. up to 72 on a full TU102 (68 on the RTX 2080 Ti). The RT cores handle everything related to ray tracing, whether through Nvidia's OptiX library, Microsoft's DirectX Raytracing (DXR) API or the Vulkan API. It should therefore be understood that ray tracing is not reserved for Nvidia cards: if tomorrow AMD offers us a new graphics card, it will be able to fully exploit this technology as well.
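
Since DXR is a standard Direct3D 12 feature rather than something Nvidia-specific, any application can simply ask the driver whether hardware ray tracing is available. A minimal sketch of that capability check (Windows only, linked against d3d12.lib) could look like this:

```cpp
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    // Create a D3D12 device on the default adapter.
    ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device)))) {
        printf("No D3D12 device available\n");
        return 1;
    }

    // Query OPTIONS5, which reports the supported DXR ray tracing tier.
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5)))
        && opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0) {
        printf("Hardware DXR ray tracing is supported\n");
    } else {
        printf("No hardware DXR support on this device/driver\n");
    }
    return 0;
}
```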

Ray tracing on the Pascal architecture
Ray tracing on the Turing architecture

Without RT cores, ray-traced rendering has to be done in software; in other words, you can forget about it if you expect the game to look like anything other than a slideshow. As you can see just above, when the Pascal architecture has to do ray tracing, all the calculations are performed in software. Turing, on the other hand, does the heavy lifting in hardware via its RT cores, and the performance is simply not comparable. By way of comparison, an RTX 2080 Ti is roughly ten times faster at these tasks than the "old" GTX 1080 Ti.

