Top 12 Best Video Cards Review: 33 Games Tested (Spring-Summer 2004 Buyer's Guide)

Top 12 Best Video Cards Review

The first half of 2004 has become a notable milestone in the history of consumer 3D graphics: both graphics giants, NVIDIA Corporation and ATI Technologies, presented a new generation of graphics processors to consumers. The emergence of a new architecture is always an Event with a capital E: long before it happens, various contradictory rumors circulate among computer enthusiasts, periodically fed by information from the companies themselves, from “sources” close to them, or from the media.

The beginning of 2004 was no exception: the characteristics of future solutions from NVIDIA and ATI were actively discussed in the press, but no one had reliable information. It was only known that both solutions would use economical high-frequency GDDR3 memory; in addition, it was reliably known that the NVIDIA NV40 would have 16 pixel pipelines, while the exact number of pipelines in the ATI R420 was unknown – figures of 8, 12 and 16 were mentioned. There was no information on clock frequencies either; numbers in the range from 400 to 600 MHz were named.

Reality, as always, put everything in its place. So, let’s see what GPU developers were able to achieve by releasing new VPUs, and how much their achievements may interest those who are planning to purchase a new video card.

NVIDIA GeForce 6800 and ATI RADEON X800: the new arrivals of the season

NVIDIA GeForce 6800

The first blow this year in the eternal war between ATI Technologies and NVIDIA Corporation was struck by NVIDIA, which announced its next-generation graphics processor, codenamed NV40, along with a new line of cards called the GeForce 6800. The FX suffix has apparently been dropped to make it clear to everyone that this is a truly new generation, one that has little in common with the far-from-successful NV3x architecture and the GeForce FX line.

The aforementioned epoch-making event took place on April 14, 2004, ushering in the era of a new generation of GPUs. The novelty turned out to be a true monster: the die of the new VPU consisted of 220 million transistors, and its dimensions were striking. As for the internals, while based on the NV3x architecture, they were reworked almost beyond recognition. Unfortunately, this complexity did not allow for high frequencies; the maximum NVIDIA managed to achieve with an acceptable yield of working dies was 400 MHz.

When working on the GeForce 6 series, NVIDIA addressed all the shortcomings of the previous NV3x architecture and introduced a number of new technologies, both to increase performance and to improve image quality. In particular, support for HDR formats was expanded. However, the main innovation in the GeForce 6 is full support for Shader Model 3.0: this VPU was the first GPU in the world to support the new shader standard. You can get detailed information about the NV40 in the theoretical part of our review devoted to the NVIDIA GeForce 6800 Ultra and the NV40 architecture.

All this had to be paid for with a relatively high level of power consumption: the top model in the family was equipped with two power connectors, something previously seen only on the XGI Volari Duo V8 Ultra. Moreover, NVIDIA’s recommendations on choosing a power supply were staggering: the company recommended a 480 (!) watt power supply for use with the GeForce 6800 Ultra. As it turned out, however, this requirement was somewhat excessive. The point is that powerful and expensive power supplies are, as a rule, assembled from high-quality components and lack the cost-cutting “simplifications” of cheaper units. Consequently, they provide more stable supply voltages, which is important for cards of the GeForce 6800 Ultra class.

The new family of NVIDIA cards, as we have already noted, is called the GeForce 6800. It includes the following cards and their main characteristics:

GeForce 6800 Ultra : 16 pipelines, 400/1100 MHz, two power connectors, two DVI-I outputs;
GeForce 6800 GT : 16 pipelines, 350/1000 MHz, one power connector, one DVI-I output;
GeForce 6800 : 12 pipelines, 325/700 MHz, one power connector, one DVI-I output.

All cards received a full-fledged 256-bit memory bus. Thus, NVIDIA’s high-end product line is complete. What explains the appearance of the GeForce 6800 GT, which differs from the GeForce 6800 Ultra only slightly in clock speeds? The point, apparently, is that the NV40 turned out to be so complex that the yield of dies capable of operating at 400 MHz was not very high, so the company decided to use the remaining chips for the GeForce 6800 GT. This version compares favorably with the older model thanks to its more compact single-slot cooling system and its single power connector.

The very first tests of the GeForce 6800 Ultra showed excellent performance; not a trace remained of the slowness typical of GeForce FX-based cards when executing pixel shaders! Where ATI Technologies had always confidently held the championship, NVIDIA began to reign overnight with its new brainchild.

But NVIDIA’s blow could not remain unanswered: on May 4, “X-hour” came for ATI Technologies, and a retaliatory strike followed.

ATI RADEON X800

The “weapon of retaliation” for the Canadian company is the R420 GPU. Interestingly, this chip was initially supposed to enter the market in a twelve-pipeline version, but at the very last moment the company’s management decided that extra performance would not hurt, and four additional pipelines were activated. The R420 turned out to be more compact and, most importantly, cooler and less power-hungry than the NV40. In essence, this VPU is an evolutionary development of the R3xx architecture and, unlike the NV40, did not receive support for Shader Model 3.0. This point deserves more detail: unlike the GeForce 6800, which became the physical embodiment of new technologies and capabilities, the RADEON X800 looks less interesting from an architectural point of view. The RADEON X800 has 16 pixel pipelines, although it would be more accurate to say that it has four quad pipelines: both ATI and NVIDIA organize their pipelines so that each quad pipeline processes a group of four pixels simultaneously.
The number of vertex processors was also increased from four to six, and the processors themselves were slightly improved.
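The quad organization described above can be illustrated with a short sketch (the function name and the tile size are ours, purely for illustration): a screen tile decomposes into 2×2 pixel blocks, and a 16-pipeline VPU organized as four quad pipelines retires up to four such quads, i.e. 16 pixels, per clock:

```python
def quads(width: int, height: int):
    """Yield 2x2 pixel blocks (quads) covering a width x height tile.

    A 16-pipeline VPU organized as 4 quad pipelines can retire up to
    4 such quads (16 pixels) per clock.
    """
    for y in range(0, height, 2):
        for x in range(0, width, 2):
            yield [(x, y), (x + 1, y), (x, y + 1), (x + 1, y + 1)]

tile = list(quads(8, 8))
print(len(tile))             # 16 quads cover the 64-pixel 8x8 tile
print(len(tile) * 4 // 16)   # clocks needed by a 16-pipe (4-quad) VPU: 4
```

This also hints at why pipelines come in quad granularity: the 2×2 footprint is what lets the hardware compute the texture LOD from neighboring pixel derivatives.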

We should also note the introduction of a new version of HyperZ technology, HyperZ HD, whose purpose, however, remains the same – to optimize the interaction of the graphics processor with the memory subsystem.
Of the genuinely new technologies, only 3Dc, a new compression algorithm for normal maps, deserves mention; it allows a significant increase in the detail of objects without spending too many resources or losing image quality. In addition, so-called Temporal Anti-Aliasing was offered to users, which further improves anti-aliasing quality wherever there is enough speed for it.

Initially, the line of new cards based on the R420 was planned to consist of three models: RADEON X800 XT Platinum Edition, RADEON X800 PRO and RADEON X800 SE. However, the last of these, an eight-pipeline version with a 128-bit memory bus, will not appear before autumn, if at all; over the summer, ATI will offer the good old RADEON 9800 in its price segment. The fact is that, according to some reports, ATI’s yield of R420 dies turned out to be very high, and the company did not want to spend full-fledged dies on a cut-down version of the RADEON X800. Thus, for now ATI has only two new models in its assortment.

But the most important development was the introduction of a new process technology combining 0.13-micron production standards with new dielectric materials, so-called low-k dielectrics. The combination of these two elements helped ATI reach frequencies over 500 MHz, beating NVIDIA where the latter had always led – in clock frequencies. Moreover, all of the above, as well as the relative simplicity of the R420 (about 160 million transistors versus 220 million in the NV40), made it possible to keep heat dissipation at the same level as the RADEON 9800 XT and to make do with simple single-slot cooling systems. In addition, the Canadian company finally decided to use high-speed memory operating at frequencies of about 1 GHz in its new products, something the old RADEONs sometimes lacked. The two new cards are:

RADEON X800 XT Platinum Edition : 16 pipelines, 520/1120 MHz;
RADEON X800 PRO : 12 pipelines, 475/900 MHz;

Both cards use the same PCB design and are equipped with one DVI-I connector, one D-Sub connector and one Molex-type power connector. The memory bus has a 256-bit organization.

Our testing, the detailed results of which can be found in the corresponding review, showed that despite an architecture that can hardly be called “revolutionary”, R420-based cards deliver excellent results in all modern games, often outpacing members of the GeForce 6800 family. Once again, simplicity and efficiency won out over complexity and versatility. This proved especially true in the so-called “heavy modes”, that is, those where full-screen anti-aliasing and anisotropic filtering are used simultaneously. Then again, ATI Technologies’ positions were quite strong here before as well.

In general, parity has been restored in the 3D graphics market; only the weapons in the struggle between the two graphics giants have become far more advanced and powerful. It is still unknown who will be the final winner in this eternal war. ATI has high frequencies, power efficiency and a traditionally effective architecture that performs superbly in modern games, while NVIDIA boasts the most complex and versatile VPU to date, supporting next-generation pixel and vertex shaders and a number of unique technologies. In the future, with the release of the next version of DirectX and the advent of games using Shader Model 3.0, the balance of power may change.

2004: other developers

Other developers of desktop graphics solutions have shown themselves to be more than sluggish this year. Only S3 Graphics has released another series of announcements about its rather interesting DeltaChrome GPU, which belongs to the lower end of the middle class. Real cards based on this VPU finally began to appear on store shelves in Japan, although this took more than six months. They soon reached the European market as well, with Club 3D becoming their main supplier in the region. It is unlikely that S3 Graphics will be able to claim a serious piece of the graphics market, but at least things have gotten off the ground.
In other respects, the company follows in the footsteps of the giants: it is developing a shader compiler designed to improve the efficiency of shader execution on its processors, and is also preparing to release a new, more efficient GammaChrome processor with support for the new PCI Express bus. As for XGI, the situation looks much gloomier: the company’s aggressive attempt to storm the graphics market has completely failed. Just read our Club3D Volari Duo V8 Ultra review to see why. Awful performance, blatant reductions in image quality for the sake of speed, numerous flaws in the software – all this buried the XGI Volari line before it had really been born. However, the company is not giving up and continues to develop new GPUs. In particular, the release of a VPU with Shader Model 3.0 support has been promised; in addition, at the Computex Taipei 2004 exhibition a working prototype of a video adapter with a PCI Express bus was demonstrated, although it was still fitted with a special AGP-to-PCI Express bridge.

In truth, we would not rate XGI’s chances of success too highly: releasing a successful graphics processor requires not only agility and aggressiveness but, above all, a competitive architecture, which at the moment is not to be found in XGI’s arsenal. Reworking Volari, even radically, will hardly solve all the problems, and developing a new, successful architecture will take a great deal of time that XGI does not have, because rivals such as ATI Technologies, NVIDIA Corporation and even S3 Graphics are not sleeping. The desktop graphics market is a very tasty pie, and it seems its division has already taken place; new contenders in this field have practically nothing left to claim. Of course, one can try to pick up the crumbs left over from the giants’ feast, but that activity is extremely resource-intensive, frustrating and unprofitable.

PCI Express is gaining momentum

Our review would be incomplete without mentioning the products that support the new data transfer standard actively promoted by Intel. PCI Express is the next generation bus designed to replace PCI and AGP interfaces.

PCI Express has a number of advantages over its predecessors, AGP and PCI; in particular, these include a point-to-point topology, bidirectional transfers and high bandwidth. Even in its smallest variant, PCI Express x1, the bus offers nearly double the bandwidth of PCI (250 versus 133 MB/s), and in each direction at that (for a total of 500 MB/s). As for the PCI Express x16 slot used for graphics cards, its bandwidth is already 4 GB/s (8 GB/s total), while AGP 8x provides only 2.1 GB/s when transferring data from the chipset to the VPU and about 200 MB/s in the opposite direction.
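The quoted figures follow directly from the per-lane rate; a quick sketch (assuming the 250 MB/s per lane, per direction rate of the original PCI Express standard) shows how the x1 and x16 numbers are derived:

```python
# PCI Express 1.0: 250 MB/s per lane, per direction (after 8b/10b encoding).
PCIE_LANE_MB_S = 250

def pcie_bandwidth(lanes: int) -> tuple[int, int]:
    """Return (one-way, total bidirectional) bandwidth in MB/s for a link."""
    one_way = lanes * PCIE_LANE_MB_S
    return one_way, one_way * 2

for lanes in (1, 16):
    one_way, total = pcie_bandwidth(lanes)
    print(f"PCI Express x{lanes}: {one_way} MB/s each way, {total} MB/s total")
# PCI Express x1: 250 MB/s each way, 500 MB/s total
# PCI Express x16: 4000 MB/s each way, 8000 MB/s total
```

Linear scaling with lane count is exactly what makes the standard attractive: the same serial link serves everything from x1 peripheral slots to the x16 graphics slot.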

At the moment, the corresponding platforms with PCI Express x1 and x16 support have already been announced by Intel, but the development of graphics cards capable of working with the new bus began much earlier. As you already know, it is PCI Express x16 that will replace AGP 8x in the systems of the future; therefore, it is this slot, and the approaches graphics vendors have taken to implementing it, that we discuss next.

ATI Technologies and NVIDIA Corporation approached the creation of PCI Express-compatible solutions in diametrically opposite ways: the former integrated support for the new bus directly into the VPU die, while the latter designed a special HSI chip for the purpose, acting as an AGP-to-PCI Express bridge. This chip deserves a closer look: it allowed NVIDIA not to waste time developing new versions of its VPUs but to use existing solutions, equipping them with the bridge. The company did just that, announcing a new line under the general name GeForce PCX. It includes the following products:

NVIDIA GeForce PCX 5950 : GeForce FX 5950 Ultra with PCI Express bridge;
NVIDIA GeForce PCX 5750 : GeForce FX 5700 with PCI Express bridge;
NVIDIA GeForce PCX 5300 : GeForce FX 5200 with PCI Express bridge;
NVIDIA GeForce PCX 4300 : GeForce4 MX with PCI Express bridge.

A certain number of GeForce PCX 5900 cards were also released, which are a hybrid of the GeForce 5900 and the HSI bridge.

Along with its advantages, this solution has its drawbacks: the bridge chip does not allow the PCI Express bus to be fully utilized. In addition, the bridge generates considerable heat and requires a passive heatsink, which also counts against NVIDIA’s approach. In general, the design looks rather awkward; however, the company took its shortcomings into account, and in the NV45, the VPU on which PCI Express versions of the GeForce 6800 will be based, the HSI bridge was moved onto the chip substrate. This can also be called a half-measure, but at least such a dual-die package looks neater and more reliable than two separate chips. So the NV45, contrary to many expectations, turned out to be nothing more than a combination of the NV40 and the HSI bridge.

ATI Technologies proved more technologically advanced than NVIDIA in this case: all of its PCI Express solutions support the bus natively, without requiring any additional chips. Today the company’s product range includes several solutions with PCI Express support: the R423, RV380 and RV370 chips. The first is based on the RADEON X800 architecture, the second on the RADEON 9600 XT, while the third, although it uses the RADEON 9600 architecture, is manufactured on a completely new 0.11-micron process.

However, not everything is so rosy for ATI: not long ago NVIDIA questioned whether the Canadian developer’s products really have “native” PCI Express support. Comparative images of the RV380 and RV360 cores were published on the Web, with the suggestion that PCI Express support in the former is not as “native” as commonly believed – that ATI had simply managed to integrate a PCIE-to-AGP bridge into the die without resorting to external chips. Perhaps that is the case, although most likely the part of the die responsible for AGP support was simply replaced by circuitry capable of working with the PCI Express x16 bus. Evidence for this can be found in our testing of bus bandwidth on video adapters with the PCI Express interface. Although the results shown by the RADEON X600 fall short of the theoretical maximum, they are clearly superior to those of the GeForce PCX 5900, which indicates a more correct implementation of PCI Express support in ATI’s products.

This situation is somewhat reminiscent of Serial ATA-150: a number of manufacturers used bridge chips in their products, and only Seagate equipped its hard drives with native support for what was then a new interface. Even so, drives that fully realize the capabilities of Serial ATA are only now beginning to appear. It will be the same with the PCI Express x16 bus: products with full support for all its capabilities will not appear immediately. In addition, appropriate driver support is needed, which has not yet materialized. PCI Express will probably be able to show its full potential only with the release of the new Windows-family operating system, Longhorn. So, over the past months of 2004, the following important events in desktop graphics stand out:

NVIDIA NV40 next generation GPU announced.
Announcement of the next generation ATI R420 GPU.
The beginning of the process of transferring video adapters to the new PCI Express x16 bus.
The appearance of the first solutions with “native” support for the new bus.

Optimization Wars: Does Image Quality Suffer?

Not long ago, we published an article on the use of various optimizations by the leading GPU manufacturers. In short, in order to increase the performance of their products, NVIDIA and ATI Technologies employ a number of software tweaks – for example, simplifying trilinear and anisotropic filtering, reducing the precision of pixel shaders, and so on – often at the cost of a slight drop in the quality of the rendered 3D scene.
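The trilinear-filtering simplification mentioned above (often nicknamed “brilinear”) can be sketched in a few lines: full trilinear filtering blends two adjacent MIP levels across the whole fractional LOD range, while the optimized variant blends only inside a narrow band around the level boundary, letting most pixels get by with a single, cheaper bilinear fetch. This is a minimal illustration, not either vendor’s actual algorithm, and the band width of 0.25 is an assumed value:

```python
def mip_blend_weight(lod: float, band: float = 0.25) -> float:
    """Blend factor between MIP level floor(lod) and floor(lod) + 1.

    Full trilinear filtering uses band = 1.0 (blend across the whole
    interval); "brilinear" optimizations shrink the band, so most pixels
    take a single bilinear sample from the nearest MIP level instead of
    blending two.
    """
    frac = lod - int(lod)            # position between the two MIP levels
    lo = 0.5 - band / 2
    hi = 0.5 + band / 2
    if frac <= lo:
        return 0.0                   # only the finer level is sampled
    if frac >= hi:
        return 1.0                   # only the coarser level is sampled
    return (frac - lo) / (hi - lo)   # narrow blend zone around the boundary

# With a narrow band, a pixel at fractional LOD 0.2 skips blending entirely:
print(mip_blend_weight(3.2))                       # one texture fetch, not two
print(round(mip_blend_weight(3.2, band=1.0), 3))   # full trilinear still blends
```

The performance win comes from halving the texture fetches for most screen pixels; the visible cost is a sharper transition line between MIP levels, which is exactly what the colorized-MIP test programs discussed below are designed to expose.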

Of course, the days of high-profile scandals like the famous 3DMark affair are over, and companies now act much more cautiously. There are situations in which optimizations immediately catch the eye, but most often they are invisible during play, and only close examination reveals any differences. In games such scrutiny is rare anyway: for the most part, the player is not peering at the picture trying to find traces of optimizations, but following the gameplay so as not to end the game early and against his will.

Nevertheless, the question of the presence or absence of optimizations is still relevant. Now it sounds like this: “Should the user be given the opportunity to manage optimizations?” To this question the two leading graphics companies, as befits mortal enemies, give diametrically opposite answers. NVIDIA believes such an option should be provided (and provides it in the latest versions of its drivers), while ATI Technologies claims that options for disabling optimizations are pointless. According to representatives of the Canadian company, disabling texture filtering optimizations in its new products would lead to nothing but a drop in performance, since the differences are so insignificant that even the most fastidious user is unlikely to notice any improvement in image quality. There is some truth in this, but what is at stake is the graphics companies’ basic attitude toward their customers, and from this point of view NVIDIA’s approach looks far more respectful.

We decided to find out whether the optimizations really are as imperceptible as ATI and NVIDIA claim, and took a number of screenshots in modern games. We consider this approach correct: test programs that display artificial scenes with and without MIP-level colorization can only reveal the presence or absence of optimizations, and then only when the optimizations are not disabled the moment colorization is turned on; they do not allow us to say how much the quality of the scene as a whole has suffered. Besides, buyers of gaming accelerators acquire them not to admire the simplest checkerboard texture or patches of MIP-level tint, but to play. Thus, it is in games that potential image imperfections should be sought.


We have selected several of the most technologically advanced games as examples:

FarCry
Painkiller
Max Payne 2: The Fall Of Max Payne
Halo: Combat Evolved

When taking screenshots, we used the “Eye Candy” (maximum quality) mode at 1280×1024. This means the maximum possible degree of anisotropic filtering was enabled along with 4x full-screen anti-aliasing. The exception was Halo, which does not support FSAA due to the way its scenes are built.
Each of the screenshots was taken in two modes – with and without texture filtering optimizations, on the following four video adapters:

NVIDIA GeForce 6800 Ultra
NVIDIA GeForce FX 5950 Ultra
ATI RADEON X800 XT
ATI RADEON 9800 XT

It should be noted that at the moment ATI’s drivers do not provide options in the control panel for managing optimizations; they are always enabled. Especially for those interested in disabling optimizations on RADEON-family cards, here is the procedure:

  • Search the registry for the string variable AnisoDegree.
  • Correlate its value with the ATI Control Panel settings.
  • Change the degree of anisotropy in the ATI Control Panel.
  • Press F5 in RegEdit to see whether the value of AnisoDegree has changed.
  • If it has, go to the next step; otherwise, continue searching for the desired branch.
  • Add a new string variable “RV350TRPER” with the value 1.
  • Add a new string variable “RV350ANTHRESH” with the value 1.
  • Add a new string variable “R420AnisoLOD” with the value 2.
  • Restart the computer.
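The registry additions from the steps above can be collected into a .reg file for convenience. This is only a sketch: the branch path varies between driver installations, and the {...} placeholder stands for the device branch you locate via the AnisoDegree search, so substitute your own path before importing:

```
Windows Registry Editor Version 5.00

; Hypothetical branch path - use the branch found via the AnisoDegree search
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Video\{...}\0000]
"RV350TRPER"="1"
"RV350ANTHRESH"="1"
"R420AnisoLOD"="2"
```

Double-clicking the file merges the three string values in one step; a restart is still required afterwards.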

Screenshots taken on a GeForce 6800 Ultra in High Quality mode were used as reference. In this mode, all optimizations are disabled.
So, let’s see if the influence of the optimizations introduced by the developers on the image quality is really insignificant:

As you can see, all the cards demonstrate approximately the same image quality. If you look closely, you can see that in places – on the slopes of sand dunes – the GeForce FX 5950 Ultra renders textures a little more sharply than the other video cards. This is because the GeForce FX 5950 Ultra uses a more resource-intensive but more honest anisotropic filtering algorithm, one free of the “unfavorable angles” at which newer graphics processors sharply reduce the degree of anisotropy.


Halo shows the difference more clearly. It is especially visible on the walls of the corridor. Here the best image quality is demonstrated by the GeForce FX 5950 Ultra with optimizations disabled, as well as the GeForce 6800 Ultra in the same mode. In the other cases, the detail texture meant to convey the microrelief of the wall surface appears not smoothly but in “steps”.

In Max Payne 2 the differences between the various cards and modes are almost imperceptible, thanks to the complexity and richness of the game scene.

There are slight differences, mainly when enabling or disabling optimizations on the GeForce 6800 Ultra and the RADEON X800 XT, but at first glance they are difficult to detect.

As you can see, the presence of optimizations does not always mean degraded image quality. On the contrary, in most cases texture filtering optimizations have practically no effect on the visual perception of the scene. In motion the situation changes somewhat: for example, noise or MIP-level transition lines that are invisible in static screenshots become more noticeable in dynamics.
But players seriously playing Painkiller, FarCry, Halo and other games simply have no time to peer at MIP-level transitions in hopes of uncovering hidden optimizations; they are primarily concerned with surviving in the game world. Of course, this does not mean optimizations deserve no attention at all. The approach taken by XGI to raise the performance of its Volari-based cards, for example, is completely unacceptable and should be eradicated on principle. Such crude “optimizations” degrade image quality so badly that they instantly catch the eye, unlike those of ATI and NVIDIA, which can only be found by carefully examining each game screenshot or seeking out the scenes where they show most. In real gaming situations, we repeat, these optimizations are in most cases difficult to spot unless you go looking for them on purpose.

Nevertheless, we still believe that the right to decide whether to sacrifice a dozen or more frames per second for the sake of true trilinear and anisotropic filtering, or to gain extra speed from their simplification, should remain with the end user. In a particular game or scene, due to the peculiarities of the engine or other factors, the influence of optimizations may become noticeable, and the user has every right to turn them off. If he does not need guaranteed perfect texturing quality and is quite satisfied with what the optimizations offer, he is equally entitled to leave them enabled.

Consumer confidence in a particular manufacturer is a very important factor, since the success of their products in the market depends on it. It is very easy to lose it, but it is extremely difficult to return it, and for this, at times, it is necessary to undertake truly titanic efforts.
We can only rejoice for NVIDIA, which has decided to rely on trust, giving control over optimizations to consumers of its products, and we hope that this practice will continue both by the company itself and by other GPU manufacturers.

Test system and test conditions

But it is time to move on, at last, to the practical part of today’s review. This time we decided to significantly expand the range of games used, to 33 titles, in order to get the fullest possible picture of the performance of modern high-end and mid-range video adapters. These classes are of greatest interest to those who play modern games, because low-end video adapters such as the ATI RADEON 9200 and GeForce FX 5200 are simply not suitable for a serious gaming PC. RADEON 9600 and GeForce FX 5600 cards can soon be relegated to the same category, and the various mutants with a 64-bit memory bus even more so: ever more truly “heavy” games are appearing on the market, posing a serious test even for the most powerful and modern video adapters.

3D-Action from the first person view:

Call of Duty;
RTCW: Enemy Territory;
Star Trek: Elite Force 2;
Unreal Tournament 2004;
Halo: Combat Evolved;
Deus Ex: Invisible War;
FarCry;
Painkiller;
Tron 2.0;
Firestarter;
Breed;
America’s Army 2;

Third Person Games:

Star Wars: Knights of The Old Republic.
Splinter Cell: Pandora Tomorrow;
Tomb Raider: Angel of Darkness;
Prince of Persia: Sands of Time;
Max Payne 2: The Fall of Max Payne;
Lord Of The Rings: Return Of The King;
Thief: Deadly Shadows;
Hitman: Contracts;
Manhunt;

Simulators:

IL-2 Sturmovik: Aces In The Sky;
Lock On;
Microsoft Flight Simulator 2004;
X2: The Threat;
F1 Challenge 99-02;
Colin McRae Rally 04;

Sport games:

FIFA 2004;

Strategies:

C&C Generals: Zero Hour;
Perimeter;

Semi-synthetic tests:

Final Fantasy XI Official Benchmark 2;
Aquamark3;

Synthetic tests:

Futuremark 3DMark03 build 340.

Of the new games, the most interesting is the 3D strategy Perimeter, which runs on a thoroughly unusual engine. In this game the battlefield is not only three-dimensional but fully interactive: it can be destroyed and terraformed with the help of special units. The secret is simple: the landscapes in Perimeter are built with technology somewhat reminiscent of voxels; each unique landscape in the game appears to have its own three-dimensional texture.
Unfortunately, all the beauty on display in Perimeter comes at a steep price: at maximum graphics quality settings, the game is more or less comfortable only on a system equipped with a powerful processor and a latest-generation graphics adapter such as the RADEON X800 XT or GeForce 6800 Ultra.

All new games without a built-in benchmark were tested using a new FRAPS-based methodology, recording not only the average but also the minimum number of frames per second for a more accurate picture of gameplay speed. We will continue this practice in the future. Test bench configuration:

Processor: AMD Athlon 64 3400+ (2.20GHz, 1MB L2);
Motherboard: ASUS K8V Deluxe;
RAM: OCZ PC-3200 Platinum EB DDR SDRAM (2x512MB, CL2.5-3-2-8);
Hard drive: Seagate 7200.7 HDD (SerialATA-150, 8MB buffer);
Sound: Creative SoundBlaster Audigy 2;
OS: Microsoft Windows XP Pro SP1, DirectX 9.0b, NVIDIA ForceWare 61.34, ATI CATALYST 4.6.
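The average and minimum figures produced by the FRAPS methodology described above boil down to simple arithmetic over recorded per-frame render times. A sketch (the frame times are made-up sample data, not a real capture):

```python
def fps_stats(frame_times_ms: list[float]) -> tuple[float, float]:
    """Return (average fps, minimum fps) from per-frame render times in ms."""
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s
    min_fps = 1000.0 / max(frame_times_ms)   # slowest frame gives minimum fps
    return avg_fps, min_fps

# Hypothetical capture: mostly 10 ms frames with a single 50 ms spike.
times = [10.0] * 99 + [50.0]
avg, low = fps_stats(times)
print(round(avg, 1), round(low, 1))  # 96.2 20.0
```

The example shows why the minimum matters: one 50 ms hitch barely dents the average (96.2 fps) yet drags the minimum down to 20 fps, which is exactly the stutter a player feels.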

Traditionally, each game was tested at the highest possible graphics quality in three resolutions, both in pure performance mode and with FSAA 4x and 8x/16x anisotropic filtering enabled. For the new-generation cards – GeForce 6800/GT/Ultra and RADEON X800 PRO/XT – we used the latest drivers available at the time of testing. For older cards, the latest official driver versions were used.

The results of the GeForce 6800 Ultra overclocked to 435/1150 MHz are given for comparison. Since the official announcement of the GeForce 6800 Ultra Extreme Edition never happened, these figures are only of theoretical interest. However, some manufacturers may release higher frequency variants of the 6800 Ultra and sell them for a higher price. In this case, the figures obtained by us can be of some value to those who are interested in buying such cards and would like to know how much faster they are than their counterparts operating at standard frequencies.

Game Tests: Call Of Duty


As you can see, the situation is quite ordinary: all test participants ranked in descending order, from senior models to junior ones, which is quite natural. At low resolutions the cards based on NVIDIA chips thrive, while at high resolutions parity is observed between the competing models of the ATI RADEON X800 and NVIDIA GeForce 6800 lines.

With anisotropic filtering and anti-aliasing enabled, NVIDIA also reigns at the lower resolutions, but as resolution grows the new ATI RADEON X800 cards take their toll. Highly efficient operation in heavy modes has always been the strong point of ATI RADEON video adapters.

Game tests: RTCW: Enemy Territory

In this game we used only the “high quality” mode, since the results obtained in the “speed” mode turned out to be practically unrepresentative – all cards, except for the junior models of the previous generation, ran into a certain frame rate limiter, which was not observed with FSAA and anisotropic filtering enabled.

The picture is rather ambiguous, and it is not possible to determine the favorite among GPU manufacturers for this game. On the one hand, the RADEON X800 XT demonstrates performance only slightly different from that of the GeForce 6800 Ultra, pulling slightly ahead only at 1600×1200. On the other hand, the RADEON X800 PRO is slightly behind the GeForce 6800 GT.



The GeForce 6800 is hampered by very slow (for NVIDIA products) memory. It slows it down so much that at 1600×1200 the card loses to the GeForce FX 5950 Ultra! However, both cards compete well with the RADEON 9800 XT. The results of less powerful cards are at about the same level.
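A back-of-the-envelope bandwidth calculation illustrates the handicap. Peak bandwidth is the effective memory clock multiplied by the bus width; the clocks below are the commonly published nominal figures for these 256-bit cards (assumed for illustration, not measurements from this review):

```python
def bandwidth_gb_s(effective_mhz, bus_bits):
    """Peak memory bandwidth in GB/s (1 GB = 10^9 bytes):
    effective clock in MHz times bus width in bits."""
    return effective_mhz * 1e6 * bus_bits / 8 / 1e9

# Nominal 256-bit configurations (published specs, assumed for illustration)
cards = {
    "GeForce 6800": bandwidth_gb_s(700, 256),           # ~22.4 GB/s
    "GeForce FX 5950 Ultra": bandwidth_gb_s(950, 256),  # ~30.4 GB/s
    "GeForce 6800 Ultra": bandwidth_gb_s(1100, 256),    # ~35.2 GB/s
}
```

On these nominal figures the 12-pipe GeForce 6800 has roughly a quarter less raw bandwidth than the previous-generation flagship, which is consistent with it falling behind the GeForce FX 5950 Ultra at 1600×1200.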

Game tests: StarTrek: Elite Force 2


In this relatively old 3D shooter, the best results have always been demonstrated by NVIDIA cards, and this time is no exception. Only in the maximum quality mode, at 1600×1200, was the RADEON X800 XT able to outperform the GeForce 6800 family.

Game tests: Unreal Tournament 2004


Torlan level

In the mode without activating anisotropic filtering and full-screen anti-aliasing, almost all cards have reached their theoretical maximum, resting on the performance of the system’s central processor. With the increase in resolution, however, the older generation models lagged behind a bit. The favorite among them is the former king of 3D graphics – RADEON 9800 XT.

The VPUs of the new generation coped well with the heavier workload – the GeForce 6800 and RADEON X800 families showed excellent results, with the exception of the 12-pipe GeForce 6800, which was again let down by its slow memory: despite belonging to the new generation, in high resolutions it again lagged behind the GeForce FX 5950 Ultra and RADEON 9800 XT.

Metallurgy level



The situation is similar to the one that developed at the Torlan level, except that, due to the lower “severity” of the level in general, the frame rate values are higher here. Among the family of newest accelerators, the unrivaled leader in performance for this case is GeForce 6800.

With anisotropic filtering and full-screen anti-aliasing, the leadership changes, and the RADEON X800 XT and RADEON X800 PRO take the lead. At the same time, the GeForce 6800 line shows generally decent results, at times not lagging behind the leaders. Among the “oldies” there is approximate parity between the GeForce FX and RADEON 9x00 lines. It is worth noting that all the games described above make practically no use of the new features built into the new generation of graphics processors and cards. Let’s see what happens next.

Game tests: Halo


The RADEON X800 XT remains the leader in this game; however, it cannot be said that it has pulled far ahead of the GeForce 6800 Ultra. The GeForce 6800 GT, meanwhile, showed an impressive advantage over the RADEON X800 PRO, which sits in the same price category. The GeForce 6800 confidently outperforms all competitors in its price category, including the RADEON 9800 XT / RADEON 9800 PRO – which, although they beat their GeForce FX rivals, did not do so nearly as convincingly.


Game tests: DeusEx 2


Thanks to the special effects used, Deus Ex 2 is an extremely demanding game; however, the new generation of video cards allows you to play it quite comfortably. Thanks to its high efficiency with pixel shaders and its high clock frequencies, the RADEON X800 XT becomes the unsurpassed and confident leader. The ATI RADEON X800 PRO does not feel as good, but it shows results comparable to those of the GeForce 6800 GT. The GeForce 6800 does not provide even the minimum acceptable level of playability, to say nothing of the cards of the previous generation.

Game tests: FarCry

In the process of preparing this review for publication, a patch for FarCry, version 1.2, was provided to the media; it gives a certain boost to graphics cards based on the GeForce 6800 Ultra while taking away all possible optimizations from the GeForce FX. For completeness, we decided to show the results for both version 1.1 and 1.2 of the game.

FarCry 1.1, MP_Dune



FarCry is perhaps the most advanced first-person shooter to date. It really demands everything from the PC video system that it can provide, and easily eats up all the resources provided. The new families of cards from ATI and NVIDIA show excellent results in this game and it’s hard to say who can be called the winner. The RADEON 9800 XT / PRO family also looks good, and the RADEON 9800 XT again competes with the GeForce 6800, only slightly lagging behind it. The old GeForce FX architecture can hardly cope with the load that FarCry places on it.

When the maximum quality mode is enabled, the alignment of forces changes – the RADEON X800 XT takes the lead. The 12-pipe RADEON X800 PRO can no longer cope with the increased load as well as before, and begins to yield to the GeForce 6800 Ultra and GeForce 6800 GT. The GeForce 6800 again shows its inability to work in FSAA and AF modes, competing successfully only with the RADEON 9800 XT.

FarCry 1.2, Pier

FarCry 1.2, Research

FarCry 1.2, Volcano









Game Tests: Painkiller


The RADEON X800 XT and the senior members of the GeForce 6800 family (the Ultra and GT models) show almost identical results; the RADEON X800 PRO and GeForce 6800 also look good. Even the lowest-powered cards, the RADEON 9600 XT and GeForce FX 5700 Ultra, provide excellent playability in pure performance mode. It is worth noting that, for some unknown reason, the minimum number of frames per second for all graphics cards that took part in the Painkiller testing turned out to be at the same level.

In the mode with full-screen anti-aliasing and anisotropic filtering enabled, the new cards also feel great; you can even play at 1600×1200. The RADEON X800 XT wins here, and the RADEON X800 PRO is slightly inferior to the GeForce 6800 GT.

The exception to the aforementioned thesis, as always, is the GeForce 6800, which is literally pressed to the ground by the slow memory. Nevertheless, the game engine is so successful that it provides good performance in FSAA 4x + Aniso 8x / 16x mode even on the previous generation cards.

Game tests: Tron 2.0


Tron prefers cards from the NVIDIA GeForce 6800 line, at least if you don’t enable full-screen anti-aliasing or anisotropic filtering. Only the GeForce 6800 is inferior to the RADEON X800 family. In the “oldies” sector, on the contrary, the RADEON 9800 XT / PRO family easily takes the victory over the GeForce FX line, while the RADEON 9600 XT is on a par with the GeForce FX 5700 Ultra. With FSAA and anisotropic filtering enabled, the above situation persists in the first two resolutions, but at 1600×1200 the GeForce 6800 Ultra loses ground, yielding to the RADEON X800 XT; the GeForce 6800 GT drops to the level of the RADEON X800 PRO; and the GeForce 6800 is again left competing with the RADEON 9800 XT, sometimes staying behind it.



Game tests: Firestarter

This game uses a non-standard resolution of 1600×1024, at which ATI RADEON cards refuse to work in full-screen mode without third-party utilities; therefore, we present results only for the first two resolutions.

GeForce 6800 Ultra is ahead, followed by RADEON X800 XT, and the third place is taken by GeForce 6800 GT. The RADEON X800 PRO easily outperforms the GeForce 6800, however, it still lags slightly behind its real rival, the GeForce 6800 GT. As for the cards of the previous generation, the RADEON 9800 XT is clearly superior to the GeForce 5950 Ultra, and the RADEON 9600 XT is lagging behind the GeForce FX 5700 Ultra.

As the load increases, the picture remains the same, except that now the GeForce FX 5700 Ultra is inferior to the RADEON 9600 XT.

Game tests: Breed

This shooter refused to work at 1600×1200 on cards based on NVIDIA chips; or rather, this resolution was simply absent in the list of available ones, therefore, we used data obtained in resolutions of 1024×768 and 1280×1024.

The game clearly prefers cards based on NVIDIA GPUs: surprisingly, the RADEON X800 XT and RADEON X800 PRO adapters are outperformed even by the GeForce FX 5950 Ultra! At 1280×1024 the RADEON X800 XT slightly corrected the situation, but it did not reach the level of the GeForce 6800. It is difficult to say what caused this – in any case, the graphics in the game are not too complicated and do not shine with technological delights.

Surprisingly, even with the maximum quality mode enabled, ATI’s cards could not get ahead, as is usually the case. Most likely, the reason for this lies somewhere deep inside the game engine. It should be noted that the minimum number of frames per second for all cards is approximately the same, which means that the user should not feel much difference in the performance of different cards.

Game Tests: America’s Army

We used the latest version, 2.1, of this free online shooter, downloaded from the developer’s site.

RADEON X800 XT / PRO look a little better in this game than GeForce 6800 Ultra / GT, respectively. The game is based on a fairly simple engine, without any special frills, so all cards provide an acceptable level of performance at all resolutions.

As the load increases, the advantages of the ATI RADEON X800 architecture begin to manifest themselves more strongly; this is especially noticeable in the 1600×1200 resolution. Since the game engine is not burdened with modern special effects, the presence of fast memory on board and the ability to work effectively with it come to the fore – therefore, the GeForce 6800 is again inferior to the GeForce FX 5950 Ultra.

Game Tests: Star Wars: Knights of the Old Republic


In this game, there is a very significant superiority of graphics cards based on NVIDIA chips over competing products from ATI. The GeForce FX 5950 Ultra looks relatively good, while the RADEON 9800 XT and lower models do not provide a comfortable frame rate.

When FSAA and anisotropic filtering are enabled, little changes, except that the RADEON X800 XT shows slightly higher results relative to the GeForce 6800 Ultra, while the RADEON X800 PRO catches up and sometimes outperforms the GeForce 6800 GT. The “oldies” GeForce FX 5950 Ultra / 5900/5900 XT and RADEON 9800/9600 / XT / PRO no longer provide an acceptable level of playability.

Game tests: Splinter Cell: Pandora Tomorrow


In this game, the RADEON X800 XT is almost on a par with the GeForce 6800 Ultra, and the RADEON X800 PRO – with the GeForce 6800 GT. The 12-pipe GeForce 6800 can only compete with the RADEON 9800 XT. GeForce FX 5950 Ultra, 5900 and 5900 XT allow you to play comfortably only at 1024×768, while GeForce FX 5700 Ultra and RADEON 9600 XT do not even allow this.

Game tests: Tomb Raider: Angel of Darkness


Tomb Raider: Angel of Darkness makes extensive use of complex pixel shaders to create effects, therefore, the RADEON X800 family performs at its best in this game. The GeForce 6800 line lags far behind, but still shows good results (except for the GeForce 6800), in contrast to the GeForce FX line, which allows you to play normally only at 1024×768. RADEON 9800 XT and RADEON 9800 PRO, on the other hand, feel good and easily deal with the GeForce 6800.

The situation practically does not change, except that the speeds demonstrated by the test participants drop due to the increased load.

Game tests: Prince of Persia: Sands of Time


Here the GeForce 6800 Ultra looks a little better than the RADEON X800 XT, whose performance is on par with the GeForce 6800 GT. GeForce 6800 competes with RADEON X800 PRO, which can be considered a success for NVIDIA, and RADEON 9800 XT shows better results than GeForce FX 5950 Ultra. The rest of the test participants demonstrate rather low performance, which is sufficient, however, for a comfortable game at a resolution of 1024×768.

Game tests: Max Payne 2: The Fall of Max Payne


At 1024×768, all new and some old cards are limited by the performance of the system’s central processor, but starting from 1280×1024 the GeForce 6800 and RADEON X800 PRO can no longer keep pace. At 1600×1200 the GeForce 6800 fails even more. RADEON 9800 XT looks preferable to GeForce FX 5950 Ultra, and RADEON 9800 PRO – GeForce FX 5900/5900 XT. The same can be said about the youngest models – the RADEON 9600 XT demonstrates better performance than the GeForce FX 5700 Ultra.

As the load grows in the mode with FSAA enabled and anisotropic filtering, the GeForce 6800 sharply loses its positions, however, it does not lag behind the GeForce FX 5950 Ultra. As for the rest, everything remains in place – ATI RADEON video adapters are slightly superior to competing NVIDIA GeForce models.

Game tests: Lord Of The Rings: Return of the King


With FSAA and anisotropic filtering disabled, new products from ATI and NVIDIA in this game easily run into the CPU performance, and only at 1600×1200 we can observe some differences, in particular, the RADEON X800 PRO lags behind the leaders. Strange, but the GeForce 6800, which also has 12 pipelines, but is equipped with much slower memory, loses much less in performance. In the camp of the previous generation eight-pipelined GPUs, parity is observed: the RADEON 9800 XT shows the same results as the GeForce 5950 Ultra. GeForce FX 5900 / XT are slightly inferior to RADEON 9800 PRO. RADEON 9600 XT outperforms GeForce FX 5700, but only in the first two resolutions; at 1600×1200 it suffers from a lack of memory bandwidth and is therefore inferior to its competitor.



As the load grows, nothing new happens: the RADEON X800 XT competes with the GeForce 6800 Ultra, while the RADEON X800 PRO cannot cope with the GeForce 6800 GT, sometimes lagging behind the GeForce 6800, which costs $100 less. Interesting events are observed only in the sector of solutions of the previous generation – the RADEON 9800 XT demonstrates performance at the level of the GeForce FX 5900 / XT, but lags behind the GeForce FX 5950 Ultra, while the RADEON 9600 XT is losing ground before the onslaught of the GeForce FX 5700 Ultra with its fast memory and high geometry processing speed.

Game tests: Thief: Deadly Shadows

This shooter, like Splinter Cell, uses the Bloom effect; when it is enabled, FSAA is automatically disabled to avoid image quality problems.

The winner is the one who can work more efficiently with complex pixel shaders, that is, the RADEON X800 XT / PRO video adapters. However, the GeForce 6800 family also shows quite decent results. Of the old cards, only the RADEON 9800 XT and 9800 PRO look relatively good.

Game tests: Hitman: Contracts (Hitman 3)

The game is built on the same engine as Splinter Cell and Thief, which means that with the Bloom effect enabled it does not support full-screen anti-aliasing, so results for it were recorded only in “speed” mode.

The test results of graphics cards in this game show that the number of version 2.0 pixel shaders is so great that even at 1280×1024 the GeForce 6800 and GeForce FX families cannot show really high performance. However, only the RADEON X800 XT holds up really well, while the RADEON X800 PRO only comes close to the GeForce 6800 Ultra …

Game tests: Manhunt


The performance leaders in this game were the RADEON X800 XT and GeForce 6800 Ultra, while the RADEON X800 PRO performed, to put it mildly, far less successfully, and was even outperformed by the GeForce 6800. Among the eight-pipe cards, the RADEON 9800 XT performed well, demonstrating better performance than the GeForce FX 5950 Ultra.

Once again, ATI Technologies’ solutions have proven their worth when leveraging features such as FSAA and anisotropic filtering.

RADEON X800 XT outperforms both 16-pipe solutions from NVIDIA, and RADEON X800 PRO managed to keep up with the GeForce 6800 GT. A similar state of affairs is observed with the cards of the previous generation – the RADEON 9800 XT outperforms the GeForce FX 5950 Ultra, almost catching up with the GeForce 6800, although the latter manages to equalize the score at 1600×1200. But the RADEON 9600 XT looks really bad – despite its architectural advantages, it is inferior to the GeForce FX 5700 Ultra.

Game tests: IL-2 Shturmovik: Aces in the Sky

The new version of Il-2 Shturmovik received improved graphics; in particular, there is now support for pixel shaders version 2.0. Now, when enabled, water surfaces look much more realistic. We tuned the game to the highest level of image quality in order to maximize the load on the graphics adapter.

The results obtained are stunning – the 1.5-2 times superiority of the RADEON X800 XT over the GeForce 6800 Ultra looks very impressive. Even the 12-pipe RADEON X800 PRO outperforms the leader of the GeForce 6800 family. We suppose that such a result, shown by the new ATI cards, is due to the high speed of operation with pixel shaders, high frequencies of the RADEON X800, as well as the presence of fast memory and the ability to efficiently work with it. Together with the specifics of the flight simulator genre, this allowed ATI’s new products to show excellent results.

The victory of the RADEON X800 XT does not look as impressive as in the previous case, but nevertheless it remains the undisputed leader even with FSAA and anisotropic filtering enabled.

Game tests: Lock On: Modern Air Combat


All test participants demonstrate extremely low results in this game – this flight simulator is very complex and, at maximum quality settings, brings any modern video card to its knees. Nevertheless, the new GeForce cards still look better here than the new RADEONs. Among the eight-pipe solutions, by contrast, the RADEON 9800 XT shows the best result.

With FSAA and anisotropic filtering, the RADEON X800 XT either slightly outperforms the GeForce 6800 Ultra or is on par with it. The same can be said of the GeForce 6800 GT – RADEON X800 PRO pair. But the GeForce 6800 is once again inferior to the GeForce FX 5950 Ultra due to its insufficiently productive memory subsystem.

Game tests: Microsoft Flight Simulator 2004


Almost all cards are limited by the CPU performance, only the GeForce FX 5700 Ultra demonstrates a lower result. As the resolution grows, other members of the GeForce FX family join it, as well as the RADEON 9600 XT. Curiously, in this game, the minimum number of frames per second is almost equal to the average.

Senior representatives of the new lines ATI and NVIDIA continue to demonstrate all the same speeds, however, at 1280×1024 the GeForce 6800 GT starts to lag behind. At 1600×1200 the RADEON X800 XT and PRO show better results than the GeForce 6800 Ultra and GT, respectively, and the GeForce 6800 suffers once again due to its slow memory. In the class of the previous generation cards, the battle is between the RADEON 9800 XT and the GeForce FX 5950 Ultra and between the RADEON 9800 PRO and the GeForce FX 5900. The RADEON 9600 XT outperforms the GeForce FX 5700 Ultra in all resolutions, except 1600×1200, where slow memory starts to hinder it.

Game tests: X2: The Threat


In this space simulator, the RADEON X800 XT looks a little faster than the GeForce 6800 Ultra, and the RADEON X800 PRO, on the contrary, is slightly inferior to the GeForce 6800 GT. At first, the GeForce 6800 shows a good result against the background of the competing RADEON 9800 XT, but as the resolution rises, the lack of fast memory starts to show itself again. As for the cards of the previous generation, the GeForce FX line feels better in this game than the RADEON 9800 line, because of the more efficient work with shadows. Likewise, the GeForce FX 5700 Ultra shows better results than the RADEON 9600 XT.

With anti-aliasing and anisotropic filtering enabled, the situation changes little, except that at low resolutions the GeForce 6800 Ultra starts outperforming the RADEON X800 XT.


Game Tests: F1 Challenge 99-02


At 1024×768, all cards, except for the RADEON 9600 XT, run into a certain frame rate limiter, which, to all appearances, cannot be disabled. As the resolution grows, everyone falls away from this barrier, except for the GeForce 6800 line, which demonstrates exceptional performance.

When FSAA is used together with anisotropic filtering, the picture is more varied, although nothing special is observed in it – the GeForce 6800 Ultra is slightly faster than the RADEON X800 XT, the GeForce 6800 GT is almost on par with that same RADEON while overtaking the RADEON X800 PRO, and the GeForce 6800 once again lacks fast memory. The GeForce FX 5950 Ultra and 5900 outperform the RADEON 9800 XT, and the GeForce FX 5700 Ultra slightly outperforms the RADEON 9600 XT.

Game Tests: Colin McRae Rally 04


The ATI RADEON X800 XT Platinum Edition is at its best, but the RADEON X800 PRO, on the contrary, quickly yields to the GeForce 6800 GT as the resolution increases. The GeForce 6800 demonstrates good results, outperforming the RADEON 9800 XT. Among the previous generation of cards, ATI’s products hold the lead.

But when the maximum quality mode is activated, the RADEON X800 PRO gets a second wind and outperforms the GeForce 6800 GT, despite having only 12 pipelines. On the whole, the GeForce 6800 still outperforms the RADEON 9800 XT, while the GeForce FX 5950 Ultra is significantly inferior to it.

Game Tests: FIFA 2004


The RADEON X800 family starts off very confidently, but only at the lowest resolution. Already at 1280×1024 it starts to yield to the GeForce 6800 Ultra / GT video adapters. This can be explained by the relative simplicity of the game engine, as well as by the presence of a large number of shadows. At the same resolution, the GeForce 6800 sharply loses its positions, but the explanation here is different – the same notorious slow memory. The slowest card in this game is the RADEON 9600 XT, which, in principle, was to be expected – it has the slowest memory subsystem and the lowest geometry processing speed among all other test participants.

Enabling FSAA and anisotropic filtering widens the gap between the RADEON X800 XT / PRO and the GeForce 6800 Ultra / GT. Even the GeForce 6800 is able to compete with ATI products, albeit only in a resolution of 1024×768. The RADEON 9800 XT lags behind the GeForce FX 5950 Ultra, as well as the GeForce 6800. The GeForce FX 5700 Ultra is inferior to the RADEON 9600 XT in the resolution of 1024×768, but then it catches up and surpasses the competitor due to the fast memory.

Game tests: Command & Conquer Generals: Zero Hour


Nothing interesting is observed, all test participants run into the performance of the central processor. Some differences begin to appear only in high resolutions, but all new generation cards continue to demonstrate practically the same performance.

When the FSAA + AF mode is enabled, the superiority of ATI Technologies solutions, which is usual in many reviews, is observed in this game. At 1600×1200 the GeForce 6800 is again behind the GeForce FX 5950 Ultra.

Game Tests: Perimeter


You can play more or less comfortably only on the RADEON X800 XT. Everything else does not provide even the minimum required 30 frames per second. Subjectively, we feel that the GeForce 6800 Ultra and RADEON X800 PRO provide about the same level of comfort. All of the above applies only to the resolution of 1024×768 (!). At 1280×1024, only the same RADEON X800 XT can achieve 30 frames per second.

But when you enable the maximum quality mode, you can play only on the RADEON X800 XT and only at 1024×768. Everything else is more like a slideshow. We have never seen such a demanding game in our practice, but more and more of them will appear. It is for these games that you will need the most modern video adapters and processors that the computer industry has to offer.

Semi-synthetic tests: Final Fantasy XI Official Benchmark 2

In this test the RADEON X800 XT dominates, followed by the GeForce 6800 Ultra by a small margin. The third and fourth places are taken by RADEON X800 PRO and GeForce 6800 GT, respectively. GeForce 6800 demonstrates performance at the level of RADEON 9800 XT, and the last place belongs to GeForce FX 5700 Ultra.

When tested in the maximum quality mode, the alignment of forces changes insignificantly, except that the RADEON X800 PRO is no longer able to compete with the GeForce 6800 GT. On the whole, RADEON cards look preferable to GeForce models corresponding to them in positioning.

Semi-synthetic tests: Aquamark 3


In the Aquamark3 test suite the situation for the RADEON X800 family looks a little worse than in the previous test – the RADEON X800 XT competes with the GeForce 6800 GT, and the RADEON X800 PRO lags slightly behind the GeForce 6800, outperforming it only at 1600×1200. The same can be said about the eight- and four-pipe cards – NVIDIA’s variants are slightly superior to the corresponding-class solutions from ATI Technologies or equal to them in performance.

But as the load increases, the RADEON X800 cards perform well, and the RADEON X800 PRO reaches the level of the GeForce 6800 GT, or even the Ultra. There is no clear winner among the eight-pipe cards – the GeForce FX 5950 Ultra and RADEON 9800 XT fight on equal terms. In the junior weight category, the RADEON 9600 XT wins.

Synthetic tests: Futuremark 3DMark03


Overall result

This diagram shows that the older models of ATI and NVIDIA video adapters deliver practically the same results. Among the younger models, the GeForce 6800 GT outperforms the RADEON X800 PRO thanks to its 16 pipelines. The youngest model of the new generation from NVIDIA, the GeForce 6800, places above the RADEON 9800 XT and below the RADEON X800 PRO. Let’s take a closer look at the results obtained in this test suite.

First gaming test





The first game test is very simple and does not contain any graphical delights. There is nowhere to demonstrate high performance when executing pixel shaders, and the scene fill rate comes first, so the RADEON X800 XT lags a little behind the GeForce 6800 Ultra. Nevertheless, RADEON X800 PRO still outperforms GeForce 6800 due to higher frequencies. GeForce FX 5950 Ultra outperforms RADEON 9800 XT for the same reason.

More efficient work with the memory subsystem helps the RADEON X800 family when FSAA and anisotropic filtering are enabled, so ATI cards look better than their competitors in the maximum quality mode.

Second gaming test



In the second game test, effective work with shadows is of great importance, therefore, NVIDIA products are at their best here too – GeForce 6800 Ultra is beyond competition. The RADEON X800 XT is on a par with the GeForce 6800 GT, and the RADEON X800 PRO is on a par with the GeForce 6800. Among the previous generation cards, there is an approximate parity between the RADEON 9800 XT / PRO and the GeForce FX 5950 Ultra.

With anti-aliasing and anisotropic filtering enabled, NVIDIA’s advantage is further strengthened.

Third gaming test

The third game test is practically similar to the second one, so the situation in it does not change. The same is true for the highest quality mode.

Fourth gaming test









In the fourth game test of the 3DMark03 package, as we have already written, the speed of work with pixel shaders is the most important, therefore, the new RADEONs are at their best here, and the old ones outperform their competitors.

Exactly the same happens when you enable full-screen anti-aliasing, supplemented by anisotropic filtering. In general, in the first three 3DMark03 tests, products based on NVIDIA VPUs look stronger than those in which GPUs from ATI Technologies are used, but the fourth, and perhaps the most important test, remains with the products from the Canadian company.

Conclusion

So, we examined the performance of twelve modern video adapters in 33 gaming tests. What conclusions can be drawn from such a large-scale testing? Which card is your best buy? What is the future of the previous generation cards? Let’s look at these questions in order.

After analyzing the results obtained, we can conclude that today there is no clear leader in the field of consumer 3D graphics – both ATI Technologies and NVIDIA Corporation have in their arsenals a whole range of solutions with different performance, designed for every category of buyer. NVIDIA can nevertheless be called the technological leader, since its latest graphics processor, NV40, has a number of features that are so far unique. However, a promising architecture is far from all that is needed for the success of a VPU – just remember the NV3x line, which, despite all its innovations, was inferior in performance to comparable products from ATI Technologies. Since it is impossible to single out a clear favorite today, let us consider the solutions by price category.

Graphics Accelerators with a Recommended Price of $499

Perhaps the most non-trivial part of the review is identifying the absolute leader in performance among all graphics cards widely available today. Neither of the fastest leader cards – the ATI RADEON X800 XT and GeForce 6800 Ultra – has demonstrated unambiguous performance superiority in modern games. The NVIDIA GeForce 6800 Ultra leads in low resolutions, as well as in cases where anisotropic filtering and anti-aliasing are not activated. In addition, the new product from NVIDIA is ahead of its competitor in older games.



Overclocking is not decisive for the NVIDIA GeForce 6800 Ultra: where the chip outperforms its main competitor, it does not need frequencies above 400 MHz, and in cases where the new product from ATI is in the lead, the additional megahertz do not help it come out ahead. Perhaps, after some time, cards based on the GeForce 6800 will gain some speed from driver and application optimizations; however, it should be noted that immediately after the release of new games, video cards based on the new NVIDIA products may not show performance exceeding that of the competitor.


Among the advantages of the GeForce 6800 Ultra, it is worth highlighting the support for Shader Model 3.0, which, on the one hand, can provide a slightly higher speed, and on the other, it has every chance to please with new special effects in the future.
Today, support for Shader Model 3.0 is a rather weak argument – attempts to use the extended functionality of NVIDIA chips appeared only in Far Cry with the release of patch 1.2. But over time, when Shader Model 3.0 finds wider support in the gaming industry, there will be more and more such games, and the role of Shader Model 3.0 will become more significant, which means that the lack of support for shaders 3.0 will indeed become a serious drawback of ATI RADEON X800 XT / PRO and, conversely, the advantage of the NVIDIA GeForce 6800 family cards.

The ATI RADEON X800 XT Platinum Edition shows excellent performance across the entire range of applications, in many cases not lagging behind the GeForce 6800 Ultra, and proves its leadership in cases where anisotropic filtering and anti-aliasing are enabled. Buying a RADEON X800 XT today, you can be sure that the speed in games that actively use DirectX 9.0 shaders will be invariably high, regardless of the efforts of ATI programmers regarding driver optimizations.
Besides, ATI RADEON X800 XT is more compact, less noisy, does not require two power connectors and occupies only one slot, while having high performance.
It should be added that ATI RADEON X800 XT does not require a super-quality power supply unit, unlike its competitor, however, if you are going to purchase such a powerful graphics accelerator, keep in mind that to build a balanced gaming system you will have to purchase a decent amount of RAM and a powerful central processor, and this can lead to the need to purchase a high-quality power supply.
Among the shortcomings of the RADEON X800 XT one can name a cooling system that is not very efficient for a solution of this class, as well as the lack of Shader Model 3.0 support.

Graphics accelerators with a recommended price of $399



The next step down, the GeForce 6800 GT and RADEON X800 PRO, are priced by their makers at $399. The NVIDIA GeForce 6800 GT looks slightly preferable thanks to its 16 pixel pipelines and performance not far behind that of the older GeForce 6800 Ultra. Even here, though, the situation is not unambiguous: the ATI RADEON X800 PRO takes the lead in a number of new DirectX 9.0 games with anisotropic filtering and antialiasing enabled, mainly thanks to the high efficiency of its pixel shaders and of its anisotropic filtering and anti-aliasing algorithms.

It is known that most GeForce 6800 GT cards can run at GeForce 6800 Ultra frequencies, so overclocking adds to their appeal. As a result, the GeForce 6800 GT is probably the best choice for those working within a $399 budget.

The RADEON X800 PRO can be recommended to those who value quietness, compactness and low power consumption.
The ATI RADEON X800 PRO's efficient anisotropic filtering and full-screen anti-aliasing algorithms often let it outperform the GeForce 6800 GT, but overall the performance leader is, of course, the NVIDIA GeForce 6800 GT.

Graphics accelerators with a suggested price of about $299

Moving down the spectrum of graphics accelerators, we come to an interesting comparison: the slowest cards based on the new chips face the fastest GPUs of the previous generation.



It should be said that the GeForce 6800 stands out among the new video cards. With its 12 pixel pipelines, this adapter would seem destined to compete with the RADEON X800 PRO, but NVIDIA equipped it with slow memory running at 700 MHz. The price came down as a result, but so, considerably, did performance.

Hence, in heavy modes the GeForce 6800 feels less confident: the fast NV40 architecture is held back by the combination of slow memory and less efficient use of the available memory bus bandwidth than on the full 16-pipeline cards. As a result, the GeForce 6800 sometimes falls behind even the GeForce FX 5950 Ultra and RADEON 9800 XT.
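The arithmetic behind this bottleneck is straightforward: peak memory bandwidth is the bus width in bytes multiplied by the effective (DDR) memory clock. A quick sketch, using the 700 MHz effective rate quoted above and the 6800 Ultra's widely quoted 1100 MHz effective GDDR3 rate for comparison:

```python
# Peak theoretical memory bandwidth = (bus width / 8 bytes) * effective clock.
# Both NV40-based cards use a 256-bit bus; only the memory clock differs.

def peak_bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
    """Peak bandwidth in GB/s (1 GB = 10**9 bytes)."""
    return (bus_width_bits / 8) * effective_clock_mhz * 1e6 / 1e9

# GeForce 6800: 256-bit bus, 700 MHz effective DDR
print(peak_bandwidth_gb_s(256, 700))    # 22.4 GB/s
# GeForce 6800 Ultra: 256-bit bus, 1100 MHz effective GDDR3
print(peak_bandwidth_gb_s(256, 1100))   # 35.2 GB/s
```

With barely two-thirds of the Ultra's bandwidth feeding an architecture built for 16 pipelines, it is no surprise that the plain GeForce 6800 struggles most in bandwidth-hungry modes such as full-screen anti-aliasing at high resolutions.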
On the whole, however, the GeForce 6800 clearly outperforms its competitors in its price range, especially if you do not get carried away with full-screen anti-aliasing, and at the recommended price of $299 the card can be an excellent purchase.

Graphics Accelerators under $199

Let's go down one more step. Here we find cards such as the NVIDIA GeForce FX 5900 and 5900 XT



and also the ATI RADEON 9800 PRO. What should a shopper on a budget who still wants to enjoy modern games prefer? The answer is simple, and our summary diagrams show it eloquently: given equal prices and a choice, the RADEON 9800 PRO should be preferred over the GeForce FX 5900/5900 XT, for the same reason that the RADEON 9800 XT should be preferred to the GeForce FX 5950 Ultra.


Further, if the RADEON 9800 PRO is unavailable and the choice is between the RADEON 9600 XT and the GeForce FX 5900 XT, the latter looks preferable thanks to its eight pixel pipelines, and also because such cards overclock well, giving a very decent performance gain.


If the choice is only between the GeForce FX 5700 Ultra and the RADEON 9600 XT, you should again consider which games you play or plan to play. Thanks to its architectural features, the RADEON 9600 XT performs best in games rich in pixel shaders, while the GeForce FX 5700 Ultra shines where geometry throughput and fast shadow processing matter. In any case, modern games are unlikely to run comfortably on these cards at 1024×768, especially with anti-aliasing and anisotropic filtering enabled at the same time.


So, it is obvious that we cannot give completely unambiguous recommendations, since the choice of each buyer depends on many factors, including those that we did not take into account in this review.
The main thing when buying a new video adapter is to clearly understand in which system it will be installed, in which games and in which video modes it will be used most often. In this case, the probability of an erroneous decision when choosing will be minimal.
