Acronyms galore (NVidia 6800Ultra and Radeon X800 XT)
If you dare step away from the video card world for more than a few months, you find, on returning, whole new vistas of acronyms, bandwidths, core speeds, registers, pipelines and "technologies" waiting to eat you alive. The latest cards from the two top vendors, ATI and NVidia, are lining up to play tomorrow's games at 1600x1200 with 8x Anti-aliasing and all effects on high. Quake 3 doesn't even make a dent --- assume it runs at 300FPS with everything on.
<h>Radeon X800 XT</h>
ATI's latest card, which, though announced after NVidia's, should actually hit stores first, pushes <iq>8.3G Pixels/sec</iq>. <a href="http://arstechnica.com/news/posts/1083689399.html" source="Ars Technica" title="Flagship Radeon X800 XT debuts along with junior, the new Radeon X800 Pro">Flagship Radeon X800 XT debuts</a> gives more details:
<bq>$499 price tag ... 16 pipelines, a 520MHz core, and 256MB of DDR3 memory clocked at an effective 1.12GHz (550MHz GDDR-3)</bq>
The <a href="http://www.beyond3d.com/reviews/ati/r420_x800/index.php" source="Beyond3D">ATI Radeon X800 XT Platinum Edition / PRO Review</a> addresses my primary concerns in its <iq>Cooling and Power</iq> section. You'll see why this is important when we take a look at NVidia's behemoth in the next section.
<bq>The fan utilised proved itself to be a fairly silent solution on 9800 XT, as is the case here - however as part of the on-chip mobile technology utilisation, R420 features an on-die thermal probe which constantly monitors the temperature and different fan speed steppings are set in the BIOS to different temperature ranges, hence the fan will run at lower speeds when the chip is running cool and higher speeds when its hotter.</bq>
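Out of curiosity, here's roughly what that kind of BIOS-stepped fan control amounts to: a table of temperature ranges, each with its own fan speed, consulted against the thermal probe's reading. The thresholds and duty cycles below are invented for illustration (they are not ATI's actual values), and it's plain C rather than anything resembling the real BIOS code.
<pre>
/* Illustrative sketch only: map an on-die temperature reading to a fan
 * duty cycle using BIOS-style steppings. The thresholds and percentages
 * are made up for illustration; they are not ATI's actual values. */
#include <stdio.h>

struct fan_step {
    int max_temp_c;   /* upper bound of this temperature range */
    int duty_percent; /* fan speed used for that range */
};

static const struct fan_step steps[] = {
    { 50, 30 },   /* running cool: spin the fan down, keep it quiet */
    { 65, 50 },
    { 80, 75 },
    { 999, 100 }, /* running hot: full speed */
};

static int fan_duty_for_temp(int temp_c)
{
    for (size_t i = 0; i < sizeof(steps) / sizeof(steps[0]); i++) {
        if (temp_c <= steps[i].max_temp_c)
            return steps[i].duty_percent;
    }
    return 100;
}

int main(void)
{
    int probe = 58; /* pretend reading from the on-die thermal probe */
    printf("GPU at %d C -> fan at %d%%\n", probe, fan_duty_for_temp(probe));
    return 0;
}
</pre>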
The high-end version uses about <iq>76W to 65W</iq> on average, whereas the lower-end version (which is still ungodly powerful compared to most of our cards) uses about <iq>58W and 49W</iq> under the same conditions. The review itself contains an unbelievable amount of detail relating to how the card works with all of the aforementioned acronyms and buzzwords. It's fast. It runs UT2004 at 1600x1200 with 4xAA (Anti-aliasing) at about 50 FPS.
<h>NVidia GeForce 6800Ultra</h>
<img class="frame" align="left" href="{att_link}10817474486qLMOmeutS_1_14_l.jpg" src="{att_thumb}10817474486qLMOmeutS_1_14_l.jpg">NVidia has a new card too and it's a monster. <a href="http://www.hardocp.com/article.html?art=NjEx" source="[H]ard|OCP">GeForce 6800Ultra Preview</a>. As you can see on the left, the installed part is so huge, it completely overlaps one whole PCI slot, making it unusable. In fact, the article notes that <iq>[i]t is our suggestion that you give any video card as much room to breathe as possible.</iq>, meaning you should probably leave one more open, meaning you've got a powerhouse card that takes up 3 slots.
The last card they came out with was quickly dubbed the <iq>GeForceFX 5800Ultra dustbuster</iq> because of the unbelievable noise it made. This one is, apparently, quieter:
<bq>When you first start your computer before the driver loads the fan will run at top speed, and the noise from that is noticeable. It is loud, but not as loud as the 5800Ultra. However, once you install your driver and the card is running in Windows the fan runs at a lower RPM. ... In normal operational mode the fan is no louder than any normal 80mm case fan. It is definitely not discernable with it inside your case with the case covers on.</bq>
I suspect my tolerance for noise isn't as high as the reviewer's, but we may just have to take his word for it. <iq>[N]ot discernable [sic]</iq> is quite subjective, I think, especially when you're wearing headphones.
As with the previous review (in which this card is actually used in comparisons), there are charts and benchmarks all showing that this card is really, really fast. Again, there are image quality comparisons, this time to the Radeon 9800 series (the precursor to the X800 XT), in which the 6800Ultra comes up short. NVidia seems to have the same image-quality problem that 3dfx's Voodoo cards had before that company went out of business.
<h>Shader Models and shamming</h>
As noted in another article, <a href="http://www.hardocp.com/article.html?art=NjA5" source="[H]ard|OCP">Shader Model 3.0</a>, NVidia has released screenshots in which they claim to <iq>show the advantages of using Shader Model 3.0 in a game</iq>. The screenshots are <i>very</i> convincing, showing the image quality that their card is capable of. However, they compare SM1.1 to SM3.0. ATI's offerings support SM2.0 but not SM3.0, so comparing against SM1.1 rather than SM2.0 makes the difference look far more dramatic than it really is. Hence NVidia's screenshots.
Unfortunately for NVidia, while SM3.0 does offer advantages over SM2.0, there are no games on the horizon (within the next year) that will take advantage of it. As the article says: <iq>It is our opinion that SM2.0 is technology we are likely to see have the greatest quality impact on our gaming experiences through this next year</iq>. Even worse, though, are the statements made by the lead developer of Far Cry, the game that NVidia used to make their screenshots. It seems that, while Far Cry takes full advantage of SM2.0 features to look as good as it does,
<bq>In current engine there are no visible difference between PS2.0 and PS3.0. ... In current generation engine quality of PS3.0 is almost the same as PS2.0. PS3.0 is used for performance optimization purposes.</bq>
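As far as I can tell, the <iq>performance optimization</iq> he's talking about presumably comes from things like SM3.0's longer shaders and dynamic branching in pixel shaders, which let a renderer skip work for pixels that don't need it or fold several passes into one, rather than draw anything new. Here's a rough sketch of the branching idea in plain C (not real shader code, and the lighting math is a placeholder, not anything from Far Cry):
<pre>
/* Rough sketch of why SM3.0 reads as a performance feature rather than a
 * quality feature: with dynamic branching, a per-pixel shader can bail out
 * early for pixels a light doesn't reach, instead of computing the full
 * lighting result and then masking it away. Plain C standing in for shader
 * code; the lighting math is a placeholder. */
#include <stdio.h>

struct pixel { float dist_to_light; float n_dot_l; };

static float expensive_lighting(const struct pixel *p)
{
    /* stand-in for many instructions of per-pixel lighting */
    return p->n_dot_l / (1.0f + p->dist_to_light * p->dist_to_light);
}

/* "SM2.0 style": always pay for the lighting, then mask the result. */
static float shade_always(const struct pixel *p, float light_radius)
{
    float lit = expensive_lighting(p);
    float mask = (p->dist_to_light <= light_radius) ? 1.0f : 0.0f;
    return lit * mask;
}

/* "SM3.0 style": a dynamic branch skips the expensive path entirely. */
static float shade_with_branch(const struct pixel *p, float light_radius)
{
    if (p->dist_to_light > light_radius)
        return 0.0f; /* early out: same result, far less work */
    return expensive_lighting(p);
}

int main(void)
{
    struct pixel p = { 12.0f, 0.8f };
    printf("always: %f  branch: %f\n",
           shade_always(&p, 8.0f), shade_with_branch(&p, 8.0f));
    return 0;
}
</pre>
Both paths put the same value on screen; the branch just avoids doing the work, which squares with the developer's claim above.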
Looks like NVidia is so concerned about image quality that they're already starting with their fake benchmarks and false press releases again. At least some things never change.
<h>Which one?</h>
<img src="{att_thumb}1083564189888Adk70te_4_4_l.jpg" href="{att_link}1083564189888Adk70te_4_4_l.jpg" class="frame" align="right"><a href="http://www.hardocp.com/article.html?art=NjEx" source="[H]ard|OCP">Radeon X800XT-PE and X800Pro Review</a> offers benchmarks which also compare to the latest from NVidia, including image quality comparisons. To the right, you see a striking example of the shortcuts Nvidia has always taken in order to get better performance. In this case, ATI's card offers both superior quality and performance.
The [H]ard|OCP review's emphasis on quality is evidenced by the chart they include, showing the highest <iq>IQ [Image Quality] settings that we found playable on our test system.</iq>
<img align="center" class="frame" src="{att_link}1083564189888Adk70te_10_1.png">
So it looks like all of the toughest, most photorealistic, graphics-intensive games available today are playable with super-high anti-aliasing enabled and all visual options maxed out. The top ATI part manages all this at 1600x1200 for all but one of these games. The economy ATI card and the NVidia card do it all at 1280x1024. Unbelievable. Methinks John Carmack has, once again, predicted and timed the graphics card market just right.
Here's looking forward to Half-Life 2 and Doom III.
Finally, here's a roundup of the info I'm keeping in mind for my next card:
<ul>
NVidia's card takes up at least two slots (1 AGP, 1 PCI) and possibly three; ATI's fits in like a normal card.
NVidia has a bad reputation for noise and power consumption.
ATI makes prettier graphics.
ATI is faster.
ATI's anti-aliasing is better and faster.
Both sets of drivers are probably quite stable.
Both cards cost $499.
Both have a cheaper variant that is also ridiculously fast.
Both will probably be cheaper in 6 months.
</ul>