Title
Quakecon 2002
Description
<a href="{data}/news/old_attachments/images/carmack_quakecon_2002.jpg"><img class="frame" align="left" src="{data}/news/old_attachments/images/carmack_quakecon_2002_tn.jpg"></a>John Carmack gave a long presentation (almost 3 hours) at Quakecon 2002 which covered the current Doom technology and the future of gaming as he sees it. <a href="http://www.gamespy.com/">Gamespy</a> has the complete coverage in <a href="http://www.gamespy.com/articles/august02/quakecon2002/carmack/">QuakeCon 2002 - John Carmack Speaks</a>. The future of graphics technology involves rendering effects like lens flares and specular highlighting in a more realistic manner; not that they look more realistic necessarily (though they will), but moving more rendering into the 'standard' pipeline instead of handling so many effects as <iq>special-case scenario[s]</iq>.
<bq>He believes that the most significant benefit to this type of advancement is that certain effects will become less of a special-case scenario. When rendering things "the right way," one does not need to worry about covering up for an elaborate fake. The effect actually exists, and can be used in the world as-is, without needing to design a level or area around that effect. </bq>
This includes a <iq>jump to 64-bit color</iq> as well. The 64-bit color is for internal precision when rendering: the calculations are done at full precision, and the result is scaled to the output device (along the spatial or color axes) only at the very end. <iq>[t]here's still a lot of benefit to gain by doing all of the intermediate calculations the way you really should do it.</iq>
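To see what that means in practice, here's a minimal sketch (my illustration, not id's code) of the difference: a pixel's light contributions accumulate in floating point, and only the final result is clamped and quantized to the display's 8-bit range.
<pre>
/* Sketch only: keep intermediate color math at full precision,
 * and quantize to the display's 8-bit range as the very last step. */
typedef struct { float r, g, b; } color_f;          /* high-precision working color */
typedef struct { unsigned char r, g, b; } color_8;  /* 8-bit display color */

static float clamp01(float v) { return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v); }

/* intermediate math stays in floating point, so nothing is lost to rounding */
color_f accumulate_passes(const color_f *passes, int count)
{
    color_f sum = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < count; i++) {
        sum.r += passes[i].r;
        sum.g += passes[i].g;
        sum.b += passes[i].b;
    }
    return sum;     /* values above 1.0 are kept, not clipped, until the end */
}

/* scale to the output device only at the very end */
color_8 to_display(color_f c)
{
    color_8 out;
    out.r = (unsigned char)(clamp01(c.r) * 255.0f);
    out.g = (unsigned char)(clamp01(c.g) * 255.0f);
    out.b = (unsigned char)(clamp01(c.b) * 255.0f);
    return out;
}
</pre>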
Once again, we see that John Carmack is truly an engineer, one who understands that quality is a long-term goal directly affected by short-term decisions. He believes that game engines should render effects as they physically occur in order to provide the best possible model. If that is too slow, those effects become 'special cases' only at the end, to ensure acceptable performance on current hardware. Concessions to performance should be made only after correctness has been ensured.
One gets this feeling even more when he discusses features he'd like to add, like a much better ambient lighting system. <iq>the DOOM 3 engine does contain some basic ambient light sources that will affect all of the bump-maps on the various models without giving a highly directed look to them, but what he feels would be optimal is to affect each bump on a per-pixel level, by having "each bump look up in a cube map."</iq> Unfortunately, this type of change, while essentially finished on the technology side, would require far too much reworking of existing artwork and content. It will have to wait, but you can tell it bothers him to 'settle' for a 'hacked' ambient lighting model, especially since any card after the GeForce1 could render ambient lighting correctly without difficulty. In this case, he (and we) will just have to wait, as he has also said, <iq>for the most part my work on Doom, the significant contribution to it, is done. Set in stone. The renderer decisions are all made.</iq>
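As a rough illustration (mine, not the actual DOOM 3 renderer), the difference between the two ambient models looks something like this: a flat ambient term ignores the surface normal, while the cube-map version lets each pixel's bump-mapped normal pick a direction-dependent ambient color.
<pre>
typedef struct { float x, y, z; } vec3;
typedef struct { float r, g, b; } color_f;

/* hypothetical ambient cube map: one averaged color per axis direction */
typedef struct { color_f face[6]; } ambient_cube;   /* +X,-X,+Y,-Y,+Z,-Z */

/* a flat ambient term ignores the normal entirely... */
color_f ambient_flat(color_f ambient, vec3 normal)
{
    (void)normal;
    return ambient;
}

/* ...whereas "each bump look[s] up in a cube map": the per-pixel,
 * bump-mapped normal selects a direction-dependent ambient color */
color_f ambient_cubemap(const ambient_cube *cube, vec3 n)
{
    float ax = n.x < 0 ? -n.x : n.x;
    float ay = n.y < 0 ? -n.y : n.y;
    float az = n.z < 0 ? -n.z : n.z;
    int face;
    if (ax >= ay && ax >= az) face = n.x > 0 ? 0 : 1;
    else if (ay >= az)        face = n.y > 0 ? 2 : 3;
    else                      face = n.z > 0 ? 4 : 5;
    return cube->face[face];
}
</pre>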
This desire for correctness is a recurrent theme in Carmack's work and his designs. His approach to rendering and graphics is extremely abstracted and takes its goals directly from the real world. He sets his standards incredibly high.
<bq>The way you should be calculating all graphics... the way it ought to be done is: you're basically counting out photons that are, you know, imprinted on a surface. Lights spray out a whole lot of photons, that are collected on surfaces. ... So what we want to do is do all of this calculation the right way</bq>
The Doom III technology is actually based on <iq>GeForce1-level hardware</iq> as the baseline feature set, whereas <iq>id will be basing their next technology on the features just now becoming available with ATI's Radeon 9700 card, and NVidia's upcoming NV30 chipset.</iq> Though Carmack has repeatedly pushed hardware to its limits, he still falls a little short of predicting exactly how far and fast it will grow. These days, different cards support different features, so <iq>[t]his has led him to keeping track of roughly half a dozen separate backends, each optimized for a particular card.</iq>
This is actually a very sound design decision: the rest of the engine gets the flexibility of interfacing with a standard API, while the most performance can still be squeezed out of an individual card. <iq>Carmack has chosen to have all of the basic rendering and most special effects follow one path, with lighting effects being the primary area of divergence.</iq> This means that any card will be supported, but that popular ones with special features will have their own renderer.
<bq>This means that Doom3 does not take complete advantage of every single feature found on every single card on the market, but that it will look very close to the same on a wide variety of cards. Carmack stressed that he prefers to have a title which appears the same on a variety of systems, and requires lower resolutions for increased framerates, than a title which turns off effects to achieve higher framerates.</bq>
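The structure being described might look roughly like this; a toy sketch under my own assumptions, not id's actual code, with one shared rendering path and the lighting pass as the per-card divergence point.
<pre>
#include <stdio.h>

/* Toy sketch: the engine talks to one backend interface; basic rendering
 * follows a single shared path, and only the lighting pass diverges per card. */
typedef struct {
    const char *name;
    void (*draw_light_pass)(int light);        /* the per-card divergence point */
} render_backend;

static void light_generic(int light)           /* lowest-common-denominator path */
{ printf("generic multi-pass lighting for light %d\n", light); }

static void light_fragment_program(int light)  /* hypothetical single-pass path */
{ printf("single-pass fragment-program lighting for light %d\n", light); }

static render_backend backends[] = {
    { "any OpenGL card",             light_generic },
    { "card with fragment programs", light_fragment_program },
};

int main(void)
{
    render_backend *rb = &backends[1];   /* chosen once at startup, based on the card */

    /* ...basic geometry and most effects would follow the shared path here... */
    for (int light = 0; light < 3; light++)
        rb->draw_light_pass(light);      /* ...while lighting goes through the backend */
    return 0;
}
</pre>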
He's already written an OpenGL 2.0-compliant renderer and continues to stress that developers should move to a higher-level shading language rather than specifying effects in card-specific APIs. It's like the <iq>transition from assembly language to C and other languages</iq>: sometimes a human can hand-code better than the compiler can generate assembler, but not many humans know how, and the process is incredibly hard to debug and very error-prone. More often than not, developers simply won't implement an effect because it's too hard. Developers should be able to <iq>specify what we want to have done, and as it needs to, [the card] cut[s] it down into multiple passes for different cases</iq>. That gives the developer the freedom to design effects, while the individual card vendors implement those directives as quickly and as well as their cards allow. It's a very natural progression that has happened many times before in programming. The transition to object-oriented programming was (and still is) similarly accompanied by complaints about performance. Carmack's a visionary and doesn't have such hangups; he realizes what will create the fastest, most maintainable, most extensible code overall:
<bq>I am on the record as saying that the next game engine that I work on, after Doom here, is going to be written in a high level shading language. There's just no doubt about it. There's not going to be custom, per-card assembly specifications going on there. It's going to be in a high-level language, and the drivers are just going to have to deal with it there.</bq>
Video cards will only get more advanced (<iq>Carmack points out that the ATI Radeon 9700 is over a hundred times faster and more powerful than ATI's initial Radeon offering</iq>), and developers should be happy not to have to figure out for themselves how to get the best performance out of them. Developers should be able to focus on the math that makes their environments feel realistic, then specify the commands needed to render them; the card translates those into a scene on the display device. <iq>Put quite simply, the 50-100 passes that Doom3 uses to render a scene will, within a short period of time, seem minimal in comparison to what graphics cards will be capable of.</iq> Considering the videos from Doom, this is fantastic news for us.
As for Carmack? He's getting ready to move back into research (lucky dog). He says, <iq>I'm getting a little tempted now to start peeling off and working on some next-generation technology generations to research some of the things.</iq>