Rowan Karrer

Nvidia RTX and raytracing


If you're a fellow nerd, you'd have to have returned to nature, living off nothing but love and water, not to have heard about Nvidia's new 20xx series GPUs. It's hardly breaking news, then, that the new cards will sport dedicated raytracing architecture. The question that comes to mind for any 3D artist is what effect, if any, this specialised hardware will have on raytracing in a production renderer like VRay, Redshift, FStorm, or Octane.

The demonstrations released to date are video games using a rasteriser as the primary visibility engine, with a raytracer processing secondary rays to compute the direct reflection contribution. This is a great idea for video games, but it's not how we do things in TVC, VFX and visualisation, and it raises the question of how the hardware will handle indirect diffuse bounces and refractions. Can existing renderers use this new architecture to accelerate their current bi-directional raytraced computations? Or are the rendering architectures too far removed to be complementary?
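To make the distinction concrete, here's a toy sketch in Python of what that hybrid approach looks like. Every function here is a hypothetical stand-in, not any real engine's API; the point is simply where the rasteriser stops and the raytracer starts.

```python
from dataclasses import dataclass

# Toy hybrid shading loop: the rasteriser answers "what does the camera see?"
# and the raytracer is only asked for the secondary (reflection) rays.
# All of these are illustrative stand-ins, not a real renderer's API.

@dataclass
class Hit:
    base_colour: tuple        # direct shading from the rasterised G-buffer
    reflectivity: float       # 0 = fully diffuse, 1 = perfect mirror

def rasterise_primary(pixel):
    # Stand-in for a G-buffer lookup produced by the rasteriser.
    return Hit(base_colour=(0.4, 0.4, 0.5), reflectivity=0.3)

def trace_reflection(pixel):
    # Stand-in for the RT-core trace of a single mirror bounce.
    return (0.9, 0.8, 0.7)

def shade(pixel):
    hit = rasterise_primary(pixel)     # primary visibility: rasterised
    refl = trace_reflection(pixel)     # secondary ray: raytraced
    # Blend direct shading with the raytraced reflection. Note what's absent:
    # no indirect diffuse bounces and no refraction chains.
    return tuple(b * (1 - hit.reflectivity) + r * hit.reflectivity
                 for b, r in zip(hit.base_colour, refl))

print(shade((64, 64)))
```

The gap between that and a production path tracer, which has to follow diffuse bounces and refractions to arbitrary depth, is exactly what makes the question above worth asking.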

Vlado from Chaos Group has gone a fair way towards answering these questions in the blog post below, which I think will be interesting and valuable to any 3D artist even vaguely curious about GPU rendering and what this new hardware may mean for the future.

https://www.chaosgroup.com/blog/what-does-the-new-nvidia-rtx-hardware-mean-for-ray-tracing-gpu-rendering-v-ray

The short answer is 'it depends', which honestly comes as no surprise to me. I have always been skeptical of GPU rendering demonstrations, not because they aren't impressive, but because years of experience and disappointment with the technology have consistently taught me that the rigorous demands of production rendering often negate the potential speed advantages of GPU architecture. To date, by far the most impressive feature of GPU renderers is their ability to rapidly process direct illumination, depth of field, chromatic aberration, and lens distortion.

Perhaps more enticing, however, is that the 2080 and 2080 Ti can now use the NVLink technology previously reserved for top-tier professional compute cards costing in excess of $9,000 AUD. This effectively allows two linked gaming cards to pool roughly 22GB of VRAM, less of course the few gigabytes the operating system and 3D application consume. That's still not enough for many applications, but it's getting tantalisingly close.
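As a back-of-the-envelope sketch of what that pooling means in practice, assuming two 11GB cards and a few gigabytes of overhead (the overhead figure below is a guess, not a measurement):

```python
# Rough VRAM budget for two NVLinked 2080 Ti cards.
# The overhead figure is an assumption for illustration, not a measured value.
per_card_vram_gb = 11
pooled_vram_gb = 2 * per_card_vram_gb          # ~22 GB with NVLink pooling
os_and_app_overhead_gb = 3                     # display, OS, 3D app (guess)
usable_for_scene_gb = pooled_vram_gb - os_and_app_overhead_gb
print(f"Usable for geometry and textures: ~{usable_for_scene_gb} GB")
```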

I expect GPU rendering will become the norm eventually, but it's not there yet. I expect real-time and offline rendering will one day converge, but we're not there yet either. I cannot ignore my own experience in rendering, which is that for the past 10 years Blinn's law has been unshakable: image fidelity rises in step with computational power, so render times never actually decrease.
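If you want Blinn's law as a crude formula, render time is roughly scene complexity divided by compute power, and in my experience complexity grows to soak up whatever power is available. A toy sketch, with invented growth rates just to show the ratio staying flat:

```python
# Toy illustration of Blinn's law: hardware gets faster, artists immediately
# spend the gains on fidelity, so time per frame stays stubbornly constant.
# The yearly growth factors are made up for illustration.
compute_power = 1.0
scene_complexity = 1.0
for year in range(2008, 2019):
    render_time = scene_complexity / compute_power
    print(f"{year}: render time = {render_time:.2f} (arbitrary units)")
    compute_power *= 1.4       # faster hardware...
    scene_complexity *= 1.4    # ...matched by heavier scenes
```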

My choice has always been to lean towards flexibility and features, as artist time will always be a great deal more valuable than processor time. In 2018, at least, that means CPU rendering resolutely remains my go-to tool for 3D rendering, but it's exciting that AMD and Nvidia are shaking things up, and who knows where we'll be this time next year?

