You've probably heard the saying: if it looks like computer graphics, then it's bad computer graphics. Well, those shots definitely don't look like computer graphics. In particular, I like the top of the car in the black & white airport image---how the glare from the windshield and roof merges into a single reflected color. (I'm curious, do you ever find yourself looking at real objects now with the same critical eye you use to examine a rendering, and think they sometimes look fake? Maybe it's just me...)
No, we do the same. Especially since we are dealing with all kinds of people: designers, engineers, marketing people, photographers. Every day I learn something new about cars, and this makes it a real challenge. What is missing here is real tonemapping, or a real 16-bit display like the one from BrightSide; the eye does very funny things when it comes to lighting.
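The renderer's actual tonemapping isn't described here, but for readers unfamiliar with the idea, a minimal sketch of a global operator in the spirit of Reinhard's classic curve shows what "real tonemapping" has to do: compress an unbounded HDR luminance range into the displayable [0, 1) range while leaving dark values nearly untouched. This is purely illustrative, not the method used by the interviewee.

```python
# Toy global tone mapping curve (Reinhard-style), an illustrative sketch only.
# It compresses unbounded HDR luminance into [0, 1) for a standard display.

def reinhard(luminance):
    """Map an HDR luminance value >= 0 into the displayable range [0, 1)."""
    return luminance / (1.0 + luminance)

# Dark values pass through almost linearly; bright highlights compress hard.
print(reinhard(0.1))    # ~0.0909 (nearly linear)
print(reinhard(100.0))  # ~0.9901 (heavily compressed)
```

A true HDR display such as BrightSide's sidesteps much of this compression, which is why the interviewee mentions it as an alternative.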
Now, I understand you are a commercial outfit and can't freely give away your secrets, but I'd like to turn your own question back at you: "What's your idea on dealing with the noise?" I don't see any noise in the images you posted, so I'm curious as to why you asked. How many rays per pixel? What kind of render times? I'm just wondering how far away I am from professional-grade applications.
You are right, there are of course several things I cannot disclose for commercial reasons, but there are also some things I cannot disclose for technical reasons (sampling is really not my field of expertise). The render time is 3 minutes for the image at 2560x1600 pixels, including antialiasing, on an 8-core machine.
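The figures quoted above allow a rough back-of-envelope throughput estimate. The samples-per-pixel count is not stated anywhere in the interview, so the value below is a pure assumption, and the estimate counts only primary rays, ignoring secondary bounces:

```python
# Back-of-envelope ray throughput from the quoted figures:
# 2560x1600 pixels rendered in 3 minutes on an 8-core machine.
width, height = 2560, 1600
seconds = 3 * 60
cores = 8
assumed_spp = 16  # samples per pixel: NOT stated in the interview, pure assumption

pixels = width * height                    # 4,096,000 pixels
rays = pixels * assumed_spp                # primary rays only, no secondary bounces
rays_per_sec = rays / seconds
rays_per_sec_per_core = rays_per_sec / cores

print(f"{pixels:,} pixels, ~{rays_per_sec:,.0f} rays/s total, "
      f"~{rays_per_sec_per_core:,.0f} rays/s per core")
```

Even under these crude assumptions, the per-core figure gives a feel for how the numbers compare to a hobbyist tracer.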
Also, are you running Windows on the 32p machine?
Yes, Windows 2003 Server, everything out of the box.
Do you get 6 fps with the same image quality you showed? If so, that's absolutely incredible. How much is hardware (GL), and to what degree is it ray traced?
See the comments about speed above. Rendering is done 95% on the CPU and 5% on the GPU, but this may be subject to change.
With all the buzz, it seems as if real-time ray tracing is the new programmable shaders. Your comment about the Tesla is true; however, I've heard it's been difficult to get kd-tree and similar acceleration structures to run on the GPU because of the SIMD model and the memory access patterns. CUDA is supposed to have a small cache size (32K or something like that), and currently GPU-based ray tracers are said to be no faster than a multicore CPU. However, NVIDIA did just hire Peter Shirley...
Your (or anyone's) thoughts?
GPU programming is of course much harder than CPU programming, for two reasons. The first is that the hardware imposes some real limitations, like bad sequential memory access as well as shader clusters that are wider than is usable. The second is that the development tools, or the lack of them, make it really hard.
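The "shader clusters wider than usable" problem can be illustrated with a toy model, my own sketch rather than anything from the interview: lanes in a SIMD group execute in lock-step, so every group runs as long as its slowest lane. The traversal depths below are invented numbers standing in for per-ray kd-tree traversal steps:

```python
# Toy model of SIMD divergence: all lanes in a group of `width` run in
# lock-step, so each group costs max(depths in group) cycles on EVERY lane.

def simd_cost(depths, width):
    """Total lane-cycles consumed when the work items run in lock-step groups."""
    total = 0
    for i in range(0, len(depths), width):
        group = depths[i:i + width]
        total += max(group) * width  # every lane pays for the slowest one
    return total

# Coherent rays (similar traversal depths) waste few cycles on a wide cluster:
coherent = [10, 11, 10, 12] * 8      # 32 rays, depths close together
# Incoherent rays (wildly different depths) waste many:
incoherent = [2, 40, 5, 33] * 8      # 32 rays, depths far apart

for name, depths in [("coherent", coherent), ("incoherent", incoherent)]:
    useful = sum(depths)                       # cycles of actual work
    cost = simd_cost(depths, width=32)         # cycles actually consumed
    print(f"{name}: utilization {useful / cost:.2f}")
```

The same total useful work costs far more lane-cycles once rays diverge, which is one reason incoherent secondary rays map poorly onto wide SIMD hardware, while a CPU core simply traverses each ray independently.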
What NVIDIA has bought with Peter Shirley I don't know; judging from the photos on the web page, I see some basic ray tracing functionality like reflection and refraction, and that's all. But we will see in the future what kind of product appears on the market. Their goal is of course hybrid rendering, something they also want to achieve with the acquisition of mental images. I am very curious how all this will turn out. In the meantime, we will continue the development of our software in a small team and not concentrate on all this marketing hype.