CPU vs GPU Discussion
-
I once again find myself feeling rather dumb, and watching the Mythbusters guys shoot paintballs doesn't really explain a lot about the looming CPU vs GPU wars. So far, what I've read is that a GPU (Graphics Processing Unit) is the graphics accelerator on your computer's graphics card, allowing it to do more complex calculations, particularly tuned to computer graphics, in much less time than the CPU alone could.
The CPU (Central Processing Unit), meanwhile, is the typical unit within your computer that does the grunt work of all the calculations. I have also read that a CPU can contain a GPU, and there seems to be some argument about which is better when taking price, speed, and overall usage into account.
This also seems to be a war between Intel and NVIDIA. (True?)
Then we get into 32 vs 64 bit.
As I understand it, the bit count is really a limitation on the amount of data a computer can keep in its "working memory". The higher the bit count, the quicker the access to this information, as it sits within the machine's virtual memory.
I guess this is a précis of my reading.
So I would appreciate your thoughts on what this means in terms of preparing myself for the next generation of computing. First of all, in relation to the work I do: I understand it is supposed to greatly reduce rendering time, but is it more aimed at animations, or will it be so substantially better in the Arch Vis field as to justify the expense?
And I guess the next question I would have is: Will we be moving more toward rendering in architectural visualization in the future? Will clients be demanding this? Will they be willing to put out the extra expense for this?
And will the faster rendering times, and newer software that will become available mean that our time spent on these projects will actually be less (once we have mastered the applications)?
I appreciate your thoughts. -
GPU renderers are not that different from CPU renderers. Available (or should I say announced) solutions are more or less based on path tracing and usable only with CUDA. Path tracing is a good old technique, combined with direct lighting.
Would a client notice if your work is made with a GPU renderer? Definitely not.
There is plenty of marketing hype around rendering, and GPU rendering is part of that. Just think about your workflow: how large a part of your time do you spend on the actual rendering of an image? Animation creators will certainly like GPU rendering, as it should cut hardware costs (compared to render farms).
I think that scene and material setup are the parts of the rendering workflow in Arch Vis that will most benefit from GPU rendering. My bet is that it won't take long (after OpenCL is mature enough) before many renderers offer some sort of GPU-accelerated engine. Whether it will be used for final rendering or simply in the preview, I don't know, but it will be just an ordinary part of a renderer (like OpenGL acceleration is in SU). -
Dale,
I think it's a bit too early to tell. Right now I favour a high-end GPU and dedicated memory on a graphics card, along with a multi-core CPU and lots of system memory. It's better to go 64-bit for sure. My understanding of 64-bit is that you can think of the superhighways in your processor which take bits in and send bits out: the more "lanes" you have in your highway, the faster you can get stuff into the core and the faster it can come out. But you need software built for this architecture to take advantage of it. Software designed for 64-bit systems can allocate bits to multiple cores more efficiently than 32-bit software, as I understand it.

In my experience with mapping and GIS software, expectations for better and better products always, ALWAYS exceeded reality. Yes, software becomes better, and the hardware to run everything is extremely powerful, but consumers of your products often don't understand what it takes to learn and effectively use all of the software and hardware.
They will want more, and likely they won't want to pay a premium for higher-end work unless you can prove to them that there is extra value in it. If they think your products are the norm for the industry, they won't be willing to pony up the cash.
-
Thanks for the input. But of course I have more questions. CUDA is an NVIDIA format, is it not? So does that mean other video card developers are developing other formats?
And as for my understanding of path tracing, am I correct in my assumption that it follows the photon path from the camera to the light source, vs. photon tracing (mapping?) which does the opposite and traces from light source to camera?
What makes one better than the other? Or are they? And am I misinformed that path tracing allows for better shading?

@notareal said:
I think that scene and material setup are the parts of the rendering workflow in Arch Vis that will most benefit from GPU rendering.
"
Is this because its speed will allow you to essentially "preview" higher quality lighting, material, and scene setups before committing to the final render?

@nuclearmoose said:
Software designed for 64-bit systems can allocate bits to multiple cores more efficiently than 32-bit software, as I understand it.

I have heard that the new Intel chip (the i7, I think) will be able to allocate resources in a way that essentially overrides software like SketchUp that doesn't support multiple cores. Would having a 64-bit system mean anything for large poly count models in SketchUp?
-
@dale said:
@notareal said:
I think that scene and material setup are the parts of the rendering workflow in Arch Vis that will most benefit from GPU rendering.
"
Is this because its speed will allow you to essentially "preview" higher quality lighting, material, and scene setups before committing to the final render?

Exactly!
@dale said:
@nuclearmoose said:
Software designed for 64-bit systems can allocate bits to multiple cores more efficiently than 32-bit software, as I understand it.

I have heard that the new Intel chip (the i7, I think) will be able to allocate resources in a way that essentially overrides software like SketchUp that doesn't support multiple cores. Would having a 64-bit system mean anything for large poly count models in SketchUp?
The i7 won't suddenly make SU multi-threaded. But the i7 will "overclock" its core when running a single-threaded program like SU, so one can say that SU will benefit from the i7.
Running a 32-bit program on a 64-bit system won't usually give added benefit to that particular program, but naturally (if you have more RAM) you can run more 32-bit programs on a 64-bit OS than on a 32-bit system. A single 32-bit program is still limited in usable memory, just as in a 32-bit OS. To get the most out of a 64-bit system, the program must be 64-bit too. So if you run only SU on your system, then there is no difference... until SU is offered as 64-bit. But for a new i7 system I would not consider any OS other than 64-bit.
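To put rough numbers on that memory ceiling (a back-of-the-envelope sketch; the exact per-process figure varies by OS):

```python
# A 32-bit pointer can address at most 2**32 bytes, however much RAM
# is installed; a 64-bit pointer raises that ceiling astronomically.
GiB = 2**30

print(2**32 // GiB)   # 4 -> a 32-bit address space tops out at 4 GiB
print(2**64 // GiB)   # 17179869184 -> a 64-bit space could address 16 EiB

# In practice a single 32-bit process typically gets only 2-3 GiB of
# that 4 GiB, because the OS reserves part of the address space.
```
-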
It's not that simple to say that path tracing would offer a better result than photon mapping. But sure, path tracing is far more "user friendly"; photon mapping usually requires some scene-based tuning. I would not say that GPU rendering will be limited to path tracing... it just seems to be the simplest to implement and so is offered first.
nVidia has CUDA. OpenCL is open... ATI supports it, but nVidia has also released OpenCL drivers. So for me, in the long run, OpenCL sounds more interesting. I don't know if there will be a significant performance difference between the two.
-
@dale said:
Thanks for the input. But of course I have more questions. CUDA is an NVIDIA format, is it not? So does that mean other video card developers are developing other formats?
And as for my understanding of path tracing, am I correct in my assumption that it follows the photon path from the camera to the light source, vs. photon tracing (mapping?) which does the opposite and traces from light source to camera?
What makes one better than the other? Or are they? And am I misinformed that path tracing allows for better shading?

CUDA is indeed an NVIDIA format; OpenCL is the other alternative I know of, but it's not quite as mature as CUDA at the moment.
As you say, path tracing follows the photon from the camera to the light source. I haven't heard of photon tracing, but the major advantage of path tracing is that it's pretty efficient, as by definition every path that's calculated will be part of the final image.
-
With regards to why path tracing is used for GPU-based renderers (at the moment), it's to do with the fact that each path can be calculated independently of the others, so each thread on the GPU can run separately from the others and does not need to wait for results from other threads to continue running.
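Here's a minimal sketch of that independence (plain Python, with a CPU worker pool standing in for GPU threads; the scene and the radiance values are made up purely for illustration):

```python
import random
from multiprocessing import Pool

WIDTH, HEIGHT, SAMPLES = 64, 64, 16

def trace_path(rng):
    # Stand-in for a real path tracer: follow one random path from the
    # camera and return its radiance. A real version would intersect
    # rays with scene geometry and sample materials.
    return rng.random()

def render_pixel(coords):
    px, py = coords
    rng = random.Random(py * WIDTH + px)   # per-pixel RNG, no shared state
    # Nothing here depends on any other pixel's result, which is exactly
    # why the work maps so well onto thousands of GPU threads.
    return sum(trace_path(rng) for _ in range(SAMPLES)) / SAMPLES

if __name__ == "__main__":
    pixels = [(x, y) for y in range(HEIGHT) for x in range(WIDTH)]
    with Pool() as pool:   # worker pool standing in for the GPU's threads
        image = pool.map(render_pixel, pixels)
    print(f"rendered {len(image)} pixels, each computed independently")
```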
-
@remus said:
With regards to why path tracing is used for GPU-based renderers (at the moment), it's to do with the fact that each path can be calculated independently of the others, so each thread on the GPU can run separately from the others and does not need to wait for results from other threads to continue running.

Are any rendering programs equipped to cut the calculation time involved in rendering by essentially saving a base photon path trace, and only altering what is necessary in a scene if you, for instance, alter a material?
-
Not in the way you're thinking. The problem is that materials with complex properties (SSS, specularity etc.) can send paths all over the place, interfering in lots of different places, so if you change the material but keep the paths, the paths may well be wrong, leading to an incorrect render.
Edit: a good example of this would be a glass ball. If you render this you will get lots of pretty caustics, but if you then went to re-render the ball with a diffuse material, the caustics shouldn't be there.
Any renderer that has a multi-light style feature is using a similar idea to this, though. Very roughly, it just remembers which paths are from which light source, so when you play with the light intensity/colour it just adjusts the relevant pixels accordingly.
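A rough sketch of the bookkeeping behind that (the per-light buffers here are hypothetical, just to show why re-lighting needs no retracing):

```python
# Hypothetical multi-light data: one radiance buffer per light source,
# accumulated once during the expensive path-tracing pass.
light_buffers = {
    "sun":  [0.8, 0.5, 0.1, 0.0],   # per-pixel radiance from the sun only
    "lamp": [0.1, 0.2, 0.6, 0.3],   # per-pixel radiance from the lamp only
}

def relight(intensities):
    # Changing a light's intensity is now a cheap weighted sum per pixel;
    # no rays or paths are retraced at all.
    n = len(next(iter(light_buffers.values())))
    return [sum(intensities[name] * buf[i] for name, buf in light_buffers.items())
            for i in range(n)]

print(relight({"sun": 1.0, "lamp": 1.0}))   # the original render
print(relight({"sun": 0.2, "lamp": 2.0}))   # dim the sun, crank the lamp
```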
-
@remus said:
With regards to why path tracing is used for GPU-based renderers (at the moment), it's to do with the fact that each path can be calculated independently of the others, so each thread on the GPU can run separately from the others and does not need to wait for results from other threads to continue running.

Same quote, different question.
Maybe this is at the base of what I am trying to understand about how CPU and GPU processing differ. Because the GPU is solely dedicated to graphics performance, it theoretically performs better? So this is something that is only to do with speed, not necessarily quality? In other words, even with a slower CPU-based machine you would still be able to produce a rendering of equal quality to those of a GPU-based machine, if you are willing to let it cook for a longer time period? If this is the case, then about how much longer? -
@notareal said:
nVidia has CUDA. OpenCL is open... ATI supports it, but nVidia has also released OpenCL drivers. So for me, in the long run, OpenCL sounds more interesting. I don't know if there will be a significant performance difference between the two.

So OpenCL (boy, there are a lot of topics in this topic), again from my limited understanding, is a cross-platform language which can take advantage of the power of both the CPU and the GPU. Is this what makes it attractive?
-
The name GPU is a bit of a misnomer in that sense. It's best to just forget the "graphics" part of the name and think of it as lots of small processors. Each individual processor is very slow compared to a CPU, but because there are lots of them, the total amount of calculations that can be performed is greater than that of a CPU, thus giving you the rendering performance we're seeing with the current crop of GPU-based unbiased renderers.
The actual work being done by the GPU and the CPU is the same, so if you leave the CPU chugging away to do as many calculations as the GPU, it will produce a practically identical result to that of the GPU.
-
And time is money, I guess.
-
Perhaps an analogy would help: imagine you need to peel a billion potatoes. You could either use a couple of super-duper-peelomatics that can peel 10,000 potatoes a second each, or you could buy 100,000 cheapo-peels that can peel 10 potatoes a second each.
With 2 super-duper-peelomatics it would take 50,000 seconds to peel all the potatoes, whereas with our cheapo-peels it would only take 1,000 seconds.
So although the throughput of each cheapo-peel is far less than that of the peelomatic, because there are loads of them working together the end result is a lot faster. All rather communist.
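The same arithmetic, spelled out with the numbers from the analogy:

```python
potatoes = 1_000_000_000

# Total throughput is machines x rate, regardless of per-machine speed.
peelomatic_rate = 2 * 10_000      # 2 fast machines at 10,000 potatoes/s each
cheapo_rate     = 100_000 * 10    # 100,000 slow machines at 10 potatoes/s each

print(potatoes / peelomatic_rate)   # 50000.0 seconds
print(potatoes / cheapo_rate)       # 1000.0 seconds
```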
This is a similar idea to the paintball thing Jamie and Adam demonstrated.
-
@dale said:
Maybe this is at the base of what I am trying to understand about how CPU and GPU processing differ... In other words, even with a slower CPU-based machine you would still be able to produce a rendering of equal quality to those of a GPU-based machine, if you are willing to let it cook for a longer time period? If this is the case, then about how much longer?

They won't differ in quality, only in speed, if the same rendering algorithm is used. At the moment far more advanced rendering algorithms are implemented on the CPU, and GPUs are currently somewhat memory limited - but that will change. So in the end, there will only be a speed difference.
CUDA or OpenCL matters mostly to the developer. They won't affect rendering quality. Well... CUDA is only for nVidia, so it might affect the end user if he doesn't have the "right" hardware.
-
@remus said:
The name GPU is a bit of a misnomer in that sense. It's best to just forget the "graphics" part of the name and think of it as lots of small processors. Each individual processor is very slow compared to a CPU, but because there are lots of them, the total amount of calculations that can be performed is greater than that of a CPU, thus giving you the rendering performance we're seeing with the current crop of GPU-based unbiased renderers.
The actual work being done by the GPU and the CPU is the same, so if you leave the CPU chugging away to do as many calculations as the GPU, it will produce a practically identical result to that of the GPU.

Thank you, that even puts the paintball example in a better light.
But you've really done it now, you said the word "unbiased".
I know I'm all over the map here, but it appears best just to follow this discussion wherever it goes. You will have to excuse me; I have been doing a lot of reading up on the subject of rendering, and a little knowledge is a dangerous thing (Dad used to say). So once again, as I understand it, with "biased" rendering the meaning is quite literal, in that the algorithm places a predetermined limitation (bias) on the process, mostly to preserve processing power and time. This then would mean that an unbiased renderer would place no limitations on the paths it takes to solve the equation?
-
Not sure if this was addressed, but 64-bit means more addressable memory, while speed is a function of the bus width? Doesn't dedicated graphics memory still need to be addressed by the computer, and thus the OS? I read elsewhere that if you have 2 GB on a graphics card, that amount of address space is not available to the CPU - a big chunk of the memory in a 32-bit system. Is that right, since the graphics card still needs to run conventional programs? Is the CPU slower, as in clock cycles?
I remember when the computer bus was ISA(?), and special graphics cards were made so that CAD programs could display faster. Isn't this the same thing? As the cost of multi-core comes down, won't this kind of card have less value?
-
Your interpretation of biased vs. unbiased is essentially correct, although to be pedantic even unbiased render engines make a few very basic assumptions. For example, in reality light bounces around for ages, reflecting off loads of things; if this behaviour was modelled exactly, your render would take months to complete and look the same as a render from the current crop of unbiased renderers. To combat this, unbiased renderers have a parameter called "max number of ray bounces" which limits the number of bounces a ray goes through before it's terminated. The higher you set this value, the less biased your render is, although obviously it will take longer to render. There are more examples, but the above is the one I remember.
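A toy sketch of that bounce cap (the scene logic is invented; it just shows where the compromise sits):

```python
import random

MAX_BOUNCES = 8   # the "max number of ray bounces" knob: higher = less biased, slower

def trace(rng, depth=0):
    # Terminating here is the deliberate compromise: real light would keep
    # bouncing, but each bounce contributes less and less to the pixel.
    if depth >= MAX_BOUNCES:
        return 0.0
    # Stand-ins for real intersection and shading code.
    emission = 0.1 if rng.random() < 0.2 else 0.0   # chance this bounce hit a light
    reflectance = 0.5                               # surface absorbs half the light
    # Recurse along a randomly scattered direction: the "path" in path tracing.
    return emission + reflectance * trace(rng, depth + 1)

rng = random.Random(42)
samples = [trace(rng) for _ in range(1000)]
print(sum(samples) / len(samples))   # pixel value: the average over many paths
```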
About asking questions: I'm very happy to help, as it helps me organise all the ideas in my head, which I haven't had to do until now.
-
@dale said:
@remus said:
With regards to why path tracing is used for GPU-based renderers (at the moment), it's to do with the fact that each path can be calculated independently of the others, so each thread on the GPU can run separately from the others and does not need to wait for results from other threads to continue running.
Are any rendering programs equipped to cut the calculation time involved in rendering by essentially saving a base photon path trace, and only altering what is necessary in a scene if you, for instance, alter a material?

I am jumping back to this right now, because I was searching for a bookmark on a thread at the Kerkythea forum which discussed the above. Since I also take written notes, I found them, and in them I made reference to being able to "lock the photon map" by changing settings in the "Irradiance Estimators" - confessing that at the moment this is over my head, but I remember thinking that it could be useful someday. If I find the link I'll post it, out of interest.