CPU vs GPU Discussion
-
So in terms of real-time renders, the race is not yet on between multi-core CPUs and dedicated GPUs? And, with a GPU (to render), if my databases are not big, unless I need to run more programs at once, except for the bus size, 32 bits may (not?) do for a couple more years? (I have an unmentionable reason to ask :-) I wondered why we are stuck at 3 GHz. Don't know Moore's, read about heat, but erroneously thought it had something to do with moving information faster than light.
-
Interesting. I just read up a bit on Moore's law, and for anyone who, like me, doesn't know anything about it: it is an observation made by Gordon Moore, co-founder of Intel, in 1965 that the number of transistors per square inch of integrated circuit had doubled every two years since the integrated circuit was invented. His prediction was that this would continue. (He has since apparently revised it to every 18 months.)
What I found most interesting, besides the heat limitation, is that there appear to be two other limits. Please correct me if I'm wrong.
The way these transistors, and it seems even multiple cores, work is in parallel, so the workload is split in half. However, in one of the articles I read, the author pointed out that most computer operations are sequential, and I think this really applies to rendering algorithms. His example is a spreadsheet: a cell relies on the value of another cell, which in turn relies on the value of another cell.
Using this as an analogy for a photon shot during rendering - the result of the photon's action relies on the surface characteristics of the object it strikes - this quickly becomes sequential, and the calculations become exponentially more complex.
His assertion is that the only way to speed this up is to make the individual calculations faster.
If doing this requires more transistors, and there is currently a limit because of the amount of heat produced, then this really seems to shackle any CPU speed increase.
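To make that first limit concrete, here is a rough sketch in plain Python (the trace_photon and shade_pixel functions and the numbers are made up purely for illustration, not taken from any real renderer):

```python
# Sequential: each bounce depends on the result of the previous one,
# so extra cores don't help - the chain has to be walked in order.
def trace_photon(energy, surfaces):
    for reflectance in surfaces:        # bounce 1 -> bounce 2 -> bounce 3 ...
        energy = energy * reflectance   # each step needs the previous result
    return energy

# Parallel-friendly: every pixel is independent of every other pixel,
# so this work could be split across as many cores as you have.
def shade_pixel(pixel):
    return pixel * 0.5                  # stand-in for a real shading calculation

print(trace_photon(1.0, [0.8, 0.5, 0.9]))           # must run strictly in order
image = [shade_pixel(p) for p in range(1_000_000)]  # each item is independent
```

The first loop is the spreadsheet/photon situation: no matter how many cores are available, step three cannot start until step two has finished.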
The second limit is that in order to take advantage of any increase in computing power, the software must be written in a way that tells the processors which tasks they will handle; without this, the computing power just isn't utilized. -
@dale said:
... does OpenCL seem like a reasonable, or at least short-term, answer? (in the context of rendering anyway) Using the power of both the GPU and CPU, or am I misunderstanding this (which is highly possible).
OpenCL probably has value, mostly because letting people outside the fairly small group that normally works on graphics card driver software actually do things with the GPUs at least offers a chance of some exciting new uses and algorithms appearing. Whether it will last is another matter - I have a suspicion that if and when massively parallel systems become the norm we will see GPUs pretty much disappear, and we'll just use a few (dozen/hundred) of the pool of CPUs to do the rendering work. Then again, maybe some of the brain structure research will lead to us building systems with thousands of CPUs of dozens of different types, each specialised for a different job. Why not imagine processors specialised for sound synthesis (we have DSPs that do some of that now), for polygon-to-pixel conversion, for floating point work, for string manipulation, for data moving, for watching user input signals, and so on. We're used to thinking of a CPU being able to do all of that because it is cheaper with current designs and production technology. It probably won't always be like that.
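For a feel of what "doing things with the GPU" through OpenCL looks like, here is a minimal sketch using the PyOpenCL bindings (assuming an OpenCL driver and the pyopencl package are installed; the kernel just squares a million numbers, nothing render-specific):

```python
import numpy as np
import pyopencl as cl

ctx = cl.create_some_context()        # picks an available GPU (or CPU) device
queue = cl.CommandQueue(ctx)

data = np.random.rand(1_000_000).astype(np.float32)
mf = cl.mem_flags
in_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=data)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, data.nbytes)

# The kernel body runs once per element, spread across the device's cores.
program = cl.Program(ctx, """
__kernel void square(__global const float *a, __global float *out) {
    int gid = get_global_id(0);
    out[gid] = a[gid] * a[gid];
}
""").build()

program.square(queue, data.shape, None, in_buf, out_buf)

result = np.empty_like(data)
cl.enqueue_copy(queue, result, out_buf)   # copy the answer back to the host
```

Most of that is bookkeeping to move data to and from the device; the actual parallel work is the few lines of kernel source.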
-
@dale said:
Interesting. I just read up a bit on Moore's law, and for anyone who, like me, doesn't know anything about it: it is an observation made by Gordon Moore, co-founder of Intel, in 1965 that the number of transistors per square inch of integrated circuit had doubled every two years since the integrated circuit was invented. His prediction was that this would continue. (He has since apparently revised it to every 18 months.)
That's the guy. Interesting how a simple observation and hypothesis that it might continue for a while turns into a law that the whole industry works frantically to keep going. For many years people assumed that it meant that computer speed would double each time. Not necessarily.
@dale said:
What I found most interesting, besides the heat limitation, is that there appear to be two other limits. Please correct me if I'm wrong.
The way these transistors, and it seems even multiple cores, work is in parallel, so the workload is split in half.
Not completely correct, but close enough unless you really want to study computer architecture and implementation technology. Which you probably don't.
@dale said:
However, in one of the articles I read, the author pointed out that most computer operations are sequential, and I think this really applies to rendering algorithms. His example is a spreadsheet: a cell relies on the value of another cell, which in turn relies on the value of another cell.
Using this as an analogy for a photon shot during rendering - the result of the photon's action relies on the surface characteristics of the object it strikes - this quickly becomes sequential, and the calculations become exponentially more complex.
Yup. Most algorithms are sequential, and that's partly innate and partly a result of a long history of single, sequential CPUs. Why would people spend time on parallel algorithms if there are no parallel machines to run them on! It's also damn hard. There are lots of problems with interlocks - I need this result before I can do this bit - just like a very big and complex building project. Imagine planning the build of an entire nation in a single project. Every detail, every purchase, every single nut and bolt and dab of Loctite.
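To put the "software has to hand out the work" point into code: here is a minimal sketch using Python's standard multiprocessing module (shade_row is a made-up stand-in for per-row rendering work; the rows are independent of each other, so they can go to different cores):

```python
from multiprocessing import Pool

def shade_row(row):
    # Stand-in for real per-row rendering work; each row is independent.
    return [(row * 1000 + x) % 255 for x in range(1000)]

if __name__ == "__main__":
    with Pool() as pool:                          # one worker per CPU core by default
        image = pool.map(shade_row, range(1000))  # the program explicitly splits the work
    # A plain `for` loop calling shade_row would produce the same image,
    # but it would leave every core except one sitting idle.
```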
-
So to kind of summarize, from a rendering/ArchViz perspective: we will be relying on what the individual software takes advantage of, or the "trend" its developers buy into - CPU, GPU, or a hybrid. This will probably depend on the development time and money the hardware developers throw at their systems, as the software developers won't expend a lot of their time and energy until the hardware is mature enough to warrant it.
Since, in terms of rendering, the real difference is the amount of time it takes to solve the algorithm (both will eventually arrive at the same solution), the real advantage of a GPU-based system would come from being able to preview and rework your scenes and lighting for better results in a timely manner. (Perhaps I'm simplifying here, as I have a feeling there may be other advantages if you are animating.)
And as Tim pointed out, since the visualization community represents only a small percentage of computer use, development won't be spurred on by the revenues we provide.
So there is really no way to anticipate how to prepare for the future? -
@dale said:
So there is really no way to anticipate how to prepare for the future?
Perfect straight line - there is an old aphorism relating to this that was coined by an old friend of mine by the name of Alan Kay (look him up - major, major contributor to the modern world of computing):
"The best way to predict the future is to create it"
And possibly the biggest single driver of the hardware world these days is gaming. Anything that will make games run faster will get researched and developed and sold cheaply. And porn. The entire point of building out a global broadband system was to distribute porn; it's just an inevitable result of any new technology. As soon as paint was invented, porn. Telegraph, porn. Cameras - porn.
So I think that apps like SU will benefit massively from these two drivers, since games need ways to create the worlds and characters in them, and porn needs... bandwidth (which means storage and memory and speed generally) and display quality and hands-free UI interaction.
-
My sister sent me this this morning - the next generation keyboard. I guess she's wrong about the music.
-
Looked up Alan Kay. I guess, among many other things, he's one of the Xerox team we can thank for the Mac (and, in a sense, because of the success of that graphical interface, Windows also). I ran across something, I believe it was in "Wired", on his idea that potentially coding can be reduced from thousands of lines to a few and achieve the same results, and his interest in computers that learn. Not only is he a significant contributor to the computing field but a real visionary. You run in a good crowd. (Think he'd be interested in a little Ruby work on the side?)
-
@dale said:
Looked up Alan Kay. I guess, among many other things, he's one of the Xerox team we can thank for the Mac (and, in a sense, because of the success of that graphical interface, Windows also).
Oh, and so much more. The whole field of Object Oriented Programming, for example (yes, I know about Simula and Lisp, but Smalltalk was the first fully OOP system and is still the only one good enough to be worth the effort of critiquing), and a non-trivial part of the entire portable computing idiom (his 1969 PhD thesis covered the design, building, programming and documenting of a portable personal computer. In 1969).
@dale said:
I ran across something, I believe it was in "Wired", on his idea that potentially coding can be reduced from thousands of lines to a few and achieve the same results, and his interest in computers that learn. Not only is he a significant contributor to the computing field but a real visionary.
That's part of the power of OOP, when done right. You only program the parts that are different to what you already have code for.
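A toy sketch of that idea in Python (the class and method names are invented for illustration): the subclass only writes the one behaviour that differs and inherits everything else.

```python
class Renderer:
    def render(self, scene):
        # Shared machinery, written once and reused by every subclass.
        return [self.shade(obj) for obj in scene]

    def shade(self, obj):
        return f"flat-shaded {obj}"

class RayTracedRenderer(Renderer):
    # Only the part that differs is programmed; render() is inherited unchanged.
    def shade(self, obj):
        return f"ray-traced {obj}"

print(RayTracedRenderer().render(["teapot", "chair"]))
```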
@dale said:
You run in a good crowd. (Think he'd be interested in a little Ruby work on the side?)
Don't think so; Ruby has its place, but it isn't in the same league as Smalltalk. Not even close. Take a look at http://www.vpri.org/index.html for what he's up to right now.
-
@dale said:
You run in a good crowd. (Think he'd be interested in a little Ruby work on the side?)
Don't think so; Ruby has its place, but it isn't in the same league as Smalltalk. Not even close. Take a look at http://www.vpri.org/index.html for what he's up to right now.
This man really uses his talent well.
Interesting group of cohorts. If I'm not mistaken, the first name on the Board of Advisors, John Perry Barlow, is an old Timothy Leary apostle who ended up as a lyricist for the Grateful Dead. -
A really inspirational read: this paper written by Alan Kay, entitled "The Real Computer Revolution Hasn't Happened Yet".
Yes it is a little off topic, but what the hell.