What can an affordable supercomputer like Parallella do?
-
Hi,
I was looking at the Kickstarter project "Parallella" by Adapteva.
It aims to deliver a $99 supercomputer. Since the capabilities of SketchUp will evolve over time, bringing in new and complex graphic-manipulation and rendering features, I was wondering whether it would be a good idea to get SketchUp running on a supercomputer one day: more triangle-handling capacity, scenes with multiple layers, complex and custom scene lighting, maybe even a SketchUp-community-based animated film, and lifelike rendering of SketchUp models and scenes. I have read that photorealistic rendering and scene conversion takes hours even for a clip that is just a second long. So how about getting SketchUp to run on a $99 supercomputer? This could be great for the open source graphics community.
Only 5 days left to fund $350,000 or more, so please support this project.
http://www.kickstarter.com/projects/adapteva/parallella-a-supercomputer-for-everyone
-
It only has 1GB of RAM, so if you can't upgrade that I wouldn't bother.
-
@liam887 said:
It only has 1GB of RAM, so if you can't upgrade that I wouldn't bother.
You're missing the point. It's a 16- (or 64-) core reconfigurable parallel system with separate memory for each core, plus an ARM Cortex-A9 CPU to control the overall system. Memory is not shared among the cores as in a typical current multi-core machine, so each core gets full access and full performance. Sharing memory, as on your Mac or PC, is a performance bottleneck that requires a lot of clever hardware to work around.
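Here is a toy sketch, in Python, of the model described above: each "core" owns its memory outright, and the only way data moves between cores is an explicit message. This is purely illustrative (the `Core`, `send`, and `step` names are made up; it is not Epiphany code):

```python
import queue

# Toy model of a message-passing machine: each core owns its memory
# outright and cores interact only by sending messages, never by
# reading or writing each other's data.
class Core:
    def __init__(self, name):
        self.name = name
        self.memory = {}            # private to this core; no other core can touch it
        self.inbox = queue.Queue()  # the only way in is a message

    def send(self, other, key, value):
        # Deliver a (key, value) message to another core's inbox.
        other.inbox.put((key, value))

    def step(self):
        # Drain the inbox into this core's private memory.
        while not self.inbox.empty():
            key, value = self.inbox.get()
            self.memory[key] = value

a, b = Core("a"), Core("b")
a.send(b, "partial_sum", 42)
b.step()
print(b.memory)  # {'partial_sum': 42}
```

Note that core `a` never touches `b.memory` directly; the design forces all sharing through explicit messages, which is the property that removes the shared-memory bottleneck.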
Massively parallel systems will require a different approach to programming in order to get the benefits that ought to be available. 128 cores might not result in overall performance anywhere near 128 times that of a traditional CPU, but the total system cost could be much less than before. You have to spend time and cycles communicating between cores, for example. The software is quite different, and the industry is really going to need to make massive changes to cope. It's all very exciting.
-
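The sub-linear scaling tim describes can be made concrete with Amdahl's law, which bounds speedup by the fraction of a program that cannot be parallelized. A minimal sketch (the `amdahl_speedup` helper is illustrative, not from any Parallella toolchain):

```python
# Amdahl's law: overall speedup is limited by the serial fraction of a
# program, no matter how many cores you throw at it.
def amdahl_speedup(serial_fraction, cores):
    """Ideal speedup when serial_fraction of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even with only 5% serial work, 128 cores give nowhere near 128x:
for n in (16, 64, 128):
    print(n, round(amdahl_speedup(0.05, n), 1))
# 16  -> 9.1
# 64  -> 15.4
# 128 -> 17.4
```

And this is the optimistic bound; it ignores the inter-core communication cost mentioned above, which eats further into the speedup.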
@tim said:
@liam887 said:
It only has 1GB of RAM, so if you can't upgrade that I wouldn't bother.
You're missing the point. It's a 16- (or 64-) core reconfigurable parallel system with separate memory for each core, plus an ARM Cortex-A9 CPU to control the overall system. Memory is not shared among the cores as in a typical current multi-core machine, so each core gets full access and full performance. Sharing memory, as on your Mac or PC, is a performance bottleneck that requires a lot of clever hardware to work around.
Massively parallel systems will require a different approach to programming in order to get the benefits that ought to be available. 128 cores might not result in overall performance anywhere near 128 times that of a traditional CPU, but the total system cost could be much less than before. You have to spend time and cycles communicating between cores, for example. The software is quite different, and the industry is really going to need to make massive changes to cope. It's all very exciting.
Oh no, I completely understand, but what I was getting at is what you said above already: the software won't utilize those cores, so it's pointless.
-
@liam887 said:
Oh no, I completely understand, but what I was getting at is what you said above already: the software won't utilize those cores, so it's pointless.
We'd have to agree on a definition of pointless, I think. Right now, for running anything like SU, yes, it would not be very useful. In the future it (the general case of massively parallel systems rather than this particular board) is almost certainly essential, which I think makes it have a big, shiny, flashing, sparkly point. If one's interest is in building exciting new computational doohickeys for the future, this project would be like catnip. If one's interest is in using current tools to design and build the things SU is good for, then at best the project would be a "hmm, cool idea, wonder if it will ever work; now what the blazes did I do to get that mess from subdivide-and-smooth?".
-
I note that the project got fully funded with money to spare. It will be interesting to see what, if anything, comes of it!
-
@tim said:
@liam887 said:
Oh no, I completely understand, but what I was getting at is what you said above already: the software won't utilize those cores, so it's pointless.
We'd have to agree on a definition of pointless, I think. Right now, for running anything like SU, yes, it would not be very useful. In the future it (the general case of massively parallel systems rather than this particular board) is almost certainly essential, which I think makes it have a big, shiny, flashing, sparkly point. If one's interest is in building exciting new computational doohickeys for the future, this project would be like catnip. If one's interest is in using current tools to design and build the things SU is good for, then at best the project would be a "hmm, cool idea, wonder if it will ever work; now what the blazes did I do to get that mess from subdivide-and-smooth?".
Do you think this is a new idea?
Massively parallel systems have been studied for years. For some types of task they do have an advantage; for others, where the problem is more of a serial issue, they do not, and can in fact be even slower than a single-core system. You can do an internet search and find where tests have been run on simple tasks to show time-to-complete comparisons. "Should be", "could be", etc. are just that. In addition, parallel systems are a real headache when it comes to finding bugs, correcting them, and making sure you have not introduced other problems.
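The contrast between the two task shapes can be sketched in a few lines of Python (purely illustrative; the names are made up): a map over independent items splits cleanly across workers, while a recurrence in which each step needs the previous result cannot be split at all.

```python
from concurrent.futures import ThreadPoolExecutor

def f(x):
    return x * x

data = list(range(8))

# Parallel-friendly: every f(x) is independent of the others, so the
# work divides cleanly among however many workers are available.
with ThreadPoolExecutor(max_workers=4) as pool:
    squares = list(pool.map(f, data))

# Serial by nature: each step consumes the previous step's result, so
# extra cores cannot help no matter how many you have.
acc = 0
for x in data:
    acc = f(acc) + x  # a recurrence: step i depends on step i-1

print(squares)  # [0, 1, 4, 9, 16, 25, 36, 49]
print(acc)
```

For tiny tasks like these, the coordination overhead of the pool can easily exceed the work itself, which is why naive benchmarks sometimes show the parallel version losing to a single core.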
Just look at MS and other "high tech" organizations and the bugs they have at release; they basically use the users to test for them, and then bugs take years to fix, if ever. For one of our suppliers of a "simple" system, we forced a regression test that would run 24/7 for days before we would even approve a "simple" software drop.
The basic point I am trying to make is that a lot of good systems engineering needs to go into what is to be done before the design even starts.
-
@mac1 said:
Do you think this is a new idea?
No, not at all. I was programming a 128-CPU Transputing Surface back in 1983, when I was an IBM Research Fellow. I do, however, think it has become effectively a new idea because of the almost total lack of interest from the mainstream software world in the intervening thirty years. Single CPUs got very fast very quickly in the early '80s, and Transputers got lost in the fallout.
Hardly anyone who has graduated with a comp. sci. or related degree in that time has much of a clue about parallelism. It's going to be a big problem, very soon.
-
Not really. You just have not been working with the correct org.