Hi there.
@ecuadorian said:
Thanks for the very informative reply, Dave.
A friend in another forum is asking whether this will be GPU-based. Based on your response, it seems pre-calculations are currently computed only on the CPU, while navigation will mostly use the GPU via OpenGL or DirectX, with the CPU handling real-time reflections and shadow maps. Am I correct?
Yes, this is correct. The CPU computes what we call the LumenRT "Live Cube", and the GPU displays it and lets you navigate around the Live Cube.
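To make the split concrete, here is a minimal sketch of the general pattern (purely illustrative, not our actual code, and the Live Cube format itself isn't public): the CPU bakes lighting into six cube faces, OpenGL uploads them to the GPU once, and the GPU then redraws the view every frame while you move around.

```cpp
#include <GL/glew.h>
#include <vector>

// The CPU has already baked lighting into six square RGBA face images;
// this uploads them once so the GPU can redraw the view every frame.
GLuint uploadPrebakedCube(const std::vector<unsigned char> faces[6], int size)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_CUBE_MAP, tex);
    for (int i = 0; i < 6; ++i) {
        // faces[i] holds size*size RGBA8 texels computed on the CPU.
        glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + i, 0, GL_RGBA8,
                     size, size, 0, GL_RGBA, GL_UNSIGNED_BYTE,
                     faces[i].data());
    }
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    return tex; // sampled by the real-time shaders while you navigate
}
```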
@ecuadorian said:
I'm used to rather long pre-calculation times thanks to LightUp. Usually 35-40 minutes for a house like this one, using a 10 cm mesh. However, Adam Billiard, its creator, has stated in LightUp's forum that he plans to move the pre-calculations from the CPU to the GPU, using it as a "compute resource", "like Octane" (maybe CUDA or OpenCL). When this is implemented, there will be "no waiting", according to him. Do you plan to move LumenRT's pre-calculations to the GPU to reduce waiting times as well?
It's possible. As GPUs and GPU software like CUDA/OpenCL become more powerful and more mainstream, the heavy graphics computing tasks will be offloaded to the GPU. I don't think we are quite there yet unless you own a very high-powered GPU setup such as an NVIDIA Tesla cluster.
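For what it's worth, the kind of thing CUDA/OpenCL would let us do is express the per-sample lighting gather as a GPU kernel instead of a serial CPU loop. Here's a toy CUDA sketch; the names and the lighting model are invented for illustration and this is not LumenRT code:

```cpp
#include <cuda_runtime.h>

// One GPU thread per illumination sample, instead of a serial CPU loop.
__global__ void gatherIrradiance(const float3* samplePos,
                                 const float3* lightPos,
                                 const float3* lightColor,
                                 int numLights,
                                 float3* outIrradiance,
                                 int numSamples)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= numSamples) return;

    float3 sum = make_float3(0.f, 0.f, 0.f);
    for (int l = 0; l < numLights; ++l) {
        float dx = lightPos[l].x - samplePos[i].x;
        float dy = lightPos[l].y - samplePos[i].y;
        float dz = lightPos[l].z - samplePos[i].z;
        float distSq = dx * dx + dy * dy + dz * dz + 1e-6f;
        // Simple inverse-square falloff; a real solver would also do
        // visibility tests and account for surface orientation.
        sum.x += lightColor[l].x / distSq;
        sum.y += lightColor[l].y / distSq;
        sum.z += lightColor[l].z / distSq;
    }
    outIrradiance[i] = sum;
}

// Host-side launch, e.g.:
//   gatherIrradiance<<<(numSamples + 255) / 256, 256>>>(
//       dSamplePos, dLightPos, dLightColor, numLights, dOut, numSamples);
```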
@ecuadorian said:
Also, I guess a "LiveCube" (I'd prefer "pre-lit model" as "LiveCube" sounds like a simple panoramic still image) will reside in the GPU RAM, so the more complex the model and the finer the illumination mesh, the more GPU memory you'll need to be able to display it, right?
Yes, this is true. Most midrange graphics cards, such as an NVIDIA GeForce 460 with 512 MB of graphics memory or more, can handle fairly large models.
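As a back-of-envelope example (the numbers and memory layout are assumed; real usage depends on the engine's internal formats), you can estimate the footprint like this:

```cpp
#include <cstdio>

int main()
{
    const long long vertices   = 500000;              // ~500K-poly model
    const long long vertexSize = 32;                  // pos + normal + UV, bytes
    const long long lmSize     = 2048;                // lightmap resolution
    const long long lmBytes    = lmSize * lmSize * 4; // RGBA8 texels

    const long long total = vertices * vertexSize + lmBytes;
    std::printf("~%.1f MB of GPU memory\n", total / (1024.0 * 1024.0));
    // Prints ~31.3 MB: comfortable on a 512 MB card, but a finer
    // illumination mesh or more/larger lightmaps scales this up quickly.
    return 0;
}
```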
@ecuadorian said:
The need to pre-calculate and store this information will also mean there's a limit to the size and detail you can attain. If you want a bigger size, you'll need to lower the detail of the illumination solution.
Not really; it will just take longer to render (i.e. pre-compute). If you have a midrange GPU card, it will handle most typical models just fine. If you include all kinds of fine modeling detail, with polygon counts above 500K or so, you may see some performance degradation on the GPU side.
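To give a feel for why it's time rather than a hard limit: the number of illumination samples grows quadratically as the mesh spacing shrinks, so finer detail mostly means a longer pre-compute. A toy calculation with assumed numbers:

```cpp
#include <cstdio>

int main()
{
    const double area = 400.0; // m^2 of lit surfaces in a small house model
    const double spacings[] = {0.20, 0.10, 0.05}; // mesh spacing in meters
    for (double s : spacings) {
        double samples = area / (s * s); // samples grow as 1/spacing^2
        std::printf("%3.0f cm mesh -> %7.0f samples\n", s * 100.0, samples);
    }
    // Halving the spacing quadruples the sample count, so expect very
    // roughly 4x the pre-compute time; it finishes, it isn't a hard cap.
    return 0;
}
```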
@ecuadorian said:
I also see no mention about a dynamic sun/sky/cloud system, object animation capabilities, or even an object library. These limitations would mean LumenRT will be directly competing with LightUp ($150) instead of Lumion (750€), so I hope you guys price it sensibly.
The first release of LumenRT will not include dynamic skies. Camera animation is done by exporting the SketchUp scene animation into LumenRT, but there is no separate object animation system.