@andybot said:
That's actually part of my point - with the render engines, all that film/sensor size, lens length, aperture business is fungible. That's why adhering strictly to a 35mm equivalence seems kind of limiting to me. The reason I mention conversions to different sensor formats is that if you are, say, trying to match a photo, it will very much depend on that particular camera setup - for example, the shallow DOF of your Mamiya. It won't work to just use the default 35mm DOF equivalence.
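Just to illustrate the equivalence point with some numbers (this is my own rough sketch - the function names and the 6x7 frame width are my assumptions, not anything from a render engine's API):

```python
# Rough sketch of 35mm-equivalence across sensor formats.
# All names and example values here are illustrative assumptions.

def crop_factor(sensor_width_mm: float, full_frame_width_mm: float = 36.0) -> float:
    """Crop factor relative to a 35mm full-frame sensor (36 mm wide)."""
    return full_frame_width_mm / sensor_width_mm

def equivalent_settings(focal_mm: float, f_number: float, sensor_width_mm: float):
    """35mm-equivalent focal length and f-number giving roughly the
    same field of view and depth of field as the given setup."""
    cf = crop_factor(sensor_width_mm)
    return focal_mm * cf, f_number * cf

# A 110mm f/2.8 on a 6x7 medium-format frame (~56 mm wide):
eq_focal, eq_fnum = equivalent_settings(110, 2.8, 56.0)
# Comes out near 71mm at f/1.8 in full-frame terms - noticeably
# shallower DOF than a 110mm f/2.8 dialed in at default 35mm settings.
```

So if you punch the Mamiya's nominal focal length and aperture into a camera dialog that assumes a 35mm frame, you get a visibly deeper DOF than the photo you're trying to match.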
I think you hit the mark here... and that's one of the main reasons I brought this topic up.
Forget about us archviz artists using Maxwell Render and trying to force it into the operating envelope of a real camera... we rarely do so, and tend to "fake" it to suit our needs. Think of a real photographer getting into CGI in order to expand his offering and do more advanced things in his photography work. He will need a tool that talks the talk he is used to, one that is all about real camera settings: sensor size, lenses, ISO, shutter speed, aperture, shifts and more.
I'm not referring strictly to the SketchUp plugin here... but speaking in general.
Since Maxwell Render was first introduced, all the render engines have added some physically based / real-camera workflow to their feature sets - so there is something to it, a demand from us, the users. I actually use V-Ray most of the time... but with its VRayPhysicalCamera and environment HDRIs. I've also been testing Indigo, Thea and others for their "camera-like" feature sets, and I really wanted to know what artists on this forum think about this.
I'm an archviz artist first, not a photographer, but I do tend to go with a physically based workflow, trying to work the way a photographer does. This usually gets you the most photo-real results. I don't think it's a rule you should follow blindly, but people have grown accustomed to judging designs by looking at photos! Oddly, the real human-eye view is not the view we judge things by.
This is why we call it photo-real.