Photomatching Issue
-
I do seem to remember having been able to render photomatched scenes correctly before. But I could be making that up... I tend to do that...
-
Sounds like this may be a problem with V-Ray?
SU does not use 2-point perspective on Photomatch BTW, and it does not distort the view/photo.
The actual problem with Photomatch is that it does not undistort the photos.
All lenses/cameras have some distortion, mainly radial: usually barrel distortion on wide-angle lenses and the opposite (pincushion) on teles.
And all sensors have some shift/offset (i.e. no sensor is placed perfectly in the center), both point-and-shoots and dSLRs, even the best and most expensive dSLRs. My high-end point-and-shoot cameras actually have their sensors a bit better centered than my dSLR.
Panorama stitching software like PTGui, which I use a lot, models these distortions with 3 parameters for radial distortion and 2 for shift (plus shear, which I believe is mainly for scanned photos). Those parameters are used to undistort the photos before stitching.
Those parameters are also calculated and applied by photogrammetry software like Tgi3D PhotoScan. The photos are undistorted when exported to the SU file, and the result is that very high accuracy can be achieved, because straight edges appear straight in the photos too, instead of curved as they may have been in the originals.
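To give an idea of what such a correction looks like, here is a minimal Python sketch of a generic radial-plus-shift model. The formula and the coefficients are purely illustrative; PTGui and Tgi3D PhotoScan each use their own exact formulas and sign conventions.

```python
import numpy as np

def remap_points(xy, a, b, c, d, e, width, height):
    """Apply a generic radial polynomial about a shifted optical centre.

    Illustrative only: stitching/photogrammetry tools fit roughly this kind
    of model (3 radial coefficients a, b, c plus a 2-parameter centre shift
    d, e), but their exact formulas and conventions differ per application.
    """
    scale = min(width, height) / 2.0                 # normalise the radius
    cx, cy = width / 2.0 + d, height / 2.0 + e       # shifted optical centre
    x = (xy[:, 0] - cx) / scale
    y = (xy[:, 1] - cy) / scale

    r = np.hypot(x, y)
    # Radial scaling factor; the constant term (1 - a - b - c) makes the
    # factor equal exactly 1 at r = 1.
    factor = a * r**3 + b * r**2 + c * r + (1.0 - a - b - c)
    return np.column_stack([x * factor * scale + cx,
                            y * factor * scale + cy])

# Example: two pixel coordinates in a 1920x1080 photo, small made-up coefficients.
pts = np.array([[100.0, 200.0], [1500.0, 900.0]])
print(remap_points(pts, a=0.01, b=-0.02, c=0.0, d=3.0, e=-1.5,
                   width=1920, height=1080))
```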
When using Photomatch with ordinary photos you will often find that edges that should be straight are actually curved in the photo, like vertical walls/corners. It is difficult to insert new objects into a heavily distorted photo; the extreme case is a fisheye photo.
Many 3D programs, like Lightwave, have "real" cameras that actually distort the rendered output to fit the distorted photo backgrounds. PhotoScan and the like do the opposite and undistort the photo backgrounds instead, which makes them easy to model over/use in SU.
That said, Photomatch is a great tool as long as you don't have heavily distorted photos, and you know 100% that the red/green perspective helper lines are placed on horizontal lines that are exactly 90 degrees to each other. Very often in a city the corner buildings of a block are not at 90 degrees, but follow the streets...
With PhotoScan you'll have no such problems with either distorted photos or non-90-degree corners.
-
I've had similar issues in the past and was never able to make Photomatch work with V-Ray. If it wasn't the perspective having issues, it was the textures going haywire. I eventually gave up and did something similar to the workflow that Broomstick is describing: get it close in SketchUp and then photoshop it the rest of the way. It would be nice in the future to see a revamp of the current Photomatch tool in SU... or maybe Tgi3D could take some of the simpler setup features of PhotoScan and implement a cheap alternative photomatch plugin.
-
I would pay good money for a photomatching tool in SketchUp similar to that of 3ds Max, Bonzai3d etc.: pick points in the model, then the corresponding points on the background photo, and have the camera calculated. None of this fiddling-with-handles nonsense.
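That "pick corresponding points and have the camera calculated" workflow is essentially camera resectioning (the PnP problem). As a rough illustration of the idea only, not of any particular product, here is a sketch using OpenCV, with made-up point coordinates, a guessed focal length, and lens distortion ignored:

```python
import numpy as np
import cv2

# 3D model points (e.g. building corners, in model units) and their
# hand-picked pixel locations on the background photo. These values
# are placeholders purely for illustration.
object_pts = np.array([[0, 0, 0], [10, 0, 0], [10, 0, 6],
                       [0, 0, 6], [0, 8, 0], [10, 8, 0]], dtype=np.float64)
image_pts = np.array([[412, 980], [1510, 1015], [1495, 300],
                      [430, 340], [260, 1100], [1650, 1150]], dtype=np.float64)

# Rough intrinsics: focal length guessed from the photo width, principal
# point at the image centre, distortion ignored for simplicity.
w, h = 1920, 1280
f = 0.85 * w
K = np.array([[f, 0, w / 2], [0, f, h / 2], [0, 0, 1]], dtype=np.float64)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
if ok:
    R, _ = cv2.Rodrigues(rvec)
    camera_position = (-R.T @ tvec).ravel()   # camera centre in model coordinates
    print("estimated camera position:", camera_position)
```

A real tool would also refine the focal length and distortion coefficients instead of assuming them, but the basic solve is the same.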
-
Tom, is this a plugin idea? :-)
-
Not something I have the capabilities of doing. Beyond me.
-
I'm not sure if this is a solution to the problem you are talking about here, but I've solved my scaling problem during photomatch renders by changing the Zoom Factor (V-Ray Options -> Camera -> Zoom Factor). In my scene 0.4 was the correct factor to get the desired view.
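For what it's worth, a value like 0.4 can be sanity-checked with a little arithmetic if you assume (and this is only an assumption for illustration, not a description of how V-Ray defines the setting) that the zoom factor simply scales the effective focal length:

```python
import math

def effective_fov(fov_deg, zoom):
    # Assumes the zoom factor scales the effective focal length;
    # this is an illustrative assumption, not V-Ray documentation.
    return math.degrees(2 * math.atan(math.tan(math.radians(fov_deg) / 2) / zoom))

print(effective_fov(35.0, 0.4))   # a 35-degree FOV at zoom 0.4 -> about 76.5 degrees
```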
-
If the problem is with 2-point perspective, render the project in regular 3D perspective, then import the image into Photoshop and do your 3-point to 2-point perspective conversion there. Focal length and FOV are not the critical factors. What you have to worry about is having the virtual camera at the same distance, angle, and height as the physical camera. If these are identical, the building and background will match.
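Photoshop's perspective and lens-correction tools do this interactively. For anyone who prefers to script it, here is a rough sketch of the same idea in Python with OpenCV: pick the top and bottom of two vertical edges that converge in the render and warp them upright. The filename and all pixel coordinates below are placeholders you would replace with values from your own render.

```python
import numpy as np
import cv2

img = cv2.imread("render.png")   # replace with your exported render

src = np.float32([[620, 150], [1300, 160],     # tops of left/right vertical edges
                  [540, 1050], [1380, 1040]])  # bottoms of the same edges
# Move each top directly above its bottom so the edges become truly vertical.
dst = np.float32([[540, 150], [1380, 160],
                  [540, 1050], [1380, 1040]])

H = cv2.getPerspectiveTransform(src, dst)
out = cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))
cv2.imwrite("render_2pt.png", out)
```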
Also, the design of point-and-shoots is optimized around manufacturing and a small form factor. The optical center of the lens may be fudged a bit to make room for a battery, circuit board, or lens motor.
-
It's all about the photo you are matching. Sometimes I have very little trouble putting my rendering into a photo using Photomatch, and sometimes it does not work at all because of the way SketchUp takes those match lines and creates the scene tab camera view. Sometimes it's so distorted that when you just zoom ever so slightly (the trick to getting V-Ray to render the match), it zooms way out and the perspective vanishing points are reset. It's just one of those things you need to tread carefully around, I guess.
-
@valerostudio said:
Sometimes it's so distorted that when you just zoom ever so slightly (the trick to getting V-Ray to render the match), it zooms way out and the perspective vanishing points are reset. It's just one of those things you need to tread carefully around, I guess.
I can relate to that... In my last job I was photomatching some presumably cropped photos of a model into which we had to insert our 3D. One of these views was so distorted that as soon as I did an orbit, the entire view would rotate around the camera-target axis, rendering that view useless...
I had to manually adjust the perspective in SketchUp that time; no amount of photomatching would help me...
-
Check this out guys: http://www.youtube.com/watch?v=MK7HAgONdaU&hd=1
Maybe it's not as perfect as the Photomatch feature, but this one is much easier and I hope it could help... Regards,
-
@valerostudio said:
It's all about the photo you are matching. Sometimes I have very little trouble putting my rendering into a photo using Photomatch, and sometimes it does not work at all because of the way SketchUp takes those match lines and creates the scene tab camera view. Sometimes it's so distorted that when you just zoom ever so slightly (the trick to getting V-Ray to render the match), it zooms way out and the perspective vanishing points are reset. It's just one of those things you need to tread carefully around, I guess.
Upon reading your post more carefully, perhaps I understand the problem a little better. You say, "putting my rendering into a photo."
Photomatch is designed to allow you to model an existing scene from a photograph. Now if you design a building from scratch, you cannot drop that building into a photo without matching the point of origin and the vanishing points that would exist if the building were already in the photo.
To do this without a lot of painful trial and error you will need to know exactly where you expect the front corner of your building to touch the ground in the photo. You need to know the precise distance that point was from the optical center of your lens at the time you took the picture. You also need to know how high that optical center was when the photo was taken.
This can be done very reliably. The question is how much prep work on site are you willing to do at the time of the site photograph? Would you like me to step you through the process and make it into a tutorial for everyone else?
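While waiting for that tutorial, here is a minimal sketch of the geometry involved: place a pinhole camera at a known height and distance, aim it, and project the building's ground corner to see where it should land in the photo. The simple no-roll camera model and all of the numbers below are assumptions for illustration only.

```python
import numpy as np

def project_point(point_world, cam_pos, look_at, f_px, width, height):
    """Project a 3D point through a simple pinhole camera (no roll, no
    lens distortion). Purely illustrative; units are metres and pixels."""
    # Build a camera basis looking from cam_pos toward look_at, with Z up.
    forward = look_at - cam_pos
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, np.array([0.0, 0.0, 1.0]))
    right /= np.linalg.norm(right)
    up = np.cross(right, forward)

    rel = point_world - cam_pos
    x_cam = rel @ right
    y_cam = rel @ up
    z_cam = rel @ forward            # depth along the view direction
    u = width / 2 + f_px * x_cam / z_cam
    v = height / 2 - f_px * y_cam / z_cam   # v grows downward, like pixel rows
    return u, v

# Example: camera 1.6 m above the ground, 25 m back from the building corner,
# aimed at a point 5 m up that corner. All numbers are made up.
corner_ground = np.array([0.0, 0.0, 0.0])
cam = np.array([0.0, -25.0, 1.6])
target = np.array([0.0, 0.0, 5.0])
print(project_point(corner_ground, cam, target, f_px=1400.0, width=1920, height=1280))
```

Comparing where that corner actually sits in the photo with where the projection says it should sit is one way to check that the virtual camera's distance, height, and aim match the physical camera.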
-
Has anyone had any success in figuring out how to fix this issue with V-Ray and SU 2015? I have used Photomatch to set up multiple camera views.