Photomatching Issue
-
Have you moved the axis/origin to a lower corner of your model, and aligned the red/green axes before loading the PhotoMatch image? That really helps when aligning things. If your model does not have 90 degree corners and vertical/parallel walls you could make a temporary cube to help when aligning.
And yes, it is perfectly possible to achieve the same settings for the camera as the one assigned by Photomatch. It's just a position, rotation and focal length (fov) setting.
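Since the post says it's "just a position, rotation and focal length (fov) setting," here is a minimal sketch of the focal-length/FOV relationship, assuming a full-frame 36 mm sensor width (change `sensor_mm` for a crop sensor; the function names are my own, not from SketchUp or Vray):

```python
import math

def fov_from_focal(focal_mm, sensor_mm=36.0):
    """Horizontal field of view (degrees) for a given focal length."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

def focal_from_fov(fov_deg, sensor_mm=36.0):
    """Inverse: focal length (mm) that produces the given horizontal FOV."""
    return sensor_mm / (2 * math.tan(math.radians(fov_deg) / 2))

print(round(fov_from_focal(50), 1))  # ~39.6 degrees for a 50 mm lens
```

Matching the render camera to the PhotoMatch camera is then just copying these three things: position, rotation, and this FOV value.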
When rendering you should use the same height as the height of the photo; then it's much easier to make a composite afterwards, and probably also with Vray rendering (which I don't use). -
Understood, I have never tried to render a photomatch, and my render application will not render a watermark.
Addendum: I was able to build a simple model using photomatch, save its view, render it, and insert the photomatch image into the render as a background image. Guess that is where you are having problems; hope some vray expert can point you in the right direction.
-
Hi there,
I read this thread a while ago, and since I had some photomatched models, gave it a try at rendering. Turns out, valerostudio is right: I used the vray tools plugins by thom thom to get the image aspect right, but when I try to render, the point of view seems to render from a bit farther away.
Eventually I came up with this workflow: you divide the original photo width by its height to get the image aspect. This value you put in the image aspect in the output section of vray options.
Next you make the photomatch.
At this point, set up lights and everything for the render, and make sure the zoom factor of the physical camera is set to 1!
Before you render, export an image of the photomatched model, with the background.
Now, shoot the render, making sure you save the alpha channel too.
In Photoshop, import both the exported 2D image and the render, and align by eye the two images (it's not so difficult).
Finally, get rid of the photo with the model on top, and put the original photo as a background. Job done!
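The aspect step at the start of this workflow is just width divided by height; a minimal sketch (the 1600 x 1067 photo size is a made-up example):

```python
def image_aspect(width_px, height_px):
    """Aspect value to enter in vray's output section: width / height."""
    return width_px / height_px

# Example: a hypothetical 1600 x 1067 photo
print(round(image_aspect(1600, 1067), 4))  # 1.4995
```

Whatever value comes out goes straight into the image aspect field, and the render height should match the photo height as suggested earlier in the thread.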
-
I found myself with problems last time I tried to render a photomatch. I think there might be a bug about...
-
The optical center of the lens needs to be in perfect alignment with the optical center of the sensor, but with point and shoots this is not always the case. I am not sure why the camera designers do this; you would have to ask them. To test, set up your camera on a tripod parallel to a block wall. Put an "X" on the wall at the height of the center of your lens. Shift the camera left and right to align the center of the lens to the "X" (do not rotate the camera, just slide it left/right, and don't use the viewfinder or LCD screen - you will have to do it by eye and with a tape measure). Once things are as close as you can get them, take a photo. If that "X" is not dead center in the image, you and PhotoMatch will never be happy together.
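The check at the end of this test can be expressed numerically: measure where the "X" lands in the photo (in pixels) and compare it to the exact image center. This is a sketch with made-up names and a made-up tolerance, not part of any tool mentioned here:

```python
def principal_point_offset(img_w, img_h, x_px, y_px, tol_px=5):
    """How far the photographed 'X' lands from the exact image center.

    (x_px, y_px) is where the X appears in the test photo; tol_px is an
    arbitrary 'close enough' tolerance in pixels.
    """
    dx = x_px - img_w / 2
    dy = y_px - img_h / 2
    centered = abs(dx) <= tol_px and abs(dy) <= tol_px
    return dx, dy, centered

print(principal_point_offset(4000, 3000, 2034, 1502))  # (34.0, 2.0, False)
```

A large offset here means the lens/sensor alignment is off, which is exactly the situation where PhotoMatch struggles.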
If you are using the surface modeler ($1,000 plugin) from TGI3D I don't think you will have a problem since it knows how to force a correlation between matching points on multiple photos taken from different angles. It can also cancel out lens aberrations if you do proper calibration on the camera or cameras.
PhotoMatch is a sketchy tool and not in the good sense of sketchy.
-
With vray, I believe it has to do with it not being able to render 2pt perspective. The photomatch distorts the view, so if you type in the field of view (select the zoom icon and re-type the number in the bottom right input window) and the view pops out of 2pt, then you can render that view in vray and it will match. This is a similar bug to not being able to render parallel projection. The Vray camera can only generate physically accurate camera views. Have you tried the lens shift option?
-
I seem to remember having been able to render photomatched scenes correctly before. But I could be making that up... I tend to do that...
-
Sounds like this may be a problem with vRay?
SU does not use 2-point perspective on Photomatch BTW, and it does not distort the view/photo.
It is actually a problem with Photomatch that it does not undistort the photos.
All lenses/cameras have some distortion, mainly radial, usually barrel on wide angle lenses and the opposite on teles.
And all sensors have some shift/offset (ie no sensors are placed perfectly in the center), both p&s and dSLRs, even the best and most expensive dSLRs. My high-end p&s cameras actually have their sensors a bit better centered than my dSLR.
Panorama stitching software like PTgui, which I use a lot, calculates these distortions with 3 parameters for radial distortions and 2 for shift (+ shear, which I believe is mainly for scanned photos). Those parameters are used for undistorting the photos before stitching.
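The "3 parameters for radial distortions and 2 for shift" mentioned here correspond to a Brown-style radial model. Below is a simplified sketch of one forward correction step on normalized coordinates; real software like PTgui solves for the parameters and iterates, so this is only an illustration of the model's shape, with illustrative names:

```python
def undistort_radial(x, y, k1, k2, k3, cx=0.0, cy=0.0):
    """One Brown-style radial correction step on normalized coordinates.

    (cx, cy) is the shift/offset of the optical center; k1..k3 are the
    three radial parameters. A single forward step, not the full
    iterative inverse that stitching software performs.
    """
    xs, ys = x - cx, y - cy
    r2 = xs * xs + ys * ys
    scale = 1 + k1 * r2 + k2 * r2 ** 2 + k3 * r2 ** 3
    return cx + xs * scale, cy + ys * scale

# With all parameters zero the point is unchanged:
print(undistort_radial(0.3, 0.4, 0, 0, 0))  # (0.3, 0.4)
```

Negative k1 bends points inward (correcting barrel distortion, typical of wide angles); positive k1 pushes them outward (correcting pincushion, typical of teles), which matches the barrel/pincushion behavior described above.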
Those parameters are also calculated and applied by photogrammetry software like Tgi3D PhotoScan. The photos are then undistorted when exported to the SU file, and the result is that very high accuracy can be achieved, because straight edges will appear as straight on the photos too, instead of curved as they may have been on the originals.
When using Photomatch with normal photos you will often find that edges that should have been straight are actually curved on the photos, like vertical walls/corners. It is difficult to insert new objects into a photo that is heavily distorted - the ultimate being a fisheye photo.
Many 3D programs, like Lightwave, have "real" cameras that actually distort the rendered output to fit the distorted photo backgrounds. PhotoScan etc. do the opposite and undistort the photo backgrounds instead, which makes them easy to model against/use in SU.
That said, Photomatch is a great tool as long as you don't have heavily distorted photos, and know 100% that the red/green perspective helper lines are placed on horizontal lines that are exactly 90 degrees to each other. Very often in a city the block corner buildings are not 90 degrees, but follow the streets.
With PhotoScan you'll have no such problems with either distorted photos or non-90-degree corners. -
I've had similar issues in the past and was never able to make photomatch work with Vray. If it wasn't the perspective having issues, it was the textures going haywire. I eventually gave up and did something similar to the workflow that Broomstick is describing: get it close in sketchup and then photoshop it the rest of the way. It would be nice in the future to see some revamp of the current photomatch tool in SU... or maybe TGI3D can take some simple setup features of PhotoScan and implement a cheap alternative photomatch plugin.
-
I would pay good money for a photomatching tool in Sketchup which is similar to that of 3ds max, Bonzai3d etc - pick points in model - then on the background photo - have it calculated. None of this fiddling with handles nonsense.
-
Tom, Is this a plugin idea?-)
-
Not something I have the capabilities of doing. Beyond me.
-
I'm not sure if this is a solution to the problem you are talking about here but I've solved my scaling problem during photo match renders by changing the Zoom Factor (Vray Options -> Camera -> Zoom Factor). In my scene 0.4 was the correct factor to get the desired view.
-
If the problem is with 2 point perspective, render the project in 3D. Then import the image into PhotoShop and do your 3pt to 2pt perspective there. Focal length and FOV are not the critical factors. What you have to worry about is having the virtual camera at the same distance, angle, and height as the physical camera. If these are identical then building and background will match.
Also the design of point and shoots is optimized around manufacturing and small form factor. The optical center of the lens may be fudged a bit to make room for a battery, circuit board or lens motor.
-
It's all about the photo you are matching. Sometimes I have very little issue putting my rendering into a photo using photomatch, and sometimes it does not work at all because of the way SketchUp takes those match lines and creates the scene tab camera view. Sometimes it's so distorted that when you just hit zoom ever so slightly (the trick to getting Vray to render the match), it zooms way out and the perspective vanishing points are reset. It's just one of those things you need to tread carefully around, I guess.
-
@valerostudio said:
Sometimes it's so distorted that when you just hit zoom ever so slightly (the trick to getting Vray to render the match), it zooms way out and the perspective vanishing points are reset. It's just one of those things you need to tread carefully around, I guess.
I can relate to that... In my last job I was photomatching some presumably cropped photos of a model where we had to insert our 3D. One of these views was so distorted that as soon as I did an orbit, the entire view would rotate along the camera-target axis, rendering that view useless...
I had to manually adjust the perspective in sketchup that time, no amount of photomatching would help me...
-
Check this out guys: http://www.youtube.com/watch?v=MK7HAgONdaU&hd=1
Maybe it's not as perfect as the photomatch feature, but this one is much easier and I hope it could help... Regards,
-
@valerostudio said:
It's all about the photo you are matching. Sometimes I have very little issue putting my rendering into a photo using photomatch, and sometimes it does not work at all because of the way SketchUp takes those match lines and creates the scene tab camera view. Sometimes it's so distorted that when you just hit zoom ever so slightly (the trick to getting Vray to render the match), it zooms way out and the perspective vanishing points are reset. It's just one of those things you need to tread carefully around, I guess.
Upon reading your post more carefully, perhaps I understand the problem a little better. You say, "putting my rendering into a photo."
Photomatch is designed to allow you to model an existing scene from a photograph. Now if you design a building from scratch, you cannot drop that building into a photo without matching the point of origin and the vanishing points that would exist if the building were already in the photo.
To do this without a lot of painful trial and error you will need to know exactly where you expect the front corner of your building to touch the ground in the photo. You need to know the precise distance that point was from the optical center of your lens at the time you took the picture. You also need to know how high that optical center was when the photo was taken.
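The distance and height this post asks for follow from the pinhole camera model: if you can measure one known real-world length in the photo, the camera distance falls out of similar triangles. A sketch, with all names and the doorway numbers being illustrative assumptions:

```python
def camera_distance(real_height_m, pixel_height, focal_px):
    """Pinhole estimate of camera-to-subject distance.

    focal_px is the focal length expressed in pixels
    (focal_mm / sensor_width_mm * image_width_px).
    """
    return focal_px * real_height_m / pixel_height

# A 3 m doorway spanning 600 px with a 4000 px focal length sits ~20 m away
print(camera_distance(3.0, 600, 4000))  # 20.0
```

This is why noting a reference measurement on site at the time of the photo (as the post suggests) saves so much trial and error later.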
This can be done very reliably. The question is how much prep work on site are you willing to do at the time of the site photograph? Would you like me to step you through the process and make it into a tutorial for everyone else?
-
Has anyone had any success in figuring out how to fix this issue with VRAY and SU 2015? I have used photo match to setup multiple camera views.