Match Photo has two separate worlds
-
August,
That is about what I find as well. I don't know if I've even gotten such accuracy with two photos, but I recall it working pretty well. In the end, what is the use for Match Photo? Perhaps you can get it to work for you. For example, if you just want projected textures and both models are in scale, move them together after projecting?
Bertier,
That's cool. Now you need a tool to place in situ as well!
To do Match Photo without a rectilinear building, perhaps you need some right-angled level lines too. Try yellow mason's line, a line level, and a few stakes.
-
In Match Photo, once you establish some planes that relate to the photo, you can work off the geometry you have. It's best for rectilinear buildings. I wonder, though, whether it is possible to locate just "anything" in the photo. Perhaps it would work if you are able to set all the balls out on a horizontal grid. That grid is then drawn on a horizontal plane in SU, and you extend lines up to where each ball is seen in the photo.
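If it helps, laying out that kind of reference grid of guide points in SU can be scripted. This is only a rough sketch of the idea using the standard Ruby API (the 10x10 extent and one-foot spacing are just example values):
# Rough sketch: drop a grid of guide points on the ground plane to serve as
# the horizontal reference grid described above.
model = Sketchup.active_model
ents = model.active_entities
spacing = 1.feet                      # SketchUp's Length helper for 1 foot
model.start_operation("Guide grid", true)
(0..10).each do |ix|
  (0..10).each do |iy|
    ents.add_cpoint(Geom::Point3d.new(ix * spacing, iy * spacing, 0))
  end
end
model.commit_operation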
Photo-to-photo seems like a lot of back and forth. Might be worth a test, as I gather you are doing this for "fun". There are similar cloud applications and experiments out there.
-
The photos of Bertier's tool remind me of what I have been considering for a later stage of my project.
After I have the foundation and floor joists in place in the model, I want to model the earth between the foundation walls. I think this is a similar problem to Bertier's, except that I will not need his tool because I will have the building foundation to determine horizontal and vertical.
To find matching points on the irregular earth surface, I have thought of adapting a technique from Hollywood: my idea is to cut ping-pong balls in half and place them about one foot apart all across the surface. Ping-pong balls come not only in white but also in bright fluorescent yellow, which should be ideal.
In the photos, no matter what angle they are seen from, the perimeter of the ball will identify exactly where its center is, so I can match those center points from photo to photo.
In Bertier's photos, the yellow flowers in the grass are similar to the ping-pong balls. There are patterns in the flowers that let you match the same flower from photo to photo.
Has anyone done anything like this before? Should I start a new thread to ask the question?
Thanks.
P.S. I'm still working on the original furnace-box problem. I haven't had any time for it the past few days.
-
It is for "fun" in the sense that I am trying new things so that I can learn about them as I experiment.
And it also has a pragmatic purpose. I am trying to model the space under one corner of my house to design exactly how a new furnace can be installed there, what it will need for a mounting base, how it will connect to the existing ducts, where the water and drain pipes are, etc.
A local furnace contractor wants nearly $2000 to do the installation, plus the cost of the furnace. I am hoping that my SU model will allow me to do it myself, or mostly myself, substituting intense planning for hands-on experience with this kind of installation. I already have all the basic construction, electrical, and plumbing skills; I just have not done exactly this before. So in that sense this is a "paying" project. I am imagining that this SU model will be worth as much as $1000 to me.
I am also expecting that this little bit of code from Chris Fulmer will be helpful.
http://forums.sketchucation.com/viewtopic.php?f=323&t=28154&start=15#p245418
This Ruby code draws construction lines through the camera eye point and wherever in the view the mouse is clicked. So I am imagining that I can go through one photo, placing C-lines through each ping-pong ball, then do the same for another photo, and where the lines cross in the model will be my surface points.
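For reference, the idea behind that plugin can be sketched in a few lines of the standard SketchUp Ruby API. This is not Chris Fulmer's actual code (that is at the link above), just a minimal illustration of the same idea:
# Minimal sketch: on each click, add an infinite construction line that runs
# from the camera eye through the clicked point on the screen.
class EyeLineTool
  def onLButtonDown(flags, x, y, view)
    origin, direction = view.pickray(x, y)   # ray from the eye through the pixel
    view.model.entities.add_cline(origin, direction)
  end
end
Sketchup.active_model.select_tool(EyeLineTool.new)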
Of course, that is assuming the lines cross. Given my current problems with Match Photo, I'm being cautious about that idea.
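In case they don't cross exactly (two rays in 3D almost never will), one option would be to take the midpoint of the shortest segment between them. The API's Geom.closest_points does that calculation. A rough sketch, where cline_a and cline_b are placeholders for two construction lines I would have already placed, one from each photo:
# Rough sketch: find where two nearly-intersecting construction lines come
# closest, and drop a guide point at the midpoint of that gap.
line_a = [cline_a.position, cline_a.direction]    # [point, vector] form
line_b = [cline_b.position, cline_b.direction]
p1, p2 = Geom.closest_points(line_a, line_b)
midpoint = Geom.linear_combination(0.5, p1, 0.5, p2)
Sketchup.active_model.entities.add_cpoint(midpoint)
puts "Gap between the two rays: #{p1.distance(p2)}"   # sanity check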
-
@pbacot said:
There are similar cloud applications and experiments out there.
Are you familiar with any specifically? I'm not even sure what terms to google.
-
ShiftN, to straighten the verticals before processing.
-
@unknownuser said:
ShiftN
Cool app. But I think it would mess up Match Photo every bit as badly as cropping does.
If it is just for photo textures, I think it might work very well, but using photos to get 3D positions is a different problem.
Thanks.
-
@thomthom said:
Did you remove lens distortion? (Manual recommends so)
There are so many things that affect the accuracy of PM. I had been reasonably confident about my lens because my Sony camera said it had a Carl Zeiss lens, so I had done no post-processing of the images. I had done some checking, and the lens does a very good job of keeping straight lines straight, even at fairly wide-angle settings.
However...
I measured the big rectangular box and made the model box the exact dimensions. After another hour of fitting the photos to the rectangular model, I have come to two conclusions:
- There are lines that I had been assuming were parallel to the outer edges of the unit which are not exactly parallel. In particular, one corner flashing has a definite bend at one end.
- The camera pixels are not "square". After much fiddling, I have all four photos in very close alignment, except that the overall width of the images is too narrow to fit the model box, by about the same visual amount in each photo.
I had previously thought that non-square pixels would only matter if I rotated the camera to take a "portrait" style photo instead of a "landscape" style photo. But I have realized from this exercise that Match Photo assumes square pixels, and an error of, for example, 3% in the width will produce a different error in the model depending on the angle each face makes with respect to the camera. Adjusting that width to fit will thus produce different proportionate errors in height -- and that is what I had been seeing.
So at this point, it looks like I need to set up an accurate test to find out what post-processing it will take to get square pixels from this camera.
I am hopeful that I will be able to do something simple like expanding the width of each photo by, for example, 3.5%, and then be able to match the photos to each other in Match Photo much more quickly, producing a consistent model instead of compromising between images that will never match.
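If that does turn out to be the fix, the stretch itself should be easy to batch. A sketch of what I have in mind, assuming the mini_magick Ruby gem (with ImageMagick installed) and using 3.5% purely as the example factor:
# Sketch only: stretch each photo's width by a fixed factor, keeping the height.
require "mini_magick"

factor = 1.035                               # example value, to be measured
Dir.glob("photos/*.jpg") do |path|
  img = MiniMagick::Image.open(path)
  new_width = (img.width * factor).round
  img.resize("#{new_width}x#{img.height}!")  # "!" forces the exact pixel size
  img.write(path.sub(".jpg", "_stretched.jpg"))
end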
Thanks for all the help and suggestions.
August
-
@august said:
It is for "fun" in the sense that I am trying new things so that I can learn about them as I experiment.
I share that with you!
-
I found only these notes in the SketchUp manual about pre-processing photos:
- Do not crop photos. Match Photo currently requires that the point you aimed the camera at is located in the center of the image (also called the center of projection). Although it may seem possible to use a cropped image, typically vertical lines will not align well across a cropped image and the results will be unsatisfactory.
- Do not warp photos. Images which have been manually warped using an image processing program, or specialized camera, are not supported by Match Photo.
- Remove barrel distortion or issues where straight lines are bent away from the center of the image. Barrel distortion typically occurs on wide angle lens cameras. Use a third-party product to eliminate barrel distortion from images prior to using them within Match Photo. All cameras have a little bit of this distortion and it is typically worse around the edges of the image.
Point 1 is the most repeated advice about Match Photo.
Point 2, about warped photos, would seem to me to exclude using ShiftN. I can imagine using ShiftN to apply photo textures, but I presume its distortion of the photos will make them unsuitable for Match Photo.
Point 3 is what I assume Thomthom was referring to by "lens distortion". This lens appears to be excellent in that respect. It is extremely hard to find any evidence of barrel distortion.
But, as I reported earlier, I appear to have non-square pixels: the images as displayed are all a bit narrower than the model.
If anyone has any experience with how to accurately test that aspect of a camera lens, I would appreciate any suggestions. I will probably be working out my own tests this coming weekend.
I presume I will need a way to accurately identify the position of the center of the lens while the camera is on a tripod pointed at a flat surface that is exactly perpendicular to the camera's line of sight.
Theory is easy. Construction still awaits a design.
A brute-force approach would be to simply try different values for the adjustment until I find one that seems pretty good. But since it can take up to an hour of fussing and fiddling to reach what I consider my best compromise, that approach is guaranteed to take a few hours.
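For what it's worth, if I can photograph a known square target dead-on, the correction factor should fall straight out of the pixel measurements. A back-of-envelope sketch with made-up numbers:
# Made-up numbers: measure a known square target in the photo, in pixels.
target_w_px = 812.0    # measured width of the square, in pixels
target_h_px = 840.0    # measured height of the square, in pixels
correction = target_h_px / target_w_px
puts "Stretch photo widths by a factor of #{correction.round(4)}"  # ~1.0345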
Again, brilliant ideas are always welcome.
Thanks,
August