@roger said:
@mitcorb I am just guessing:
In a sense this is somewhat like fingerprint matching. The FBI considers a fingerprint a match if seven points of correspondence can be found in two samples. Similarly, you must manually mark a minimum number of matching points in the two photos. You must also tell the computer the focal length and optical characteristics of your lenses; with digital cameras this is all listed in the EXIF information. The computer then has some information about how the matched points relate to each other in the two photographs, and from this it can reverse-engineer the relative positions of the two cameras.
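For the curious, the "starting point" the EXIF gives the program is the camera intrinsics. A rough sketch in plain Python (an illustration of the general idea, not anything from the actual program) of turning an EXIF focal length in mm into pixel units, which is the form the matching math needs:

```python
def focal_px(focal_mm, sensor_width_mm, image_width_px):
    """Convert a focal length in mm (as found in EXIF) to pixels,
    given the sensor width in mm and the image width in pixels."""
    return focal_mm * image_width_px / sensor_width_mm

# e.g. a 35 mm lens on a full-frame (36 mm wide) sensor, 4000 px wide image
fx = focal_px(35.0, 36.0, 4000)  # ~3888.9 px
```

This is also why a converter that the EXIF writer doesn't know about is survivable: the value only seeds the optimization, which then refines the true focal length from the matched points themselves.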
One important thing it does is calculate the distortion and sensor shift of the lens, and output undistorted images to a SKP project. This is essential for high precision: all lenses have some distortion, and no sensor is positioned exactly on the optical axis of the lens.
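To illustrate what "calculating the distortion" involves: lens distortion is commonly modelled with radial terms (the Brown-Conrady model). A minimal sketch assuming a single k1 coefficient and normalized image coordinates (the standard technique, not the program's actual code):

```python
def distort(x, y, k1):
    """Apply simple radial (barrel or pincushion) distortion
    with a single coefficient k1, in normalized coordinates."""
    r2 = x * x + y * y
    f = 1.0 + k1 * r2
    return x * f, y * f

def undistort(xd, yd, k1, iters=20):
    """Invert the radial distortion by fixed-point iteration
    (converges quickly for realistic, small k1 values)."""
    x, y = xd, yd
    for _ in range(iters):
        r2 = x * x + y * y
        f = 1.0 + k1 * r2
        x, y = xd / f, yd / f
    return x, y
```

The program fits coefficients like k1 (plus the sensor-shift offsets) so that straight edges in the scene come out straight, then writes out images with this correction already applied.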
BTW, it can use photos without EXIF, and I have no problems using a 0.7x converter (which the EXIF writer knows nothing about), as it only uses the EXIF data as a starting point.
@unknownuser said:
Then the program has enough confidence to go back into the photos and begin pattern matching based on shapes, areas of high contrast, and color information, making assumptions that allow it to make reasonable point-matching decisions on its own. From there it is just trigonometry to create a point cloud as the basis for a mesh.
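The "just trigonometry" step is triangulation: each matched point defines a ray from each camera, and the 3D point sits where the two rays (nearly) meet. A self-contained sketch of the standard closest-point/midpoint method in plain Python, with made-up camera positions for the example:

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(c1, d1, c2, d2):
    """Given two rays c1 + s*d1 and c2 + t*d2 (camera center plus
    viewing direction), return the midpoint of their closest approach,
    i.e. the triangulated 3D point."""
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    w = [p - q for p, q in zip(c1, c2)]
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b          # near zero means the rays are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = [p + s * q for p, q in zip(c1, d1)]
    p2 = [p + t * q for p, q in zip(c2, d2)]
    return [(u + v) / 2 for u, v in zip(p1, p2)]

# two hypothetical cameras both looking at the point (1, 2, 5)
pt = triangulate([0, 0, 0], [1, 2, 5], [1, 0, 0], [0, 2, 5])
# pt comes back as [1.0, 2.0, 5.0]
```

Real matches are noisy, so the rays never intersect exactly; the midpoint (or a least-squares version of it) is what ends up in the point cloud.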
The beauty of it is that you do not end up with a huge point cloud mesh (unless that's what you want). You fully control the density of the mesh, and you can have some parts with lots of details and other parts covered by only a few polys. And you can also easily smooth areas or add more details manually and alter the shapes as you like.
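Controlling mesh density comes down to decimating the point cloud before (or after) meshing. One common technique is vertex clustering, sketched here in plain Python purely as an illustration (not the program's algorithm): snap points to a coarse grid and keep one averaged representative per cell, so a larger cell size means fewer polys in that region:

```python
def decimate(points, cell):
    """Vertex clustering: group points by the grid cell (of side
    length `cell`) they fall in, and replace each group with its
    centroid. Larger `cell` -> coarser, lighter mesh input."""
    cells = {}
    for p in points:
        key = tuple(int(c // cell) for c in p)
        cells.setdefault(key, []).append(p)
    return [
        tuple(sum(coord) / len(group) for coord in zip(*group))
        for group in cells.values()
    ]
```

Running different cell sizes over different regions is one way a tool can give you dense detail in some areas and only a few polys elsewhere.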