Tgi3D SU PhotoScan's new Image Based Surface Modeler
-
@gaieus said:
It's still magic to me (i.e. I can hardly believe it).
Really cool plugin indeed (and yes, I can hardly wait for those archaeology things to try it out!)
Do you have any good links to tutorials, instructions and methods for doing archaeological excavations and measurements? And/or some examples of the use of surveying equipment and photogrammetry in such contexts?
I would like (maybe) to approach a local archaeologist and hear what he thinks about using such tools as PhotoScan, and eventually offer my services. He is actually the head of the county conservator office (don't know if that means anything in English...)
Looking forward to seeing your work in that area too!
Judging by how much time you spend here answering questions, you can't be out doing much field work? -
What level of precision can you get with PhotoScan? (Maybe I've already asked...)
best regards
/paolo -
@publied said:
What level of precision can you get with PhotoScan? (Maybe I've already asked...)
Very high! It depends on your photos though, and on the length of your reference measure (the longer the better). You can get accuracy at least as good as (or better than) other photogrammetry tools like Imagemodeler (now only available to Autodesk/Max/Maya users) and Photomodeler, which I've been using for years. Once you have a good calibration you have all the power of SU available in addition to the powerful Tgi3D tools, which is a lot better than Imagemodeler and Photomodeler IMO.
I'm definitely not going back to Imagemodeler and Photomodeler. -
Thanks Bjorn!
Paolo,
The Tgi3D calibration tool can indeed provide quite a high level of accuracy. The actual accuracy obtained depends on several parameters, including image resolution, image quality, vantage-point distribution, care in calibration-point placement, etc. With a modern 5+ megapixel camera under good conditions you can achieve better than 1 part in 2000 relative 3D distance accuracy. That translates to around 10 microns on a 2 cm object, which is comparable to a digital caliper, or millimeter accuracy on an object a couple of meters across, which compares to the better laser rangefinders on the market. More importantly, Tgi3D reliably reports the level of accuracy actually obtained.
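The quoted numbers are just a single relative error applied to the object size, so they are easy to sanity-check. A small sketch (the helper below is hypothetical, not part of Tgi3D):

```python
# Sanity check of the accuracy figures quoted above:
# relative 3D-distance accuracy of 1 part in 2000.
RELATIVE_ACCURACY = 1 / 2000  # "better than" this under good conditions

def expected_error(object_size_m):
    """Absolute measurement error (in meters) for an object of the given size."""
    return object_size_m * RELATIVE_ACCURACY

print(expected_error(0.02) * 1e6)  # 2 cm object -> about 10 microns
print(expected_error(2.0) * 1e3)   # 2 m object  -> about 1 millimeter
```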
Regards,
-
@gulcanocali said:
around 10 microns on a 2 cm object, which is comparable to a digital caliper, or millimeter accuracy on an object a couple of meters across, which compares to the better laser rangefinders on the market.
What can I say? Gulp!
I'm a bit astonished, and positively amazed. That sounds very good! (And just now I'm reading the review in CatchUp 3.)
-
It will be interesting to see if you have the product priced right. At $200 to $300 a copy I bet you would get 10x the business. But it looks like a great product.
-
@Gulcan:
I am very impressed with this innovation. I am not sure how the process forms the volume of the mesh, but this is a very valuable feature, using SketchUp as the "engine(?)". This product is in my sights, but I am still hampered by outdated hardware. Hopefully, not much longer. I just need to sort out what I need in my next system.
-
@mitcorb I am just guessing:
In a sense this is somewhat like fingerprint matching. The FBI considers a fingerprint a match if you can find seven points of correspondence in two samples. So you must manually define a minimum number of matching points in two photos. You must also define for the computer the focal length and optical characteristics of your lenses. With digital cameras this is all listed in the EXIF information. So the computer now has some information about how matching points relate to each other in the two photographs. From this it can reverse-engineer the relative positions of the two cameras.
Then the program has enough confidence to go back into the photos and begin to do pattern matching based on shapes, areas of high contrast, and color information matching and make assumptions that allow the computer to make reasonable point matching decisions on its own. From there it is just trigonometry to create a point cloud as the basis for a mesh.
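The "just trigonometry" step can be sketched as two-ray triangulation: once the camera positions are known, each matched image point defines a ray from its camera, and the 3D point is estimated where the two rays (nearly) meet. This is an illustrative sketch only, not Tgi3D's actual algorithm; camera centers and ray directions are assumed given:

```python
# Minimal two-ray triangulation sketch (illustrative, not Tgi3D's actual
# algorithm). Given each camera's center and the viewing ray through a
# matched image point, estimate the 3D point as the midpoint of the
# shortest segment between the two rays.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def triangulate(p1, d1, p2, d2):
    """Midpoint of the closest approach of rays p1 + t*d1 and p2 + s*d2."""
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # approaches 0 when the rays are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = [p + t * x for p, x in zip(p1, d1)]
    q2 = [p + s * x for p, x in zip(p2, d2)]
    return [(u + v) / 2 for u, v in zip(q1, q2)]

# Two cameras 1 m apart, both seeing a point at (0.5, 0, 1):
point = triangulate([0, 0, 0], [0.5, 0, 1], [1, 0, 0], [-0.5, 0, 1])
print(point)  # -> [0.5, 0.0, 1.0]
```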
-
@roger said:
@mitcorb I am just guessing:
In a sense this is somewhat like fingerprint matching. The FBI considers a fingerprint a match if you can find seven points of correspondence in two samples. So you must manually define a minimum number of matching points in two photos. You must also define for the computer the focal length and optical characteristics of your lenses. With digital cameras this is all listed in the EXIF information. So the computer now has some information about how matching points relate to each other in the two photographs. From this it can reverse-engineer the relative positions of the two cameras.
One important thing it does is calculate the distortion and sensor shift of the lens, and output undistorted images to an SKP project. This is very important for getting high precision, as all lenses have some distortion, and no sensor is placed exactly on the optical axis of the lens.
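To illustrate what "distortion and sensor shift" means, here is a sketch of a common radial distortion model (Brown-Conrady style) with a principal-point offset. This is a generic textbook model and an assumption on my part; Tgi3D's actual internal model is not documented here:

```python
# Hypothetical sketch of a simple radial lens-distortion model
# (Brown-Conrady style) with a principal-point offset (cx, cy) that
# stands in for "sensor shift". Undistorting an image means inverting
# this mapping so straight lines in the scene stay straight.

def distort(x, y, k1, k2, cx=0.0, cy=0.0):
    """Apply radial distortion to normalized image coordinates (x, y).

    k1, k2 are radial distortion coefficients; (cx, cy) is the
    distortion center (principal point).
    """
    xr, yr = x - cx, y - cy          # measure from the distortion center
    r2 = xr * xr + yr * yr
    factor = 1 + k1 * r2 + k2 * r2 * r2
    return cx + xr * factor, cy + yr * factor

# With zero coefficients the image is unchanged:
print(distort(0.3, 0.4, 0.0, 0.0))   # -> (0.3, 0.4)
# Barrel distortion (k1 < 0) pulls points toward the center:
print(distort(0.3, 0.4, -0.1, 0.0))
```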
BTW, it can use photos without EXIF, and I have no problem using a 0.7x converter (which is not known to the EXIF writer), as it only uses the EXIF data as a starting point.
@unknownuser said:
Then the program has enough confidence to go back into the photos and begin to do pattern matching based on shapes, areas of high contrast, and color information matching and make assumptions that allow the computer to make reasonable point matching decisions on its own. From there it is just trigonometry to create a point cloud as the basis for a mesh.
The beauty of it is that you do not end up with a huge point cloud mesh (unless that's what you want). You fully control the density of the mesh, and you can have some parts with lots of details and other parts covered by only a few polys. And you can also easily smooth areas or add more details manually and alter the shapes as you like.
-
Thanks for your comments and insight. You are driving me crazy with interest.