Tgi3D SU PhotoScan's new Image Based Surface Modeler
-
Hello All,
A few weeks ago we released new versions (v1.22 and, more recently, v1.23) of Tgi3D SU Amorph and Tgi3D SU PhotoScan. This time we focused mostly on PhotoScan and included major improvements and innovations in both the Calibration and Metrology Tool and the SketchUp plug-in.
The “automatic surface matching to photographs” feature has gone through a major overhaul and has become the powerful Image-Based Surface Modeler Tool, which includes Initial Estimate, Single-Segment Surface, Multi-Segment Surface, Refine-Estimate and Upsample Mesh commands, each with its own toolbar icon. The new algorithms are much better and faster, and can process bigger sections at a time, even sections that include folds/occlusions. Below is an uncut video that shows the work in real time.
[flash=480,385]http://www.youtube.com/v/740XUrYJAgM?fs=1&hl=en_US[/flash]
The Calibration Tool has also been updated for better usability, with UI refinements and additional features to enhance the workflow and reduce errors. The updates include:
- Powerful general calibration support for all point-and-shoot cameras.
- Displaying and using 2D projections of the reconstructed 3D points.
- Assignment of the X, Y and Z axes and the origin in the 3D viewer, which is preserved in exported files.
- Sensor size estimation support.
- Better detection of bad calibration points for safer calibration results.
- Support for reducing image resolution in SketchUp exports.
- UI improvements for better usability.
You can download 30-Day Trial Editions of both PhotoScan and Amorph, as well as a free Training Edition of Tgi3D SU Amorph from the downloads page on our website.
Enjoy!
-
Looks interesting, even if I don't understand how the magic happens.
How much time was spent on the model before this video? ...photo-calibrating, setting up the scenes, etc...
-
Hi Marcus,
@d12dozr said:
How much time was spent on the model before this video? ...photo-calibrating, setting up the scenes, etc...
Heeey, we don't include preparation time in Speed Modeling challenges...
Good question, of course. Indeed, we only show the modeling time in the video, as our intention was to showcase the power of the new "Image Based Surface Modeler" tool. The preparation involved the following steps, with approximate times:
- planning, approx. 15 mins, including setting the scene
- taking the photos of the toy rhino, 15-20 mins (?)
- calibration and export to SketchUp, 1-2 hrs
The calibration step is where you would spend a lot of the time, especially at the beginning. The calibration process does have a learning curve, but you quickly speed up as you get familiar with it. Also, if the pictures have a good number of features that are easily identifiable in multiple photos (corners, etc.), the calibration process is much easier. BTW, the black background remains from an earlier experiment; you don't have to segment the images.
Also, keep in mind that in the video we are using only two of the photographs, but the calibration is done for all 15 photos. The total time would be much less for, say, 3 photographs.
Regards,
-
@gulcanocali said:
Heeey, we don't include preparation time in Speed Modeling challenges...
Thanks for the explanation, Gulcan, very cool product you guys have developed.
-
It is a very powerful program/plugin, and it is also great fun to watch your flat photos being shrink-wrapped into 3D forms.
A tip for shooting good photo sequences:
Usually you tidy up clutter around the objects you are photographing.
When shooting for PhotoScan you should do the opposite.
The more clutter the better, as long as it doesn't occlude your "target".
On a clean, smooth beach or in an empty white room it isn't easy to find good points to use for calibrating.
One great thing with this tool, unlike the competition, is that once calibrated you don't have to use a single one of the calibration points when modeling inside SU.
-
It's still magic to me (i.e. I cannot believe it).
Really cool plugin indeed (and yes, I can hardly wait for those archaeology things to try it out!)
-
@gaieus said:
It's still magic to me (i.e. I cannot believe it).
Really cool plugin indeed (and yes, I can hardly wait for those archaeology things to try it out!)
Do you have any good links to tutorials, instructions and methods for doing archaeological excavations and measurements? And/or some examples of the use of surveying equipment and photogrammetry in such contexts?
I would like (maybe) to approach a local archaeologist and hear what he thinks about using tools such as PhotoScan, and eventually offer my services. He is actually the head of the county conservator office (don't know if that means anything in English...).
Looking forward to seeing your work in that area too!
Judging by how much time you spend here answering questions, you can't be out doing much field work?
-
What level of precision can you get with PhotoScan? (Maybe I've already asked...)
best regards
/paolo
-
@publied said:
What level of precision can you get with PhotoScan? (Maybe I've already asked...)
Very high! It depends on your photos though, and on the length of your reference measure (the longer the better). You can get at least as good accuracy as, or better than, other photogrammetry tools like Imagemodeler (now only available to Autodesk/Max/Maya users) and Photomodeler, which I've been using for years. Once you have a good calibration you have all the power of SU available in addition to the powerful Tgi3D tools, which is a lot better than Imagemodeler and Photomodeler IMO.
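To see why the length of the reference measure matters so much, here is a tiny sketch (plain Python; the 1 mm error figure is just an assumption for illustration):
[code]
# The whole model is scaled from one known distance, so a fixed absolute
# error in measuring (or picking the endpoints of) that distance becomes
# a relative scale error that shrinks as the reference gets longer.

ERROR_MM = 1.0  # assumed absolute error on the reference distance

for ref_mm in (100.0, 1000.0, 5000.0):
    print(f"{ref_mm / 1000} m reference -> "
          f"{100.0 * ERROR_MM / ref_mm:.2f}% scale error")

# 0.1 m reference -> 1.00% scale error
# 1.0 m reference -> 0.10% scale error
# 5.0 m reference -> 0.02% scale error
[/code]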
I'm definitely not going back to Imagemodeler and Photomodeler.
-
Thanks Bjorn!
Paolo,
The Tgi3D calibration tool can indeed provide quite a high level of accuracy. The actual level of accuracy obtained depends on several parameters, including image resolution, image quality, vantage point distribution, care in calibration point placement, etc. With a modern 5+ megapixel camera under good conditions you can achieve better than 1 part in 2000 relative 3D distance accuracy. That translates to around 10 microns on a 2 cm sized object, which is comparable to a digital caliper, or millimeter accuracy on an object a couple of meters in size, which compares to the better laser range-meters on the market. More importantly, Tgi3D reliably reports the level of accuracy actually obtained.
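To put those figures in perspective, here is a quick back-of-the-envelope check (plain Python; the only input is the 1-in-2000 figure quoted above):
[code]
# Absolute 3D distance error implied by "better than 1 part in 2000".

RELATIVE_ACCURACY = 1.0 / 2000.0

def absolute_error_mm(object_size_mm):
    """Absolute error for an object of the given size, in mm."""
    return object_size_mm * RELATIVE_ACCURACY

# A 2 cm object: 20 mm / 2000 = 0.01 mm = 10 microns (digital-caliper range).
print(absolute_error_mm(20.0))    # 0.01

# A 2 m object: 2000 mm / 2000 = 1 mm (laser range-meter range).
print(absolute_error_mm(2000.0))  # 1.0
[/code]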
Regards,
-
@gulcanocali said:
around 10 microns on a 2 cm sized object, which is comparable to a digital caliper, or millimeter accuracy on an object a couple of meters in size, which compares to the better laser range-meters on the market.
What can I say? Gulp!
I'm a bit astonished and amazed, positively. That sounds very good! (And just now I'm reading the review in CatchUp 3.)
-
It will be interesting to see if you have the product priced right. At $200 to $300 a copy I bet you would get 10x the business. But it looks like a great product.
-
@Gulcan:
I am very impressed with this innovation. I am not sure how the process forms the volume of the mesh, but this is a very valuable feature, using SketchUp as the "engine(?)". This product is in my sights, but I am still hampered by outdated hardware. Hopefully, not much longer. I just need to sort out what I need in my next system.
-
@mitcorb I am just guessing:
In a sense this is somewhat like fingerprint matching. The FBI considers a fingerprint a match if you can find seven points of correspondence in two samples. So you must manually define a minimum number of matching points in two photos. You must also tell the computer the focal length and optical characteristics of your lenses; with digital cameras this is all listed in the EXIF information. The computer then has some information about how matching points relate to each other in the two photographs, and from this it can reverse engineer the relative positions of the two cameras.
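In code, that step might look roughly like the generic two-view recipe below (Python with OpenCV). This is just a sketch of the textbook method, not necessarily what Tgi3D does internally; the scene, focal length and camera pose are all made up for illustration:
[code]
import numpy as np
import cv2

rng = np.random.default_rng(0)

# Camera intrinsics: focal length in pixels (e.g. derived from the EXIF
# focal length and sensor size) and the principal point at the image center.
f, cx, cy = 1200.0, 640.0, 480.0
K = np.array([[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]])

# A synthetic rigid scene standing in for the manually matched points,
# plus a ground-truth pose for the second camera.
pts3d = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], size=(50, 3))
R_true, _ = cv2.Rodrigues(np.array([0.0, 0.2, 0.0]))  # small rotation about y
t_true = np.array([[-1.0], [0.0], [0.0]])             # baseline along x

def project(P, R, t):
    """Project world points into a camera with pose (R, t)."""
    cam = (R @ P.T + t).T        # world -> camera coordinates
    uv = (K @ cam.T).T           # camera -> homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]

pts1 = project(pts3d, np.eye(3), np.zeros((3, 1)))  # "photo" 1
pts2 = project(pts3d, R_true, t_true)               # "photo" 2

# Recover the relative camera pose from the correspondences alone.
E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R_est, t_est, _ = cv2.recoverPose(E, pts1, pts2, K)

print(np.allclose(R_est, R_true, atol=1e-4))  # True on this noise-free data
print(t_est.ravel())                          # unit baseline, ~[-1, 0, 0]
[/code]
Note that the translation comes back only up to scale, which is why a known reference measure is needed to size the model.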
Then the program has enough confidence to go back into the photos and do pattern matching based on shapes, areas of high contrast and color information, making assumptions that allow the computer to make reasonable point-matching decisions on its own. From there it is just trigonometry to create a point cloud as the basis for a mesh.
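The "just trigonometry" part could be sketched like this (again generic OpenCV, not Tgi3D's implementation; K, R, t, pts1 and pts2 are assumed to come from a pose-recovery step like the one above):
[code]
import numpy as np
import cv2

def triangulate(K, R, t, pts1, pts2):
    """Return an N x 3 point cloud from matched points in two calibrated views."""
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])  # first camera at origin
    P2 = K @ np.hstack([R, t])                         # second camera
    # OpenCV expects 2 x N point arrays and returns homogeneous 4 x N points.
    Xh = cv2.triangulatePoints(P1, P2,
                               np.ascontiguousarray(pts1.T),
                               np.ascontiguousarray(pts2.T))
    return (Xh[:3] / Xh[3]).T

# e.g. cloud = triangulate(K, R_est, t_est, pts1, pts2)
# 'cloud' is the point cloud a mesh can then be built on.
[/code]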
-
@roger said:
@mitcorb I am just guessing:
In a sense this is somewhat like fingerprint matching. The FBI considers a fingerprint a match if you can find seven points of correspondence in two samples. So you must manually define a minimum number of matching points in two photos. You must also tell the computer the focal length and optical characteristics of your lenses; with digital cameras this is all listed in the EXIF information. The computer then has some information about how matching points relate to each other in the two photographs, and from this it can reverse engineer the relative positions of the two cameras.
One important thing it does is calculate the distortion and sensor shift of the lens, and output undistorted images to a SketchUp (.skp) project. This is very important for getting the high precision, as all lenses have some distortion, and no sensor is placed exactly on the center axis of the lens.
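For the curious, that correction is conventionally modeled along these lines (a generic OpenCV / Brown-Conrady style sketch, not Tgi3D's actual model; the file name, camera matrix and distortion coefficients are placeholders that a calibration tool would estimate):
[code]
import numpy as np
import cv2

img = cv2.imread("photo.jpg")  # placeholder file name

# Intrinsics: note cy is off-center, modeling the "sensor shift".
f, cx, cy = 1200.0, 640.0, 470.0
K = np.array([[f, 0.0, cx], [0.0, f, cy], [0.0, 0.0, 1.0]])

# Radial (k1, k2, k3) and tangential (p1, p2) distortion coefficients.
dist = np.array([-0.12, 0.05, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

undistorted = cv2.undistort(img, K, dist)  # remap to an ideal pinhole image
cv2.imwrite("photo_undistorted.jpg", undistorted)
[/code]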
BTW, it can use photos without EXIF, and I have no problems using a 0.7x converter (which is not known by the EXIF writer), as it only uses the EXIF data as a starting point.
@unknownuser said:
Then the program has enough confidence to go back into the photos and do pattern matching based on shapes, areas of high contrast and color information, making assumptions that allow the computer to make reasonable point-matching decisions on its own. From there it is just trigonometry to create a point cloud as the basis for a mesh.
The beauty of it is that you do not end up with a huge point cloud mesh (unless that's what you want). You fully control the density of the mesh, and you can have some parts with lots of details and other parts covered by only a few polys. And you can also easily smooth areas or add more details manually and alter the shapes as you like.
-
Thanks for your comments and insight. You are driving me crazy with interest.