Coplanar test
-
@garry k said:
What is interesting is that if I use this, it takes 24 seconds:
model.start_operation "curved rail"
...
model.commit_operation
If I omit the start_operation and commit_operation, it takes 8 seconds.
What times do you get when you set the disable_ui argument of model.start_operation to true?
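For reference, a minimal sketch of that variant (the operation name is just a placeholder):

model = Sketchup.active_model
# The second argument to start_operation is disable_ui; passing true
# suppresses UI updates while the operation runs, which is usually faster.
model.start_operation("curved rail", true)
# ... generate the handrail geometry here ...
model.commit_operation
-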
Thanks Jolran,
I've got the fill_from_mesh code working, and I can now generate the handrail in 0.326 seconds - and that is using start/commit with the disable_ui argument.
Thanks Thomas,
I'm sort of beyond that now. Everything is working like a dream. I'm going to go back to my stair code and move to meshes; there should be a significant improvement there as well.
-
@unknownuser said:
I've got the fill_from_mesh code working, and I can now generate the handrail in 0.326 seconds - and that is using start/commit with the disable_ui argument.
fill_from_mesh, huh? I'll try to switch to that.
-
I think I've read that fill_from_mesh bypasses a bunch of checking. In my case I am totally confident that the faces the mesh generates are good to go.
I believe that add_faces_from_mesh will handle duplicate points, etc.
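Roughly, the difference looks like this (a minimal sketch with placeholder geometry and smoothing flags; as I recall the docs note that fill_from_mesh expects an empty Entities collection, such as a freshly added group's):

model = Sketchup.active_model
mesh = Geom::PolygonMesh.new
mesh.add_polygon(Geom::Point3d.new(0, 0, 0),
                 Geom::Point3d.new(10, 0, 0),
                 Geom::Point3d.new(10, 10, 0))
smooth = Geom::PolygonMesh::AUTO_SOFTEN | Geom::PolygonMesh::SMOOTH_SOFT_EDGES

# Slower, but validates and merges as it goes:
model.active_entities.add_faces_from_mesh(mesh, smooth)

# Faster, because it skips much of that checking - the mesh must be clean:
group = model.active_entities.add_group
group.entities.fill_from_mesh(mesh, true, smooth)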
-
Cheers!
Will definitely try fill_from_mesh later. I'm experiencing the same thing as you: poly_mesh.add_polygon works fine for a while, but the time adds up dramatically as the geometry gets denser.
In my case I'm storing Points in nested arrays, which might not be optimal.
I saw a benchmark on Stack Overflow comparing Array vs. Hash lookups, and there was quite a difference in speed there.
-
@garry k said:
I believe that the add_faces_from_mesh will handle duplicate points etc.
The PolygonMesh handles duplicate points.
The fastest way I've found to create a PolygonMesh is to first feed all the 3D points you are going to use into the mesh, then use the index that mesh.add_point returns to build the sets of indices you pass to add_polygon. Also, if you know roughly how many points and polygons the mesh will contain, specify them when you create the PolygonMesh instance.
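Something like this minimal sketch (the 10x10 grid is just placeholder data):

# Preallocate with rough estimates: a 10x10 grid of points, 81 quads.
mesh = Geom::PolygonMesh.new(100, 81)

# Pass 1: add every point once and cache the index add_point returns.
index = {}
10.times do |x|
  10.times do |y|
    index[[x, y]] = mesh.add_point(Geom::Point3d.new(x, y, 0))
  end
end

# Pass 2: build polygons from cached indices - no point lookups needed.
9.times do |x|
  9.times do |y|
    mesh.add_polygon([index[[x, y]], index[[x + 1, y]],
                      index[[x + 1, y + 1]], index[[x, y + 1]]])
  end
end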
-
@unknownuser said:
The fastest way I've found to create a PolygonMesh is to first feed all the 3D points you are going to use into the mesh, then use the index that mesh.add_point returns to build the sets of indices you pass to add_polygon.
This sounds great.
I was relying on nested Arrays/Hashes for storing hierarchies of polygon points to build geometry, but the PolygonMesh class seems more suitable to the task. I'd completely overlooked it.
It wasn't such a big deal in the beginning, dealing with only one "mesh-points object", but multiple objects are starting to make the array lookups nastier.
I see there are some nice API methods in there as well for looking up polygons quickly, with no digging. I presume calling methods (polygons and points) on the PolygonMesh class is a little faster than using Ruby Array/Hash lookups as well, since I gather they are handled in C?
-
The polygon mesh structure sounds very similar to ESRI's TIN (Triangulated Irregular Network) structure. I've used these strategies very successfully with huge data sets, such as map topology containing more than 10 million points and polygons. A TIN stores unique points, and polys can share points via common indexes. The neat thing about this is that you can edit one point and all the polys that share it automatically adjust themselves.
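In SketchUp terms, a tiny sketch of that shared-index behavior (a hypothetical two-triangle example):

mesh = Geom::PolygonMesh.new(4, 2)
a = mesh.add_point(Geom::Point3d.new(0, 0, 0))
b = mesh.add_point(Geom::Point3d.new(10, 0, 0))
c = mesh.add_point(Geom::Point3d.new(10, 10, 0))
d = mesh.add_point(Geom::Point3d.new(0, 10, 0))

# Both triangles reference points b and d by index.
mesh.add_polygon(a, b, d)
mesh.add_polygon(b, c, d)

# Move one shared point and every polygon using index b follows along.
mesh.set_point(b, Geom::Point3d.new(12, 0, 3))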
The TIN data is stored in far memory, which may or may not be fragmented. Memory fragmentation depends on the other applications running concurrently. Borland implemented their own memory management, which insulated our applications from external fragmentation. Of course, as programmers we often created our own memory fragmentation issues.
Thus memory preallocation for the mesh may be more or less helpful. When a data structure grows beyond its memory capacity, it requests more memory. I will assume that most of these data structures require contiguous memory. If there is free memory available then the C realloc grabs more, keeping the memory contiguous. If, however, there is no free memory immediately following its own block, then a new block is acquired, all the data must be moved over to the new block, and finally the old block is freed.
The TList that I have mentioned before acquires memory at the beginning in blocks that can hold 16 pointers (4 or 8 bytes each, depending on 32- or 64-bit OS). As you keep adding items into the list (a contiguous array) and it grows and grows, the algorithm changes and grabs bigger blocks: 16 to 32 to 64 ... up to 256-element blocks.
With my handrail - I ended up with 2067 points and 3420 polygons (lots of point sharing).
There was a huge difference moving from individual add_face calls to add_faces_from_mesh: 24 seconds down to 1 second. Changing to fill_from_mesh dropped it down to 1/3 of a second.
Playing around with preallocation did not make any noticeable difference at all. I did not try adding points with mesh.add_point - that would have required a rewrite.
-
Right... I had to Google a bit there.
Very interesting, BTW. My program can't determine in advance how many points there will be, so preallocation probably won't apply in my case.
I was speculating more about whether there would be any advantage in using a PolygonMesh vs. Arrays/Hashes as storage for the points that build up during the program phase.
That is, as a container for the case where the end product will be a mesh built from points, and where fetching data will be frequent. And I'm talking about multiple PolygonMeshes. I can see some advantages in the lookup methods built into the class, but creating a new PolygonMesh object must come at a price, so I'm still wondering whether I should hold off on the PolygonMesh object until it's actually needed and keep building nested lists of arrays.
Anyway, it looks like you got what you came for.
This question might not belong in this topic.
-
@jolran said:
I presume calling methods (polygons and points) on the PolygonMesh class is a little faster than using Ruby Array/Hash lookups as well, since I gather they are handled in C?
Not necessarily. A Ruby Hash lookup will be faster than the linear traversal the C code is doing. When I need performance in Ruby, I often find that creating a Hash cache object yields the biggest performance gain.
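As a sketch of what I mean by a cache (purely illustrative; note that a plain Hash key won't merge nearly-equal points the way SketchUp's tolerance does):

mesh = Geom::PolygonMesh.new
cache = {}

# Only call add_point (which scans the existing points) for unseen
# coordinates; repeats are answered from the Hash in O(1) average time.
add_cached_point = lambda do |x, y, z|
  cache[[x, y, z]] ||= mesh.add_point(Geom::Point3d.new(x, y, z))
end

i1 = add_cached_point.call(0, 0, 0)
i2 = add_cached_point.call(0, 0, 0) # cache hit - same index, no rescan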
As always with these things - never assume - test it.
-
@garry k said:
Playing around with preallocation did not make any noticeable difference at all. I did not try adding points with mesh.add_point - that would have required a rewrite.
Every time you feed the PolygonMesh a Point3d object, it traverses all the existing points and checks whether any of them is considered to have an equal position. It does that to determine whether it should merge points and reuse an existing index or add a new item. This process is currently not optimized, so each insertion is O(N) and building the whole mesh is O(N^2). Now think of that when every triangle you create hands it three Point3d points.
Now, if you use add_point to populate the points you are going to use first, and then use the collected indices when you generate your polygons, you have performed the minimum number of those linear scans needed - and add_polygon simply adds the indices to its internal list in O(1) time per index.
-
@unknownuser said:
Not necessarily. A Ruby Hash lookup will be faster than the linear traversal the C code is doing. When I need performance in Ruby, I often find that creating a Hash cache object yields the biggest performance gain.
As always with these things - never assume - test it.
Thanks! That IS what I was wondering.
Will do some tests, indeed.
-
Thomas,
That explains why a very large mesh starts taking a long time. For a small mesh there may be little noticeable benefit to using add_point, but for a large mesh there would be.
With my handrail example, I go up the rail one section at a time, adding polygons to the mesh. There are 53 points describing this handrail's profile, and there are 40 cross sections of 53 points describing the 39 sections. I gather that the index increments in creation order (provided there are no duplicates), so I will know up front how many unique points to add. I could do 2 passes: the first adding in ALL the points and the second adding in the polygons using the indexes. I might even be able to predict the number of polygons in the first pass before I actually start adding in the points.
The other strategy I could try is to add 53 points (the first profile), then add 53 points for the second profile, then add in the polygons between them, and so on up the rail.
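A rough sketch of that second strategy (sections here is a hypothetical array of 40 profiles, each an array of 53 Point3d objects, and the triangulation is just one way to stitch consecutive profiles):

mesh = Geom::PolygonMesh.new(53 * 40, 104 * 39) # rough estimates are fine

previous = nil
sections.each do |profile| # sections: 40 arrays of 53 Geom::Point3d, defined elsewhere
  current = profile.map { |pt| mesh.add_point(pt) }
  if previous
    53.times do |i|
      j = (i + 1) % 53 # closed profile loop
      # Two triangles per quad between consecutive cross sections.
      mesh.add_polygon(previous[i], current[i], current[j])
      mesh.add_polygon(previous[i], current[j], previous[j])
    end
  end
  previous = current
end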
I believe that the law of diminishing returns has set in - 1/3 of a second is very acceptable. But I am intrigued by this strategy.
So when can we expect some form of 3D tree - KD, AABB, BSP, oct tree...?
-
Yeah, if it's fast enough for what you do, then there's no need to optimize. I just wanted to give a general idea of what's going on, in case you need to scale up. We should do a blog post on this topic.
I did in fact experiment recently with an oct tree implementation we had for vertices - a dirty hack. It gave great performance improvements, but the memory usage was sky high - I suspect due to the overhead of using vertices (much more complex objects).
So I cannot promise anything - other than to tell you that we have been looking into it.
-
Thomas,
I've done C and C++ implementations of a number of different 3D indexing schemes: quad trees, R-trees, BSP trees and oct trees. I'm thinking that you would have to implement this right in SketchUp. The issue isn't with the tree - it is with your == test for determining whether a point is a duplicate or not, and that is already in SketchUp. The key decision is whether to fully balance the tree or not - time vs. memory. I wouldn't see the need to put the entire vertex into the tree; the x, y, z and index should be sufficient. Just use the tree to avoid the O(N^2) behavior.
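Just to illustrate the idea in Ruby terms: a uniform grid (spatial hash) rather than a tree, with an assumed tolerance - conceptually this is the kind of structure that would avoid the O(N^2) scan if done on the C side:

TOLERANCE = 0.001 # assumed merge tolerance, in inches

# Quantize coordinates so near-equal points land in the same grid cell.
def cell_key(pt)
  [(pt.x / TOLERANCE).round, (pt.y / TOLERANCE).round, (pt.z / TOLERANCE).round]
end

# grid: Hash mapping a cell key to [mesh_index, Geom::Point3d] pairs.
def find_or_add(grid, mesh, pt)
  kx, ky, kz = cell_key(pt)
  # Check this cell and its neighbors for an existing matching point.
  (-1..1).each do |dx|
    (-1..1).each do |dy|
      (-1..1).each do |dz|
        (grid[[kx + dx, ky + dy, kz + dz]] || []).each do |index, stored|
          return index if stored.distance(pt) < TOLERANCE
        end
      end
    end
  end
  index = mesh.add_point(pt) # today this is still a linear scan internally
  (grid[[kx, ky, kz]] ||= []) << [index, pt]
  index
end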
Anyway - I did switch over to add_point and then, once sufficient points were in the mesh, I added in the polygons. Essentially it was: add 53 points, add 53 points, add 90 polygons, add 53 points, add 90 polygons, and so on.
Time reduced from 0.375 down to 0.175 seconds. So I believe that I've now pretty much hit the wall; no further optimizations make any sense.
The blog idea makes sense - I really didn't feel that I had enough information to do all this until yesterday. Mind you, I've only been programming in Ruby a short time, although 20 years of C/C++ didn't hurt.
-
Thomas,
Another thought - you could provide an option for add_point that bypasses the duplicate check, to be used only when you are certain your points contain no duplicates.
That could easily speed things up for situations like brickwork, lattices, etc.
Having said that - perhaps we could load the polygon mesh without checking: PolygonMesh.points = pts
-
With a Point3d version of the oct tree, I don't think the point comparison will be an issue any more. In the tests with vertices, it scaled very well.
Though I have some more ideas for the PolygonMesh class.