Colour individual ConstructionPoints?
-
They can (by layers), but that doesn't tell me anything about Ruby - whether they can or cannot.
-
I wasn't able to assign materials to CPoints using the normal SU tools. I was able to add a material using Ruby, but it did not display.
-
An interesting and a bit relevant discussion on the Help Forums:
http://www.google.com/support/forum/p/sketchup/thread?tid=3a5e4523189d0a86&hl=en&fid=3a5e4523189d0a8600047c9e6a68de25
-
Hm...
You might not be able to assign a material via Ruby anyway - I might have tested it wrong...
-
Yeah, colour by layer makes CPoints have different colours, but there's no way to control them individually. I reported it as a bug and hopefully it'll change in the future.
I'd expect them to behave like edges - that when you set the Style's edge colour to 'By material', CPoints and CLines would also use their material colour.
-
How about moving this topic to the developers' discussion?
-
It wasn't really a Ruby question - I was just exploring the possibility that Ruby could work around it. It's a dead thread anyway: coloured CPoints in SU aren't possible no matter what you try.
-
A ConstructionPoint, or ConstructionLine, is a DrawingElement, and that can have a material.
So cline.material = "red" should work, and indeed it does 'set' in the console and returns no error, BUT afterwards when you query it, cline.material returns nil!
It seems that clines/cpoints always have a nil material but will display with their set color - i.e. you can globally have 'red' colored ConstructionLines using model.rendering_options["ConstructionColor"] = "red", but individually they are material = nil. Ironically, model.rendering_options["ConstructionColor"] = nil looks like it worked, but then the previous color is unchanged when queried... So to recap - a cline can't have a material, and clines must have a color...
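For anyone who wants to reproduce this, a minimal console sketch of what I mean (the cline endpoints are just made-up values):

    model = Sketchup.active_model
    ents  = model.active_entities
    cline = ents.add_cline([0, 0, 0], [1.m, 0, 0])   # any two points will do
    cline.material = "red"    # appears to succeed, no error raised
    cline.material            # => nil - the assignment does not stick

    # The only colour control is the global rendering option,
    # which recolours ALL guides in the model at once:
    model.rendering_options["ConstructionColor"] = "red"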
-
If coloured construction points are impossible... how does this work?
http://www.pointools.com/pointools-plug-in-for-sketchup.php
I've been playing with insight3d, which is beta (try it and you'll see. TIP: when it's processing data, don't use other apps, play music etc. - just be cool and leave it to run; it crashes a lot if you don't), but frankly brilliant. With a good data set of consecutive images it can create very nice, tight point clouds of real-world buildings. insight3d + SketchUp could be a real game changer. So, anyone up for writing a Ruby script to get coloured insight3d point clouds into SU?
-
I don't think they are using colored construction points. Looks more likely to be OpenGL points or something.
-
OpenGL within the 'draw' methods of a 'Tool' lets you color temporary graphics - 'lines', 'points' and with v8M1 now even 'faces'... however these are not part of the model's geometry and are only temporary 'markers' while the tool is running...
Cpoints cannot be 'colored' - except that the color of 'Guides' is adjustable under a Style [accessible via the API] - but that is a global-setting and all cpoints/clines will take that new color when using that Style...
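Roughly, the temporary-graphics approach looks like this - a bare-bones sketch, where the tool does nothing but paint the points it was given, and they vanish as soon as the tool is deactivated:

    class TempPointsTool
      def initialize(points)
        @points = points                      # array of Geom::Point3d
      end

      def activate
        Sketchup.active_model.active_view.invalidate   # force a redraw
      end

      def draw(view)
        # size 6, point style 2 (filled square), any colour you like
        view.draw_points(@points, 6, 2, "red")
      end
    end

    # hypothetical usage:
    pts = [Geom::Point3d.new(0, 0, 0), Geom::Point3d.new(1.m, 0, 0)]
    Sketchup.active_model.select_tool(TempPointsTool.new(pts))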
-
You can colour them differently by setting colour by layer (and of course, putting them on all sorts of layers), but then that will affect faces as well (unless you meticulously put faces on yet another layer where they can retain whatever colour you add to that layer).
I would go nuts if I had to organise a model this way.
-
OK, this looks exciting (catch that Chateau facade!!): (sorry for the long post)
http://opensourcephotogrammetry.blogspot.com/
http://grail.cs.washington.edu/software/pmvs/
I've been using the insight3d application for a few evenings with mixed results. It will make a point cloud OK (really well, actually, from a tightly shot set of images - shot 1-2m apart with lots of overlap) but it seems to fall over if you use large data sets.
I can get a B&W point cloud into SU OK (very slow; it won't export the colour data in the txt file anyway, and the VRML export does not seem to export any polygon faces - which would be a great help in re-orientating the cloud in SU (?) - if you can ever get the VRML into SU, of course).
What Insight3D does really well (and this python app by the look) is create calibrated cameras and camera positions for each shot - it seems to do that really well. Exactly the job that drives you nuts if you use match-photo alone with 4/5/6 or more images. I use a Canon DSLR with an EFS 10-22mm for shots. It has almost zero barrel distortion at 17mm and I still have trouble matching more than 3/4 images with reasonable accuracy (even after running PTlens over them).
Aside from accuracy, which is basic at best, a big issue with match-photo is when you go round a corner and lose sight of the origin. You're left to guess the origin, nudge the handles a bit, guess again, nudge, etc. (or start over and match two models together). You can get a result of sorts but it's way off ideal. There has to be a better way?!
...so what would be perfect is if some very clever guy (not me for sure) could write a Ruby script that could:
- Read the camera data output from either insight3d or this "Bundler + CMVS + PMVS2" python app.
- Load the point cloud (with an option to reduce it to, say, whatever % gives a manageable density) and set up all the camera positions for each image.
- Set up match photo scenes from the images, ready to start modelling.
It would be fantastic to be able to start photo-modelling with perfectly camera-matched and scaled scenes, with a light point cloud to snap to (coloured or not).
That should let us model much more complex scenes and measured surveys with much greater accuracy. We'd have a real low-cost alternative to the very expensive commercial photogrammetry packages, LiDAR scanners and SU plugins (which I won't mention)?
I couldn't write a Ruby script to save my life, but... anyone think it's possible?
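For what it's worth, here is a very rough sketch of the middle step - reading a plain "x y z" point file, thinning it, adding construction points and capturing one camera as a scene. The file name, the file format, the thinning factor and the camera numbers are all assumptions; a real script would have to parse whatever insight3d or Bundler actually writes out:

    # Assumed input: a text file with one "x y z" triple per line, in metres.
    keep_every = 10                               # crude thinning: keep 1 point in 10
    model = Sketchup.active_model
    ents  = model.active_entities
    File.readlines("cloud.txt").each_with_index do |line, i|
      next unless (i % keep_every).zero?
      x, y, z = line.split.map(&:to_f)
      ents.add_cpoint(Geom::Point3d.new(x.m, y.m, z.m))
    end

    # One scene per photo, using values from the exported camera file
    # (the numbers below are placeholders, not real calibration data).
    eye    = Geom::Point3d.new(10.m, -20.m, 1.6.m)
    target = Geom::Point3d.new(0, 0, 1.6.m)
    cam    = Sketchup::Camera.new(eye, target, Geom::Vector3d.new(0, 0, 1))
    cam.perspective = true
    cam.fov = 54.0                                # vertical field of view in degrees
    model.active_view.camera = cam
    model.pages.add("Shot 01")                    # captures the current camera as a scene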
-
Coloured point clouds in SU: well maybe, kind of?
I just made a test "cloud" of multi-coloured text full stops (Tahoma) in SU, with the arrow and leader line hidden. They sit there in space as "face me" squares that remain a constant size, and they can be any colour you like - but you can't snap to them (which may be good!). I would think that one could make a coloured point cloud with them, if one knew what one was doing (which sadly I don't). Maybe one could place a construction point at the centre of each full stop to get snap-to functionality.
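If anyone wants to try scripting that trick, a rough sketch might look like the following - the sample coordinates are made up, the leader/arrow hiding from my manual test isn't reproduced, and whether the material colour actually shows on the Text entities is exactly the open question from earlier in the thread:

    # Rough sketch: one coloured text "dot" plus a construction point per cloud position.
    model  = Sketchup.active_model
    ents   = model.active_entities
    points = [[0, 0, 0], [50.cm, 20.cm, 1.m]]      # hypothetical cloud sample
    points.each do |x, y, z|
      pt = Geom::Point3d.new(x, y, z)
      ents.add_cpoint(pt)              # gives you something to snap to
      dot = ents.add_text(".", pt)     # the "face me" full stop
      dot.material = "red"             # may or may not display - needs testing
    end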
Just found these too, fantastic. Watch the videos?
http://www.pgrammetry.com/forum/
http://meshlab.sourceforge.net/
http://blog.neonascent.net/archives/bundler-photogrammetry-package/
http://blog.neonascent.net/archives/photosynth-toolkit/
http://blog.neonascent.net/archives/cameraexport-photosynth-to-camera-projection-in-3ds-max/
-
Does insight3d or the open source app let you make markers of key points? Like, could you make the point cloud in those apps and simplify it so it focuses only on the corners of buildings and other important locations?
I wrote an importer for Voodoo camera tracker (which someone told me is not working for them... I need to check that it really works). But it does a pretty good job of making a point cloud from a film. Then my script imports the point cloud and the cameras. Voodoo lets you specify the most important points and export only those. It would be handy if the apps you've listed would do the same.
Chris
-
Hi Chris
..yes.
insight3d will generate and match automatic points to calibrate the cameras with, but you can export just user vertices if you use "Modelling/Triangulate user vertices" instead of "Triangulate all" or "Trusted vertices". You can export the camera data as an individual .txt file too. insight3d seems to work mostly, but it is very buggy (crashes, won't open saved files, etc). I haven't tried "Bundler + CMVS + PMVS2" yet, but it looks like a much more robust (if complicated) solution, and I'm pretty sure you could do the same thing with it.
I made a test of 3 awful images (just 3 images, various focal lengths, bushes in the way, nothing vertical or horizontal... some walls look true but don't be fooled, the ground this house is on has been subsiding for 130 years) just to see what insight3d did. It seems to have worked in spite of the obstructions. I placed quite a few of my own points in and it matched enough to work with.
I tried to match-photo these images in SU and they are a nightmare... which is why I chose them. Almost all the photogrammetry/photo-match demos I've seen are of nice square US office blocks or otherwise unobstructed large buildings - not many of those in London.
I have a zip of:
- the original images,
- the full point cloud
- the user points I placed, the ones that it matched
- the camera data file
If you want to look at them, I'll send them over, Chris (10MB)?
-
I just checked out your Voodoo camera tracker video... just brilliant.
It's very close to what would be needed... could you import each image and line it up with the camera and point cloud too?