Conifer image
-
Ian,
I cannot really give you any advice (there are 2D tree and PNG experts around here like tomdesk).
The only thing I can add is that the result you posted is really nice - though it is true that with such a file size it would be almost impossible to work in SU when there are dozens or hundreds of plants like this.
So you really do need to find a way to make it lighter - I hope someone can help you out here!
-
thanks Gaieus
I spent the better part of 2 summers compiling the photography - trying to photograph plants against an OptiBlue background in my makeshift (cargo van) studio - then the fun task of masking the images - so I do have the images - just need to define an effective way to process them to be useful
thanks
~ian
-
Ian,
I exported the image from your model, simply resized it to 25% and reloaded it into the model. I cannot really see much difference, but the file size became about a quarter of the original.
I guess you could go on experimenting with how much you can resize your images (as PNGs cannot be "compressed") without any visible loss. SU somehow degrades the quality of the images anyway, so you cannot really exploit all the fine details of a hi-res picture.
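If you end up batch-resizing a whole library of these, here is a minimal sketch of that same 25% resize done outside SketchUp. It assumes the mini_magick gem (and ImageMagick) is installed, and the file names are only placeholders:

```ruby
# Minimal sketch, not part of the ImageProfile workflow: the same ~25% resize
# done outside SketchUp. Assumes the mini_magick gem / ImageMagick is
# installed; the file names are placeholders.
require "mini_magick"

img = MiniMagick::Image.open("conifer.png")
img.resize "25%"
img.write("conifer_25.png")   # reload this smaller file into the SU model
```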
Also, the ruby service you are using will put so many endpoints and edges along the sides of the image (with high settings) that it becomes inefficient. You could even use the mere PNG file (with alpha transparency); the only problem is that SU cannot interpret the alpha transparency as transparent when casting shadows from images.
So when using the script, you can get a decent "cut-around" of the image, which will then cast tree-shaped shadows. But these shadows do not need to be very detailed, so a lower setting would do the job easily.
I also turned your model into a component and set it to "always face camera". For some reason I could not upload it to the WH, so here's a link to my space:
http://www.gaieus.hu/su/skps/IansTree.skp
Have fun!
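If you ever want to set that "always face camera" flag from the Ruby Console instead of the right-click menu, here is a minimal sketch; it assumes exactly one component instance is selected:

```ruby
# Minimal sketch: set the "always face camera" flag on the selected
# component's definition. Assumes exactly one component instance is selected.
model = Sketchup.active_model
inst  = model.selection.grep(Sketchup::ComponentInstance).first
inst.definition.behavior.always_face_camera = true if inst
```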
-
Ian...I played with your image a bit this morning as well, reducing the size as Gaieus suggested, also reducing the color depth of the image (which helps with file size a lot). Of course the more you do to reduce size the more the image degrades.
This begs the question: how are you going to use them...? To me, the degraded image actually integrates into an SU model better than the beautiful and crisp original. But if my focus were entirely the plant materials, I'd be less inclined to do much to them.
Also, I did a little test of 50 components...one version using your 1.7meg image, one using my 140k image, both using Smustard's edges: no noticeable difference orbiting, etc. The only difference was the file size (the 1.6meg difference between the two image sizes). I'm thinking (though I don't have several large images, or the time, to test it) that there may not be the problem you'd expect using the larger images, since the extra bytes are contained within the image definition and not spread out in extra 3d computations...you gurus can comment on this?
Anyway, it looks to me like the Smustard service is doing everything it has promised...I played a little with it during beta testing (it worked well then) and the edges seem to be a bit tighter and fewer in number now. Gai is right about reducing them for added functionality of a big model (if you really need this for some use), and I think (can't prove) a component with an unexploded image and a separate cut-up shadow face is a bit "faster"...plus, you also have no image constraints while determining the shadow's configuration and the edge count you can bear. (But what a beautiful shadow it makes now!)
As far as 2.5d goes...a quick and dirty trick is to simply rotate the top of the image back a few degrees (I usually use 10deg; 15deg seems to be the max before this trick becomes obvious to me). Beyond that, it is a matter of cutting up the image and building some minimal 3d illusion as you start looking more from the top of the component always facing you.
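If you would rather apply that lean-back from the Ruby Console than with the Rotate tool, here is a minimal sketch. It assumes the image (or its component) is the only thing selected and that its bottom edge runs along the red (X) axis; flip the sign of the angle if it leans the wrong way:

```ruby
# Minimal sketch of the "lean the image back ~10 degrees" trick. Assumes the
# image or its component is the only selected entity and its bottom edge runs
# along the red (X) axis. Flip the sign of the angle if it leans forward.
model = Sketchup.active_model
ent   = model.selection.first
if ent
  pivot = ent.bounds.corner(0)   # lower front corner of the selection
  lean  = Geom::Transformation.rotation(pivot, X_AXIS, -10.degrees)
  model.active_entities.transform_entities(lean, ent)
end
```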
-
thanks Tom and Gaieus
appreciate the comments - my primary purpose for the plant images is my landscape design business, so I can get closer to the actual look and feel of the landscape.
I will experiment further and report back
~ian
-
Ian,
Nice image for using ImageProfile
I can agree with the previous comments that the high-accuracy outline might be unnecessary, and using a lower-accuracy outline would save some model weight and still give good enough shadows for SketchUp.
As for the "FaceMe" option - the script saves the image and its profile as a .skp file in the same folder as the original image. If you insert that .skp file into a model, it should exhibit the FaceMe behavior. If you edit the .skp, you will not see FaceMe behavior. This is because the file saved is intended to be inserted as a component, and it is the model itself (not the stuff in the model) that has the FaceMe behavior. It is like other FaceMe components in that regard. Hope that makes sense.
-
I'm not sure I follow. So if I have an image of a tree (a .png, for example), what would the process be to use that image as a FaceMe object in SU? When I try to do this, the component doesn't actually face the camera - it's skewed off at some funny angle. Any idea why that is?
-
Jesse...be sure to have the PNG and the camera/model both facing front before making the face-me component (at least, forgetting this is when I get funny-facing face-me's :`)
-
Thanks for getting back to me Tom. The step I was not aware of is that I have to be facing front when I make the component. Thank you very much.
-
Also, have a look at this excellent tutorial by our member, Eric (aka Boofredlay), here:
http://www.sketchucation.com/creating-a-2d-face-me-tree-in-google-sketchup/