Kemp's Outpost
-
There are a few areas in which AI is much better than conventional render engines, and textiles are one of them! I've decided to render the architecture classically, render the textiles in SketchUp Diffusion, and then mix the two together in Photoshop (see the sketch below for a scripted version of that mix).
In your case it's about architecture, and rendering is better than diffusion.
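If anyone wants to script that Photoshop mixing step, here is a minimal sketch with Python/Pillow, assuming a hand-painted mask that is white wherever the textiles are (the file names are made up, and all three images must share the same resolution):

```python
# Composite a SketchUp Diffusion textile pass over a classic architecture
# render, using a grayscale mask (white = take the diffusion pass).
from PIL import Image

arch = Image.open("arch_render.png").convert("RGB")         # classic render
textiles = Image.open("diffusion_pass.png").convert("RGB")  # AI textile pass
mask = Image.open("textile_mask.png").convert("L")          # hand-painted mask

# Image.composite picks pixels from the first image where the mask is white
# and from the second image where it is black.
mixed = Image.composite(textiles, arch, mask)
mixed.save("mixed_result.png")
```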
-
Thanks for doing these comparisons.. very cool to see.
-
@jo-ke said:
The light is often better in diffusion.
Yeah, in those few cases in which the sun isn't casting shadows in 4 different directions, maybe..
-
@marked001 said:
Thanks for doing these comparisons.. very cool to see.
Glad you appreciate them for that reason. Thanks.
-
This would be a great article on the site here...
https://sketchucation.com/forums/viewtopic.php?p=695131#p695131
It's useful, informative and honest feedback from a power user's viewpoint.
It reminded me of another I read last month on the topic of 3D rendering and AI:
https://uxdesign.cc/using-ai-for-3d-rendering-a-practical-guide-for-designers-a2a037ed1ad0
-
Just read the article. Yes, he sums up the advantages and weaknesses rather well, especially that of control.
Good for initial concept ideas, but... not a solution for "on purpose" projects.
-
I have just spent several minutes smiling like a loon. Thanks mate.
-
@mike amos said:
I have just spent several minutes smiling like a loon. Thanks mate.
Loon: North American waterfowl, primary character from "On Golden Pond" that forebodes doom and gloom.
Loon: a relative of Daffy Duck and/or Woody Woodpecker, characters that laugh at doom and gloom.
In either case, I assume that is a positive thing.
-
@duanekemp said:
not a solution for "on purpose" projects.
A couple of weeks ago, while I was modeling that compost-turning machine you probably saw on the SketchUp Facebook page, a friend came up with one of those "magic" AI modeling websites..
He prompted the robot to produce "A square sofa pillow with white and red stripes" to put it in a rendering.
The robot proposed 4 different (half-decent) pillows and he was like "You see? You can use this stuff for actual work". So I was curious to test it for my "actual work" and prompted more or less the following (which were the exact requirements for my animated model), just to see to what extent it could be "useful":
I need a self-propelled compost heap turning machine, about 3 meters tall and 4 meters wide.
It should consist of 3 draw calls, the first one for the main body, the second one for the animated tracks and the third one for the animated roller.
The main material should be orange paint with compost splats coming from below and the "AMIU Puglia S.p.a" logo on the back of the cabin.
The textures should be packed for Unity HDRP metallic-smoothness PBR shader.
I need the UV chart to be split in 2 different UDIM tiles, a 4k set for the main body and a 2k set for the animated parts.
It should be rigged to follow a spline, and I need constraints on the tracks and the roller so they follow accordingly whenever the model animates along the spline.
The robot proposed 4 different machines similar to a coffee grinder (static models, with only a base color map, about 1m x 1m x 1m large, with no tracks or roller whatsoever).
Two were blue and two were green.
None of them was even orange. So yeah.. that's it.
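As a side note on what the "packed for Unity HDRP" requirement above actually asks for: as far as I know, HDRP's Lit shader reads a single mask map with metallic in R, ambient occlusion in G, the detail mask in B and smoothness in A. A minimal Pillow sketch of that packing, with made-up file names:

```python
# Pack separate grayscale maps into one RGBA "mask map" in the layout the
# Unity HDRP Lit shader expects: R=metallic, G=AO, B=detail, A=smoothness.
from PIL import Image

metallic = Image.open("metallic.png").convert("L")
occlusion = Image.open("ao.png").convert("L")
detail = Image.open("detail_mask.png").convert("L")
smoothness = Image.open("smoothness.png").convert("L")

mask_map = Image.merge("RGBA", (metallic, occlusion, detail, smoothness))
mask_map.save("body_maskmap_4k.png")
```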
-
@panixia said:
@duanekemp said:
not a solution for "on purpose" projects.
The robot proposed 4 different machines similar to a coffee grinder (static models, with only a base color map, about 1m x 1m x 1m large, with no tracks or roller whatsoever).
Two were blue and two were green.
None of them was even orange. So yeah.. that's it.
Marcello, that just made me laugh out loud.
-
There are examples that look virtually the same, but one took 1 hour while another took 19 hours.
Why such HUGE gaps? I never realized all this AI generation is so time consuming.
Is this done locally?
-
@rv1974 said:
There are examples that look virtually the same, but one took 1 hour while another took 19 hours.
Why such HUGE gaps? I never realized all this AI generation is so time consuming.
Is this done locally?
These are final selections out of a collection of a hundred+ images saved over days of testing with different settings. The gap does not indicate the time to create the image; it is only the time designation given when saving an image out. So, after going to bed and starting the next day, that's already many, many hours between tests, let alone series of tests before finding something I felt worthy of saving. Then, when finished, the family helped select their favorites, which means a lot of saved images are not presented.
So, generating takes no time at all. But finding something worth saving may take a lot of time and myriad tries, which explains the difference in the gaps between shots.
I hope that makes sense anyway.
-
Very interesting post.
SU Diffusion is quite good, but there is little opportunity to influence the result.
I took your SketchUp output and tried to create an image with Stable Diffusion and ControlNet. Here you have a much bigger influence on the image.
I hope you like the result
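The post doesn't say which ControlNet setup was used; here is a minimal sketch of the same idea with the Hugging Face diffusers library, assuming the Canny edge model (the prompt and file names are made up):

```python
# Turn a SketchUp export into an edge map, then let ControlNet constrain
# Stable Diffusion so the generated image keeps the model's geometry.
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

export = np.array(Image.open("sketchup_export.png").convert("RGB"))
edges = cv2.Canny(export, 100, 200)                # geometry as an edge map
control = Image.fromarray(np.stack([edges] * 3, axis=-1))

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet,
    torch_dtype=torch.float16).to("cuda")

image = pipe("photorealistic house exterior, evening light",
             image=control, num_inference_steps=30).images[0]
image.save("controlled_result.png")
```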
-
Mixing the Thea output and the Stable Diffusion image in Thea gave me this result:
I think it has the benefits of both worlds: the accuracy of a render and the dirt/light of the AI.
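For a quick approximation of that mix outside Thea, a tiny sketch (made-up file names) that simply blends the two outputs 50/50; the mask-based composite sketched earlier in the thread gives finer control:

```python
# Average the Thea render and the Stable Diffusion output pixel by pixel.
# Both images must share the same resolution.
from PIL import Image

thea = Image.open("thea_render.png").convert("RGB")
sd = Image.open("sd_output.png").convert("RGB")
Image.blend(thea, sd, alpha=0.5).save("mixed_output.png")
```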
-
@jo-ke said:
Very interesting post.
SU Diffusion is quite good, but there is little opportunity to influence the result.
I took your SketchUp output and tried to create an image with Stable Diffusion and ControlNet. Here you have a much bigger influence on the image.
I hope you like the result
And until Trimble adjusts the parameters to allow a bit of control beyond text prompt suggestions, it will remain a sort of "toy" for concepts. I'm still rather curious as to why SketchUp does not allow more control over the plugin's interpretation of the geometry, or simply extend the limits of respecting geometry so that it doesn't malform the geometry and lines in its output. Perhaps a license thing... don't know.
-
It is the first version; I hope it will get better and better.
-
Damn cool model!
Very interesting the way Diffusion interprets the model. Some very cool, some kinda impressionistic, but I like the variety.
Your Thea renders are very cool.
-
@jo-ke said:
I hope you like the result
Reminds me of Disney Animation movie backgrounds from the '70s. Cool stuff.
-
@bryan k said:
Damn cool model!
Very interesting the way Diffusion interprets the model. Some very cool, some kinda impressionistic, but I like the variety.
Your Thea renders are very cool.
Thank you Bryan. Much appreciated.
Impressionistic is a good word to describe the interpretation-ism happening here.
-
Wow Duane, I had missed this post... amazing!
Just saw it on your ArtStation page:
https://duanekemp.artstation.com/projects/WBqK8N