Kemp's Outpost
-
@rv1974 said:
there are examples that look virtually the same, but one took 1 hour while the other took 19 hours.
Why such HUGE gaps? I never realized all this AI generation was so time-consuming.
Is this done locally?
These are final selections out of a collection of a hundred-plus images saved over days of testing with different settings. The gap does not indicate the time taken to create the image; it is only the designation given when saving out an image. So, after going to bed and starting the next day, that's already many, many hours between tests, let alone series of tests before finding something I felt worthy of saving. Then, when finished, the family helped select their favorites, which means a lot of saved images are not presented. So, generating an image takes no time at all, but finding something worth saving may take a lot of time and myriad tries, which explains the difference in the gaps between shots.
I hope that makes sense anyway.
-
Very interesting post.
SU Diffusion is quite good, but there is little opportunity to influence the result.
I took your SketchUp output and tried to create an image with Stable Diffusion and ControlNet. Here you can have a much bigger influence on the image.
I hope you like the result
-
Mixing the Thea output and the Stable Diffusion image in Thea gave me this result:
I think it has the benefits of both worlds: the accuracy of a render and the dirt/light of the AI.
-
@jo-ke said:
Very interesting post.
SU Diffusion is quite good, but there is little opportunity to influence the result.
I took your SketchUp output and tried to create an image with Stable Diffusion and ControlNet. Here you can have a much bigger influence on the image.
I hope you like the result
And until Trimble adjusts the parameters to allow a bit of control, more than text prompt suggestions, it will remain a sort of "toy" for concept work. I'm still rather curious as to why SketchUp is not allowing more control over the effect of the plugin on its interpretation of geometry, or simply extending the limits of respecting geometry so it does not malform the geometry and lines in its output. Perhaps a licensing thing... I don't know.
-
It is the first version; I hope it will get better and better.
-
Damn cool model!
Very interesting how Diffusion interprets the model. Some results are very cool, some kinda impressionistic, but I like the variety.
Your Thea renders are very cool.
-
@jo-ke said:
I hope you like the result
Reminds me of Disney Animation movie backgrounds from the '70s. Cool stuff.
-
@bryan k said:
Damn cool model!
Very interesting how Diffusion interprets the model. Some results are very cool, some kinda impressionistic, but I like the variety.
Your Thea renders are very cool.
Thank you Bryan. Much appreciated.
Impressionistic is a good word to describe the interpretation-ism happening here.
-
Wow Duane, I had somehow missed this post... amazing!
Just saw it on your Artstation page:
https://duanekemp.artstation.com/projects/WBqK8N
-
@majid !!!
Nice to see you here. Thanks for the compliment!