Playing around with SketchUp Diffusion
-
New test: merging Twilight Render and SketchUp Diffusion
-
and another one.
It really gives a touch of reality
-
and another one with rayscaper
-
new test
-
Very interesting results.
-
It's very cool to see the interplay of accurate rendering and Stable Diffusion. Are you already using this technique for client renders, or is it just experimentation?
Cheers,
Thomas
-
And another one: SketchUp to ControlNet to Stable Diffusion.
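For anyone who wants to try the same route outside a UI, here is a minimal sketch of a SketchUp-export-to-ControlNet pass with the diffusers library. The Canny ControlNet, the SD 1.5 base model, the filenames and the prompt are all assumptions about one possible setup, not the exact settings used for the image above.

```python
import cv2
import numpy as np
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline, UniPCMultistepScheduler
from PIL import Image

# A Canny ControlNet guides Stable Diffusion with the edges of the SketchUp export.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# Turn the SketchUp screenshot (hypothetical filename) into an edge map.
sketchup = np.array(Image.open("sketchup_export.png").convert("RGB"))
edges = cv2.Canny(sketchup, 100, 200)
control_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

result = pipe(
    prompt="photorealistic architectural exterior, natural daylight",
    image=control_image,
    num_inference_steps=25,
).images[0]
result.save("controlnet_result.png")
```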
-
@pixelcruncher said:
It's very cool to see the interplay of accurate rendering and Stable Diffusion. Are you already using this technique for client renders, or is it just experimentation?
Cheers,
Thomas
Not yet, but I am trying to get better results by combining different methods. But I am still not happy with it.
-
@jo-ke said:
Not yet, but I am trying to get better results by combining different methods. But I am still not happy with it.
Have you tried ComfyUI yet, jo-ke? It might help you dial it in, but the learning curve is steep.
If you are not familiar with it, there is a ton of stuff on YouTube, plus there's a great site, https://comfyworkflows.com/, where the shared images contain the metadata of how they were made.
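For reference, the way those shared images carry their recipe is that ComfyUI writes the workflow graph as JSON into the PNG's text chunks. A minimal sketch for peeking at it with Pillow (the filename is hypothetical, and the exact chunk keys may differ between ComfyUI versions):

```python
from PIL import Image

# ComfyUI stores the workflow graph as JSON in the PNG's text chunks
# (typically under the keys "workflow" and "prompt").
img = Image.open("shared_render.png")  # hypothetical file downloaded from comfyworkflows.com

for key in ("workflow", "prompt"):
    if key in img.info:
        print(f"--- {key} ---")
        print(img.info[key][:500])  # this JSON can be dragged back into ComfyUI to rebuild the graph
```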
-
Yes, but after an update ComfyUI does not work anymore. I am more familiar with Automatic1111.
In the end, I get the best results with renderings. Here: Twinmotion.
-
Twinmotion + Stable Diffusion gives the perfect imperfections...
I love this combination.
-
@jo-ke said:
In the end, I get the best results with renderings. Here: Twinmotion.
Rolling up your sleeves and getting your hands dirty still looks best. Diffusion can help you in terms of direction to take with lighting and colour palette, but for a final result I still believe you need human creativity.
But AI will get there, and it will be interesting to see how the likes of V-Ray, D5, Twinmotion, etc. respond.
These shots, though. Really nice output.
-
@jo-ke said:
I am testing SketchUp Diffusion.
I took an old model and let it go through the new app, SketchUp Diffusion. The result is quite good, but the quality looks a bit like a sketch. I took the picture to Stable Diffusion and developed it further with image2image and Realistic Vision. The result is really great...
I find it interesting that SketchUp Diffusion and Stable Diffusion gave a very similar treatment to both images. For instance, the sky is almost identical, as is the lighting. The only difference I can see is the material mapping and roughness. I wonder why this is?
-
@rich o brien said:
@jo-ke said:
Not yet, but I am trying to get better results by combining different methods. But I am still not happy with it.
Have you tried ComfyUI yet, jo-ke? It might help you dial it in, but the learning curve is steep.
If you are not familiar with it, there is a ton of stuff on YouTube, plus there's a great site, https://comfyworkflows.com/, where the shared images contain the metadata of how they were made.
What are the benefits of Comfy (in comparison to A1111)?
-
@solo said:
So, this whole AI thing may be more than a fad?
So far the only use (from all this AI craze) I see is 3D people enhancement in stills.
-
@rich o brien said:
@jo-ke said:
In the end, I get the best results with renderings. Here: Twinmotion.
Rolling up your sleeves and getting your hands dirty still looks best. Diffusion can help you in terms of direction to take with lighting and colour palette, but for a final result I still believe you need human creativity.
But AI will get there, and it will be interesting to see how the likes of V-Ray, D5, Twinmotion, etc. respond.
These shots, though. Really nice output.
Meanwhile, Chaos implemented the DLSS 3.5 thing in Vantage, and it really works wonders.
-
@rv1974 said:
What are the benefits of Comfy (in comparison to A1111)?
It's node based, so it's easier to follow the logic if you're comfortable in node-based environments.
If not, stick with A1111.
-
@l i am said:
I find it interesting that SketchUp Diffusion and Stable Diffusion gave a very similar treatment to both images. For instance, the sky is almost identical, as is the lighting. The only difference I can see is the material mapping and roughness. I wonder why this is?
The SketchUp Diffusion output is the Stable Diffusion input. That is the reason both look similar, but Stable Diffusion img2img develops it a bit further into a better result.
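In script form, that refinement step looks roughly like the sketch below, using the diffusers img2img pipeline. The Realistic Vision repo id, filenames, prompt and strength are assumptions standing in for whatever settings were actually used in A1111; the point is only that a low denoising strength keeps the SketchUp Diffusion composition while adding detail.

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Load a photoreal SD 1.5 checkpoint; the Realistic Vision repo id is an assumption,
# and any similar checkpoint works the same way.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V5.1_noVAE", torch_dtype=torch.float16
).to("cuda")

# The SketchUp Diffusion output (hypothetical filename) becomes the img2img init image.
init_image = Image.open("sketchup_diffusion_output.png").convert("RGB").resize((768, 512))

result = pipe(
    prompt="photorealistic architecture, natural daylight, detailed materials",
    image=init_image,
    strength=0.45,        # low strength keeps the composition, refines materials and lighting
    guidance_scale=7.0,
    num_inference_steps=30,
).images[0]
result.save("refined.png")
```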
-
interesting observation on AI:
-
Overlay in Photoshop:
- SketchUp
- SketchUp Diffusion
- Rayscaper
Nice result.
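A rough script equivalent of that layer stack, for anyone without Photoshop: simple opacity blends with Pillow over hypothetical filenames. This is only an approximation; Photoshop's actual Overlay blend mode uses a different per-pixel formula.

```python
from PIL import Image

# Hypothetical filenames for the three passes being stacked.
base = Image.open("sketchup_export.png").convert("RGB")
diffusion = Image.open("sketchup_diffusion.png").convert("RGB").resize(base.size)
render = Image.open("rayscaper_render.png").convert("RGB").resize(base.size)

# 50% opacity blends as a stand-in for Photoshop layer opacity.
mix = Image.blend(base, diffusion, alpha=0.5)
mix = Image.blend(mix, render, alpha=0.5)
mix.save("composite.png")
```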