A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.
Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.
This community’s icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
I'll just leave this link here as a counterpoint (somewhat NSFW):
https://www.reddit.com/r/StableDiffusion/comments/11un888/flamboyant_origami_fgures/
A whole lot of weird stuff can be created by bashing things together with AI. The beauty of AI is, after all, that you can “edit” with high-level concepts, not just raw pixels.
And as for humans and dogs: https://imgur.com/a/TdXO7tz
That’s not concept mixing, and it’s not proper origami either (paper doesn’t fold like that). The AI knows “realistic swan” and “origami swan”, meaning it has a gradient from “realistic” to “origami”; crucially, that gradient changes only the style, not the subject. It also knows “realistic human”, so follow the gradient down to “origami human” and there you are. It’s the same capability that lets it draw a realistic Mickey Mouse.
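That style “gradient” can be sketched as interpolation between the text-encoder embeddings of the two prompts; spherical linear interpolation (slerp) is the usual choice because encoder embeddings behave more like directions than raw coordinates. A minimal sketch, with toy vectors standing in for real encoder outputs (the vectors and their dimensionality are illustrative assumptions):

```python
import numpy as np

def slerp(t, v0, v1):
    """Spherical linear interpolation between two embedding vectors.

    t=0 returns v0, t=1 returns v1, values in between walk along the
    arc connecting the two directions.
    """
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if np.isclose(theta, 0.0):
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1 - t) * v0 + t * v1
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)

# Hypothetical stand-ins for the embeddings of "realistic swan" and
# "origami swan" -- a real text encoder would produce much larger vectors.
realistic = np.array([1.0, 0.0, 0.2])
origami = np.array([0.3, 1.0, 0.1])

halfway = slerp(0.5, realistic, origami)
```

Feeding `halfway` to the diffusion model in place of a single prompt embedding is what moving along the realistic-to-origami gradient amounts to.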
Its understanding of two different subjects, say “swan” and “human”, doesn’t mean it has a gradient between the two, much less a usable one. It might be able to match up the legs and blend those a bit because the anatomy roughly corresponds, and a beak is a protrusion it might try to match with the nose. Wings and arms? It has probably seen pictures of angels, and now we’re nowhere close to a proper chimera.

There’s a model specialised on chimeras (gods is that ponycat cute), but when you flick through the examples you’ll see that it’s quite limited unless you happen to get lucky: you often get properties of both chimera ingredients, but they’re not connected in any reasonable way. That’s different from the behaviour of base SDXL, which is far more prone to bail out and simply put the ingredients next to each other. If you want it to blend things reliably, you’ll have to train a specialised model on appropriate input data, like e.g. this one.
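Why a gradient between two *subjects* needn’t be usable can be illustrated with a toy calculation: in a high-dimensional embedding space, the directions for two unrelated concepts tend to be nearly orthogonal, and naively averaging them lands well off the unit sphere the individual embeddings sit on. The dimensionality and the random vectors below are illustrative assumptions, not measurements from a real text encoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the embeddings of two unrelated subjects. In high
# dimensions, random unit vectors are nearly orthogonal -- much like the
# "swan" and "human" directions need not share a meaningful axis.
swan = rng.normal(size=768)
human = rng.normal(size=768)
swan /= np.linalg.norm(swan)
human /= np.linalg.norm(human)

midpoint = 0.5 * (swan + human)

print(np.dot(swan, human))       # near 0: almost orthogonal
print(np.linalg.norm(midpoint))  # well below 1: off the unit sphere
```

The midpoint has roughly 70% of the length of either embedding and points somewhere neither concept occupies, which is one intuition for why the model produces disconnected body parts rather than a coherent chimera.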