- cross-posted to:
- technology@beehaw.org
Each of these reads like an extremely horny and angry man yelling their basest desires at Pornhub’s search function.
Strictly speaking, you arguably don’t either. Your knowledge of the world is based on your past experiences.
You do have more-sophisticated models than current generative AIs, though, for constructing things out of aspects of the world that you’ve experienced before.
The current crop is effectively more sophisticated than simply pasting together content – try generating an image and then adding “blue hair” or something to the prompt, and you can get the same hair, just recolored. And their ability to replicate artistic styles is based on commonalities across the works they’ve seen; you don’t wind up seeing verbatim chunks of material done by that artist. But you’re right that they are considerably more limited than a human.
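As a rough illustration of that “recolor the hair” behavior, here’s a minimal sketch using the Hugging Face diffusers library – the model ID and prompts are just assumptions for the example, and fixing the random seed is what keeps the composition similar between the two runs:

```python
# Minimal sketch: regenerate the "same" image with an edited prompt.
# Assumes the Hugging Face diffusers library and a CUDA GPU; the model
# ID and prompts are illustrative, not from the original post.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "portrait of a woman, studio lighting"

# Fixing the seed keeps the overall composition stable, so the
# prompt edit mostly changes the named attribute (here, hair color).
generator = torch.Generator("cuda").manual_seed(42)
base = pipe(prompt, generator=generator).images[0]

generator = torch.Generator("cuda").manual_seed(42)
edited = pipe(prompt + ", blue hair", generator=generator).images[0]

base.save("base.png")
edited.save("blue_hair.png")
```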
Like, you have a concept of relative characteristics, and the current generative AIs do not. You can tell a human artist “make those breasts bigger”, and they can extrapolate from a model built on things they’ve seen before. The current crop of generative AIs cannot. But I expect that the first bigger-breast generative AI is going to attract users, based on a lot of what generative AIs are being used for now.
There is also, as I understand it, some notion of depth in images in some existing systems, but current generative AIs don’t have a full 3D model of what they’re rendering.
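For a concrete sense of what that partial depth understanding looks like, here’s a minimal sketch of monocular depth estimation using the Hugging Face transformers library; the model ID and filenames are assumptions for illustration. The output is a per-pixel depth map – closer to 2.5D than a true 3D scene model:

```python
# Minimal sketch: estimate per-pixel depth from a single image.
# Assumes the Hugging Face transformers library; model ID and
# filenames are illustrative.
from transformers import pipeline
from PIL import Image

depth_estimator = pipeline("depth-estimation", model="Intel/dpt-large")

result = depth_estimator(Image.open("render.png"))

# result["depth"] is a grayscale PIL image (for DPT-style models,
# brighter typically means closer). It's a 2.5D depth map, not a
# full 3D model of the scene.
result["depth"].save("depth_map.png")
```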
But they’ll get more sophisticated.
I would imagine that there will be a combination of techniques. LLMs may be used, but I doubt that they will be pure LLMs.