this post was submitted on 02 Mar 2024
388 points (96.4% liked)
Technology
I beg to differ.
The LLM is executing a function call on a diffusion image model. The LLM does not generate the image itself.
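(A minimal sketch of that hand-off, assuming a generic tool-calling setup: the names `generate_image` and `diffusion_model` are made up for illustration, not OpenAI's actual internals. The chat model only emits a structured call; a separate diffusion model produces the pixels.)

```python
# Hypothetical sketch: the LLM emits a tool call, and the front-end routes it
# to a separate diffusion image model. Nothing here is OpenAI's real API.
import json

def diffusion_model(prompt):
    """Stand-in for a diffusion image generator such as DALL·E."""
    return f"<image bytes for: {prompt}>".encode()

def handle_llm_output(llm_message):
    """Dispatch a tool call if the LLM asked for one, else return its text."""
    call = llm_message.get("tool_call")
    if call and call["name"] == "generate_image":
        args = json.loads(call["arguments"])
        return diffusion_model(args["prompt"])
    return llm_message["content"]

# The LLM's "reply" is just a request to call the image tool.
llm_message = {
    "content": None,
    "tool_call": {
        "name": "generate_image",
        "arguments": json.dumps({"prompt": "a cat in a spacesuit"}),
    },
}
print(handle_llm_output(llm_message))
```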
This doesn't contradict what the OP said. ChatGPT is now an interface to both an LLM and a diffusion-based image generator.
You’re being pedantic, and confidently ignorant. The product is called “ChatGPT”, and through it you can access multiple models, like GPT-3.5 or DALL·E.
ChatGPT is just a front-end that maintains a session which gets fed to an LLM each time you add a reply, and it now has access to image generation too, so I was wrong.
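(Rough sketch of what “maintains a session that gets fed to an LLM each time you add a reply” means in practice; `call_llm` is a hypothetical placeholder, not the real backend.)

```python
# Sketch of a chat front-end: it keeps appending to one message list and
# sends the whole history back to the model on every turn.
def call_llm(messages):
    """Placeholder for the actual model call; here it just echoes."""
    return {"role": "assistant", "content": f"echo: {messages[-1]['content']}"}

session = [{"role": "system", "content": "You are a helpful assistant."}]

def add_reply(user_text):
    session.append({"role": "user", "content": user_text})
    reply = call_llm(session)   # the full conversation goes in every time
    session.append(reply)       # and the reply becomes part of the history
    return reply["content"]

print(add_reply("hi"))
print(add_reply("now draw me a cat"))  # a real front-end could route this to image gen instead
```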
Yeah, but the model that does the images is actually DALL·E; you are just using GPT's interface to create them.
So, I’m using ChatGPT.
Thank you for agreeing with me.
Sure, sure, I was not disagreeing; technically you are using ChatGPT. Just pointing out that the model actually handling the image creation is not ChatGPT.
Pedantic.
Imbecile.
How so?
I mean, the GPT model is an LLM, and ChatGPT uses DALL·E in the background to create images. So depending on the definition, you’re both correct :-)
Depending on how I define anything means I’m always correct, I guess. 🤷‍♂️
Girl on the right probably killed a Spanish swordsmith back in the day.