[–] [email protected] 8 points 1 day ago (33 children)

It still can’t count the Rs in strawberry, I’m not worried.

[–] [email protected] 2 points 1 day ago (22 children)
[–] [email protected] 8 points 1 day ago* (last edited 1 day ago) (20 children)

No. It literally cannot count the number of R letters in strawberry. It says 2; there are 3. ChatGPT had this problem, but it seems to be fixed now. However, if you ask “are you sure?”, it says 2 again.

Ask ChatGPT to make an image of a cat without a tail. Impossible. Odd, I know, but it's one of those weird AI issues.

[–] [email protected] 3 points 1 day ago (3 children)

Because there aren't enough pictures of tail-less cats out there to train on.

It's literally impossible for it to give you a cat with no tail because it can't find enough to copy and ends up regurgitating cats with tails.

Same for a glass of water spilling over: it can't show you an overfilled glass of water because there aren't enough pictures available for it to copy.

This is why telling a chatbot to generate a picture for you will never be a real replacement for an artist who can draw what you ask them to.

[–] [email protected] 3 points 1 day ago (1 children)

Not really. It's supposed to understand what a tail is, what a cat is, and which part of the cat is the tail. That's how the "brain" behind AI works.

[–] [email protected] -1 points 1 day ago* (last edited 1 day ago) (2 children)

It searches the internet for cats without tails and then generates an image from a summary of what it finds, which contains more cats with tails than without.

That's how this Machine Learning program works.

[–] [email protected] 1 points 10 hours ago (1 children)

It doesn't search the internet for cats, it is pre-trained on a large set of labelled images and learns how to predict images from labels. The fact that there are lots of cats (most of which have tails) and not many examples of things "with no tail" is pretty much why it doesn't work, though.

[–] [email protected] 0 points 10 hours ago (1 children)

And where did it happen to find all those pictures of cats?

[–] [email protected] 1 points 8 hours ago (1 children)

It's not the "where" specifically I'm correcting, it's the "when." The model is trained, then the query is run against the trained model. The query doesn't involve any kind of internet search.

[–] [email protected] -1 points 8 hours ago (1 children)

And I care about "how" it works and "what" data it uses because I don't have to walk on eggshells to preserve the sanctity of autocomplete software.

You need to curb your pathetic ego and really think hard about whether feeding the open internet to an ML program with an LLM slapped onto it is actually any more useful than the sum of its parts.

[–] [email protected] 2 points 5 hours ago

Dawg you're unhinged

[–] [email protected] 1 points 15 hours ago (1 children)

That isn't at all how something like a diffusion-based model works, actually.

[–] [email protected] 0 points 14 hours ago (1 children)

So what training data does it use?

They found data to train it that isn't just the open internet?

[–] [email protected] 1 points 12 hours ago (1 children)

Regardless of training data, it isn't matching to anything it's found and squiggling shit up, or whatever was implied. Diffusion models are trained to iteratively convert noise into an image, guided by the text prompt and by the features of the current iteration. That's why they take multiple passes, and why the generation visibly transforms over multiple steps from an undifferentiated soup of shape and color into the final picture.

My point was that they aren't doing some search across the web, either externally or via internal storage of scraped training data, to "match" your prompt to something. They iterate from static noise through multiple passes to a "finished" image, where each pass's transformation is a complex, dynamic probabilistic function built from the training data, but not directly mapping to it in any way we'd ordinarily consider.
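
For anyone curious, here's a toy sketch of that loop, with a cheating stand-in for the trained denoiser (the real thing is a huge learned network, so this only shows the shape of the process, not how any actual model works):

```python
# Toy reverse-diffusion loop: iteratively turn noise into an "image".
import numpy as np

rng = np.random.default_rng(0)

target = rng.uniform(0, 1, size=(8, 8))  # the "image" the prompt describes
x = rng.normal(size=(8, 8))              # start from pure static noise

num_steps = 50
for t in range(num_steps):
    # Stand-in for the trained network, which would *predict* this
    # noise from learned weights rather than knowing the target.
    predicted_noise = x - target
    x = x - predicted_noise / (num_steps - t)  # one small denoising pass
    # Each pass nudges the whole canvas, the soup of shape and color
    # gradually resolving into the final image.

print("remaining noise:", float(np.abs(x - target).mean()))
```

Nothing in the loop searches for anything; the only place the training data enters is through whatever the denoiser learned.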

[–] [email protected] 0 points 12 hours ago* (last edited 12 hours ago) (1 children)

Oh ok so training data doesn't matter?

It can generate any requested image without ever being trained?

Or does data not matter when it makes your argument invalid?

Tell me how moving the bar proves that AI is more intelligent than the sum of its parts.

[–] [email protected] 2 points 6 hours ago

Ah, you seem to be engaging in bad faith. Oh well, hopefully those reading along are at least a bit closer to understanding what these models are doing and can engage in more informed and coherent discussion on the subject. Good luck or whatever to you!

[–] [email protected] 3 points 1 day ago

Oh, that’s another good test. It definitely failed.

There are lots of Manx photos though.

Manx images: https://duckduckgo.com/?q=manx&iax=images&ia=images

[–] [email protected] 0 points 21 hours ago (2 children)

So... with all the supposed reasoning stuff they can do, and the supposed "extrapolation of knowledge", they cannot figure out that a tail is part of a cat, and which part it is.

[–] [email protected] 2 points 8 hours ago

The "reasoning" models and the image generation models are not the same technology and shouldn't be compared against the same baseline.

[–] [email protected] 2 points 14 hours ago (1 children)

The "reasoning" you are seeing is it finding human conversations online, and summerizing them

[–] [email protected] -1 points 10 hours ago

I'm not seeing any reasoning; that was the point of my comment. That's why I said "supposed".
