this post was submitted on 30 Aug 2024
34 points (74.3% liked)

Technology

[–] [email protected] 26 points 2 months ago (2 children)

My theory about what happened next — which is supported by conversations I’ve had with researchers in artificial intelligence, some of whom worked on Bing — is that many of the stories about my experience with Sydney were scraped from the web and fed into other A.I. systems.

These systems, then, learned to associate my name with the demise of a prominent chatbot. In other words, they saw me as a threat.

🤦‍♀️

[–] [email protected] 13 points 2 months ago

I'm tired of people ascribing any sort of intelligence to AI. It's not thinking, it's not seeing you as a threat, it's just predicting a probable response based on its training data.
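To make "just predicting a probable response" concrete: here's a toy bigram frequency model, a deliberately crude sketch (nothing like a real LLM's transformer, and the corpus is made up for illustration), but it shows the same underlying idea of choosing the statistically most likely continuation from training data.

```python
from collections import Counter, defaultdict

# Toy "training data": the model only ever sees these sentences.
corpus = "the bot saw him as a threat . the bot read stories about him ."
tokens = corpus.split()

# Count which token follows which (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(tokens, tokens[1:]):
    following[prev][nxt] += 1

def predict(word):
    """Return the most frequent next token after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "bot" -- the only word that ever followed "the"
```

There's no understanding or fear anywhere in that loop; scale the counts up by billions of parameters and you get fluent text, but the mechanism is still association, not belief.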

[–] [email protected] 18 points 2 months ago* (last edited 2 months ago) (1 children)

This guy is a moron.

If the bots are saying they hate him and that he sucks, it's because that's what the general consensus was in all the data they scraped, not because the bot is scared of him as an AI killer.

[–] [email protected] 7 points 2 months ago

The bots are not reliable summarizers like that. They often can't tell the difference between the author and the subject of a piece of writing.

[–] [email protected] 8 points 2 months ago* (last edited 2 months ago) (1 children)

My theory about what happened next — which is supported by conversations I’ve had with researchers in artificial intelligence, some of whom worked on Bing — is that many of the stories about my experience with Sydney were scraped from the web and fed into other A.I. systems.

These systems, then, learned to associate my name with the demise of a prominent chatbot. In other words, they saw me as a threat.

LLMs predict text; they don't have feelings or awareness. Even if a researcher did say that, I'd point to the Google engineer who thought an LLM had become sentient because it said so when generating text.

Guys, my paper is sentient, it says so.

If the AI says he's dishonest and sensational, that's because enough people on the internet have said so that the AI considers it to be true.

[–] [email protected] 4 points 2 months ago

It doesn't even take people on the internet saying it, though; just an association between his name and people saying those things, which happens naturally to anyone who writes news articles about a subject.

[–] [email protected] 3 points 2 months ago* (last edited 2 months ago)

Prompt Google’s Gemini for its opinion of me, and it may respond, as it did one recent day, that my “focus on sensationalism can sometimes overshadow deeper analysis.”

Based on this article, it turns out the chatbots do get things right sometimes. The rest of his article makes it pretty clear he's aware they're not intelligent, but he just couldn't resist the sensational opening of 'the AIs hate me!'