this post was submitted on 21 Nov 2024
82 points (96.6% liked)
Technology
I’ve found myself thinking “well, you just helped teach the AI about that one…” several times while reading content online.
It’s a strange thing to know that a form of the basilisk is real. Things posted will help AI get better, if only by teeny tiny increments each time.
AI learning isn't the issue, and it's not something we will be able to put a lid on either way. Either it destroys the world or it saves it. It doesn't need to learn much to do so, besides evolving actual self-agency and sovereign thought.
What is a huge issue is the secretive, non-consensual mining of people's identities and expressions.
And then acting all normal about it.
So... there is no Artificial Intelligence. The AI cannot hurt you. It is just a (buggy) statistical language parsing system. It does not think, it does not plan, it does not have goals, it does not understand, and it doesn't even really "learn" in a meaningful sense.
If we're talking about machine learning systems based on multi-dimensional statistical analyses, then it will do neither. Both extremes are sensationalism, and any argument that either outcome will come from the current boom of ML technology is utter nonsense designed to drive engagement.
Oh, is that all?
No one on the planet has any idea how to replicate the functionality of consciousness. Sam Altman would very much like you to believe that his company is close to achieving this so that VCs will see the public interest and throw more money at him. Sam Altman is a snake oil salesman.
This is absolutely true and correct and the collection and aggregation of data on human behavior should be scaring the shit out of everyone. The potential for authoritarian abuses of such data collection and tracking is disturbing.
I didn’t say it was an issue. I just said it was a strange feeling to know AI is watching us talk past each other.
I sort of misread your comment as saying the basilisk is inevitable, which is a thought I would describe as at least oopsie-issue-level.
Still, there are many other people bent on directly poisoning AI to counteract the learning, but I fear that will get us to a dangerously rogue, incoherent AI faster than if we aimed for maximum coherent intelligence and hoped that benevolence is an emergent behavior of it.
But more to the point: if we build AI by grossly exploiting our own fellow humans, how do we expect it will treat us once it reaches a state of independent learning?