OpenAI’s Sam Altman is becoming one of the most powerful people on Earth. We should be very afraid
(www.theguardian.com)
I'm going to attract downvotes, but this article doesn't convince me that he's becoming powerful or that we should be very afraid. He's a grifter: sleazy, and making a shit ton of money.
Anyone who has used these tools knows they are useful, but they aren't the great investment the investors claim they are.
Being able to fool a lot of people into believing it's intelligent doesn't make it good. When it can fool experts in a field, actively learn, or solve problems it wasn't trained on, that will be impressive.
Generative AI is just a new method of signal processing. The input signal, the text prompt, is passed through a function (the model) to produce another signal (the response). The model is produced by a lot of input text, which can largely be noise.
To get AGI, it needs to be able to process a lot of noise across many different signals. "Reading text" can be one signal on a "communication" channel - vision and sound can ride on it too: body language, speech. But a neural network with human ability would require all five senses, plus reflexive responses to them - fear, guilt, trust, comfort, etc. We are nowhere near that.
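The "model as a function over signals" framing above can be sketched with a deliberately tiny stand-in - a bigram Markov chain, which is obviously not how an LLM works at scale, but it shows the same shape: a function is learned from a pile of input text (much of which could be noise), and generation is just passing the prompt signal through that function to produce a response signal.

```python
import random

def train(corpus: str) -> dict:
    """Learn the 'function' from input text: a bigram table
    mapping each word to the list of words observed after it."""
    words = corpus.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table: dict, prompt: str, n: int = 5, seed: int = 0) -> str:
    """Pass the prompt 'signal' through the learned function,
    emitting up to n more words as the response 'signal'."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(n):
        nxt = table.get(out[-1])
        if not nxt:  # dead end: no continuation was ever observed
            break
        out.append(rng.choice(nxt))
    return " ".join(out)

model = train("the cat sat on the mat and the cat ran")
print(generate(model, "the cat"))
```

Scaling the table up to billions of learned parameters changes the quality of the output enormously, but not the basic input-signal-to-output-signal structure the comment describes.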
The article seems to be based on a number of flawed premises.
Firstly, that ChatGPT is the only LLM. It isn't, and better, stronger, cheaper alternatives are likely to emerge.
Secondly, that LLMs are a step on the way to AGI - as if any minute now they're going to evolve. They're not; they're a one-trick pony, and the trick is producing coherent sentences. That's it.
The most terrifying AIs aren't even LLMs.
The same AI imaging techniques that were used to reconstruct a 'picture' of a black hole a few years ago can be trained on the EM signatures of HDMI cables, yielding a TEMPEST-like system that can reliably read and decode any monitor within a thousand feet.
That same approach can be trained on social media activity and used to infer all sorts of attributes: degree of depression, gender, career, degree of sociopathy, and plenty more - including whether a person is pregnant before they even know it themselves.
LLMs are a toy; cross-linking AIs are a menace.
But hardly anyone is looking at the serious problem.
For now it's all butthurt artists angry that people are making porn with their publicly available art without paying them.
The real issue is our right to privacy and how we will be targeted once that is irrelevant.
Imagine if Drumpf gets into office again and one of his junior suck-ups says, "Hey, we can identify every social media account that spoke badly about you, and we have a good chance of connecting them to real-world addresses and identities." With SCOTUS having granted presidential immunity, what do you think he would do with that knowledge?
Exactly. And that's why we're in a bubble. Once the execs are finally convinced by their tech people that LLMs aren't some kind of magic bullet, we'll see a pretty big correction. As an investor, I'm not exactly looking forward to that, but as someone who works in tech, I'm honestly not worried about my job.
Strong agree here. You hit on a lot of the core issues with LLMs, so I'll give my opinions on the economic aspects.
It's been more than a year since ChatGPT kicked off this plague of "slap AI on the product and consumers will put their children up as collateral to buy it!" - which, imo, hasn't materialized whatsoever. Investors still have a stratospheric hard-on for the term AI, but even that is starting to change a little.
Consumers' distrust of AI has risen considerably, and they've seen past the hype. Wrapping this back around to the CEOs' level of power: I just don't think LLMs have enough marketability with general consumers to turn these companies into juggernaut corpos.
LLMs absolutely have use cases, but they don't fit into most consumer products. No one wants AI washers or rice cookers or friggin' AI spoons, and shoehorning AI in decreases interest in the product.
That's also how I feel about "smart" devices in general. I don't want a smart refrigerator, I just want it to work. The same goes for other appliances, like my washing machine, dishwasher, and rice cooker. The one area where I'd kind of want it, TVs, has been ruined by intrusive tracking and ads.
What's going to kill AI isn't AI itself, it's AI being forced into products where it doesn't make sense, and then ads being thrown in on top to try to make some sort of profit from it.
I'm not sure anyone who prefaces a comment like this ever actually ends up with net negative votes. This one is no exception.
Honestly, I'm actually surprised. I didn't think it would be a popular opinion
It's just funny how often it works out this way.
And silicon's nowhere near as energy-efficient as biological neurons. There would need to be a massive energy breakthrough - like fusion, or actual biological processors becoming a thing - to see any significant improvement.
The one comment I have here is that you may be overlooking the impact LLMs will have on the tech sector.
Basically Homeless just built a wasp-shooting, real-world first-person-shooter machine - high-speed, high-accuracy, high-strength motors, controllers, etc., all driven via Python - using Claude, with little prior knowledge of the hardware or software involved.
The productivity effects, especially for those going through the education system from this day forward, will change things forever. There are already plenty of developers who wouldn't give up what they now have access to. Despite being a black hole of money right now, the power and wealth will come over time.
... Is homeless a company? Are we talking about a video game.. a robot... White Anglo Saxon Protestants... What?
Also how does this relate to LLMs?
IIRC, "Basically Homeless" is the name of some content creator and/or YouTube channel.
Yeah, I figured the proper noun would be a clue, but oh well.
I don't have access to it at work. I like what I'm able to do with my own license of JetBrains AI, but it still leaves a lot to be desired.
I agree overall, but fooling experts isn’t what would make AI valuable. Being able to do valuable tasks would make it valuable. And it’s just not good enough at valuable tasks to be valuable.