Leaked Documents Show Nvidia Scraping ‘A Human Lifetime’ of Videos Per Day to Train AI
(www.404media.co)
I feel like the amount of training data required for these AIs serves as a pretty compelling argument as to why AI is clearly nowhere near human intelligence. It shouldn't take thousands of human lifetimes of data to train an AI if it's truly near human-level intelligence. In fact, I think it's an argument for them not being intelligent whatsoever. With that much training data, everything that could be asked of them should be in the training data. And yet they still fail at any task not in their data.
Put simply: a human needs less than one lifetime of training data to be more intelligent than AI. If throwing more training data and compute at the problem hasn't solved it already, I don't think it ever will.
You’ve had the entire history of evolution to get the instinct you have today.
Nature vs. nurture is a huge ongoing debate.
Just because it takes longer to train doesn’t mean it’s not intelligent; kids develop more slowly than chimps do.
Also, “intelligent” doesn’t really mean anything. I personally think intelligence is the ability to distill unusable amounts of raw data and intuit a result beneficial to oneself. But very few people agree with me.
I see intelligence as filling areas of concept space within an eco-niche in a way that proves functional for actions within that space. I think we are increasingly discovering that "nature" has little commitment, and is just optimizing preparedness for the expected levels of entropy within the functional eco-niche.
Most people haven't even started paying attention to distributed systems building shared enactive models, but those systems are already capable of things that should be considered groundbreaking given the time and money spent on their development.
That being said, localized narrow generative models are just building large individual models of predictive processing that don't, by default, actively update their information.
People who attack AI for just being prediction machines really need to look into predictive processing, or learn how much we organics just guess and confabulate on top of vestigial social priors.
But no, corpos are using it, so computer bad, human good, even though the main issue here is the humans who have unlimited power and are encouraged into bad actions by flawed social posturing systems and the conflation of wealth with competency.