Peanutbjelly

joined 1 year ago
[–] [email protected] -1 points 1 year ago (1 children)

Reminds me of the article saying open ai is doomed because it can only last about thirty years with its current level of expenditure.

[–] [email protected] -1 points 1 year ago

Too much musk news. Had a dream less than an hour ago where i ended up in a car with elon. He started peacocking and got violent when i brought up zuck.

While it was a neat experience to beat up musk in a dream, i'd rather not have him in my dreams.

[–] [email protected] 0 points 1 year ago

i still think tesla did a poor job in conveying the limitations on the larger scale. they piggybacked waymo's capability and practice without matching it, which is probably why so many are over reliant. i've always been against mass-producing semi-autonomous vehicles to the general public. this is why.

and then this garbage is used to attack the general concept of autonomous vehicles, which may become a fantastic life-saver, because then it can safely drive these assholes around.

[–] [email protected] -1 points 1 year ago* (last edited 1 year ago)

replying despite your warning. i also won't be offended if you don't read. and the frustration is fair.

TLDR: intelligence is weird, complex, and abstract. it is very difficult for us to comprehend the complex nature of intelligence alien to our own. the human mind is a very specific combination of different intelligent functions.

funny you mention the technology not being an existential threat, as the two researchers i'd mentioned were recently paired at the munk debate arguing against the "existential threat" narrative.

getting into the deep end of the topic, i think most with a decent understanding of it would agree it is a form of "intelligence" alien to what most people would understand.

technically a calculator can be seen as a very basic computational intelligence, though very limited in capability or purpose outside of a greater system. LLMs mirror the stochastic word-generation element of our intelligence, along with a lot of the weird, neat, amazing things that come with the particular type of intelligent system we've created, but they definitely lack much of what would be needed to mirror our own brand of intelligence. they're so alien in function, yet so capable at representing information we are used to, that it is almost impossible not to anthropomorphise.
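to make "stochastic word generation" concrete, here's a toy sketch. the words and scores below are invented for illustration, not from any real model, but the mechanism is the same idea: the model scores candidate next words, converts the scores to probabilities, and samples one rather than always taking the top pick.

```python
import math
import random

def sample_next_word(logits, temperature=1.0):
    """Sample one word from a toy 'next word' distribution.

    logits: dict mapping candidate words to raw scores.
    Lower temperature sharpens the distribution (more deterministic);
    higher temperature flattens it (more random).
    """
    words = list(logits)
    # softmax over temperature-scaled scores
    scaled = [logits[w] / temperature for w in words]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(words, weights=probs, k=1)[0]

# made-up scores for continuing "the cat sat on the ..."
toy_logits = {"mat": 2.0, "sofa": 1.0, "moon": -1.0}
word = sample_next_word(toy_logits, temperature=0.7)
```

run it a few times and you'll usually get "mat", occasionally "sofa", rarely "moon" — which is why the output feels fluent but is never guaranteed to be the same twice.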

i'm also currently excited by the work being done in understanding our own intelligence.

but how would you represent a function as complex and abstract as this in a system like GPT? if qualia is an emergent experience developed through evolution, reliant on the particular structure and makeup of our brains, you would need more than the aforementioned system at any level of compute. while i don't think the function would be impossible to emulate in principle, i don't think it would come about by upscaling GPT models. we will develop other facsimiles more aligned with the specific intentions we have for the tools these intelligences are designed and directed to be. i think we can sculpt some useful forms of intelligence out of upscaled and altered generative models, though yann lecun might disagree. either way, there's still a fair way to go, and a lot of really neat developments to expect in the near future. (we just have to make sure the gains aren't hoarded like every other technological gain of the past half century.)

[–] [email protected] -1 points 1 year ago* (last edited 1 year ago) (2 children)

note that my snarky tone in this response is due to befuddlement and not an intent to insult or argue with you.


what a weirdly strict semantic requirement that you are emphasising as law. it's a good thing you are emphasising it so strongly, or we might see people use it while interviewing the guy who wrote the book on generative deep learning,

or see it used in silly places like MIT or stanford.

what kind of grifter institutions would be so unprofessional?

oh no, melanie mitchell is using a header saying that she "writes about AI." are you really suggesting melanie mitchell is uninformed?

or.. yann lecun? "Researcher in AI."

do you know who yann lecun is? do you know what back-propagation is?

these are some of the most respectable and well known names in the field. these were the first few darts i threw, and i'm unsurprised that i'm hitting bullseyes. i'm sure i could find many more examples if i kept going.

maybe you're assuming any use of AI means AGI, but most people i know of in the field just say "AGI" when talking about AGI.

if you don't like how non-specific the term is in definition and use, that's fine, and there's an argument to be made there, but you're stating your opinion and preference as if it were consensus in the field that the term should just never be used.

i think your enthusiasm needs to run a little deeper before being so critical. the intense yet uninformed nature of your opinion would also explain how you find that adam has "still been more right about “AI” than anyone else recently."

what white papers am i missing that emphasise this rule so vehemently?

[–] [email protected] 0 points 1 year ago* (last edited 1 year ago) (4 children)

Does nobody remember how utterly uninformed Conover's previous takes on AI were? And I still know whole communities of people who basically live in VR. They are doing just fine.

Look here if you just want to hate on tech and tech enthusiasts. Don't look here for a reasoned and thoughtful conversation.

Also can we stop trying to paint AI enthusiasts in a bad light by acting like everyone into AI is an NFT grifter?

It's intellectually dishonest.

The way it's usually presented would make you think we have Yann LeCun and Melanie Mitchell in full fratboy drip promoting their NFTs.

[–] [email protected] 0 points 1 year ago

Almost like Amazon should bear some responsibility for properly vetting their sellers. This isn't the only case of poor-quality bootlegs on Amazon, and they have no real incentive to fix it if they are making more money from it. It doesn't help when the blame is filtered through the smokescreen of ephemeral merchants.

[–] [email protected] -1 points 1 year ago

Absolutely magical.

[–] [email protected] -1 points 1 year ago* (last edited 1 year ago)

then what the fuck are you even arguing? i never said "we should do NO regulation!" my criticism was against blaming A.I. for things that aren't problems created by A.I.

i said "you have given no argument against A.I. currently that doesn’t boil down to “the actual problem is unsolvable, so get rid of all automation and technology!” when addressed."

because you haven't made a cohesive point towards anything i've specifically said this entire fucking time.

are you just instigating debate for... a completely unrelated thing to anything i said in the first place? you just wanted to be argumentative and pissy?

i was addressing the general anti-A.I. stance that is heavily pushed in media right now, which is generally unfounded and unreasonable.

i.e. addressing op's article with "Existing datasets still exist. The bigger focus is in crossing modalities and refining content." i'm saying there is a lot of UNREASONABLE flak towards A.I. you freaked out at that? who's the one with no nuance?

your entire response structure is just... for the sake of creating your own argument instead of actually addressing my main concern of unreasonable bias and push against the general concept of A.I. as a whole.

i'm not continuing with you because you are just making your own argument and being aggressive.

I never said "we can't have any regulation"

i even specifically said "i have advocated for particular A.I. tools to get much more regulation for over 5-10 years. how long have you been addressing the issue?"

jesus christ you are just an angry accusatory ball of sloppy opinions.

maybe try a conversation next time instead of aggressively wasting people's time.

[–] [email protected] -1 points 1 year ago* (last edited 1 year ago) (2 children)

it's like you just ignored my main points.

get rid of the A.I. = the problem is still the problem. it has been, especially for the past 50 years; any non-A.I. advancement continues the trend in exactly the same way. you solved nothing.

get rid of the actual problem = you did it! now all of technology is a good thing instead of a bad thing.

false information? already a problem without A.I., and always has been. media control, paid propagandists, etc. if anything, A.I. might encourage the general population to learn what critical thought is. it's still just as bad if you get rid of A.I.

"CLAIMING you care about it, only to complain every single time any regulation or way to control this is proposed, because you either don't actually care and are just saying it for rhetoric" — i think this is called a strawman. i have advocated for particular A.I. tools to get much more regulation for over 5-10 years. how long have you been addressing the issue?

you have given no argument against A.I. currently that doesn't boil down to "the actual problem is unsolvable, so get rid of all automation and technology!" when addressed.

which again, solves nothing, and doesn't improve anything.

should i tie your opinions to the actual result of your actions?

say you succeed. A.I. is gone. nothing has changed. inequality is still getting worse and everything is terrible. congratulations! you managed to prevent countless scientific discoveries that could help countless people. congrats, the blind and deaf lose their potential assistants. the physically challenged lose potential house-helpers. etc.

on top of that, we lose the biggest argument for socializing the economy going forward, through massive automation that can't be ignored or denied while we demand a fair economy.

for some reason i expect i'm wasting my time trying to convince you, as your argument seems more emotionally motivated than rationalized.

[–] [email protected] -1 points 1 year ago

Are we talking about data science??

There needs to be strict regulation on models used specifically for user manipulation and advertising. Through statistics, these guys know more about you than you do. That's why it feels like they are listening in.

Can we have more focus and education around data analysis and public influence? Right now the majority of people don't even know there is a battle of knowledge and influence that they are losing.

[–] [email protected] -1 points 1 year ago* (last edited 1 year ago) (4 children)

they are different things. it's not exclusively large companies working on and understanding the technology. there's a fantastic open-source community, and a lot of users of their creations.

would destroying the open-source community help prevent big tech from taking over? that battle has already been lost and needs correction. crying about the evil of A.I. doesn't actually solve anything. "proper" regulation is also relative. we need entirely new paradigms for understanding things like "I.P." which aren't based on a century of lobbying from companies like disney, etc.

and yes, understanding how something works is important for actually understanding the effects, when a lot of tosh is spewed from media sites that only care to say what gets people to engage.

i'd say only a fraction of the vaguely directed anger i see towards anything A.I. actually lands on areas that are genuinely severe and important breaches of public trust and safety, and i think the advertising industry should be the absolute focal point of the danger of A.I.

Are you also arguing against every other technology that has had their benefits hoarded by the rich?
