this post was submitted on 20 Apr 2024
698 points (94.6% liked)

Showerthoughts

[–] [email protected] 17 points 7 months ago (2 children)

You're using AI to mean AGI and LLMs to mean AI. That's on you though, everyone else knows what we're talking about.

[–] [email protected] -2 points 7 months ago (5 children)

Words have meanings. Marketing morons are not linguists.

[–] [email protected] 5 points 7 months ago

artificial intelligence noun

1 : the capability of computer systems or algorithms to imitate intelligent human behavior

also, plural artificial intelligences : a computer, computer system, or set of algorithms having this capability

2 : a branch of computer science dealing with the simulation of intelligent behavior in computers

https://www.merriam-webster.com/dictionary/artificial%20intelligence

[–] [email protected] 4 points 7 months ago (1 children)

As someone who still says a kilobyte is 1024 bytes, I agree with your sentiment.

[–] [email protected] 1 points 6 months ago

Amen. Kibibytes my ass ;)
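Tangentially, the dispute above is easy to make concrete. A minimal sketch (Python; the constant names and the sample size are chosen purely for illustration) of how the two conventions diverge:

```python
# The two competing conventions: SI "kilobyte" (10^3 bytes) vs. the
# traditional binary kilobyte, standardized as the "kibibyte" (2^10 bytes).
SI_KB = 1000    # kilobyte per SI / IEC usage
IEC_KIB = 1024  # kibibyte (KiB), the classic binary interpretation

size_bytes = 1_048_576  # exactly 1024 * 1024 bytes

print(size_bytes / SI_KB)    # 1048.576 "kilobytes"
print(size_bytes / IEC_KIB)  # 1024.0 kibibytes
```

The gap between the two readings grows with each prefix step: a "terabyte" drive is already about 9% smaller in binary units than its label suggests.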

[–] [email protected] 3 points 6 months ago* (last edited 6 months ago) (1 children)

Words might have meanings, but researchers have been using "AI" to refer to toy neural networks for longer than most people on Lemmy have been alive.

This insistence that AI must refer to human type intelligence is also such a weird distortion of language. Intelligence has never been a binary, human level indicator. When people say that a dog is intelligent, or an ant hive shows signs of intelligence, they don't mean it can do what a human can. Why should AI be any different?

[–] [email protected] 0 points 6 months ago (1 children)

You honestly don't seem to understand. This is not about the extent of intelligence; it's about actual understanding: being able to classify a logical problem or a thought into concepts, and to process it based on the properties of those concepts and their relations to other concepts. Deep learning, as impressive as the results may appear, is not that. You just throw training data at a few billion "switches" and flip switches until you get close enough to a desired result, without being able to predict what the outcome will be if a tiny change happens in the input data.

[–] [email protected] 4 points 6 months ago (1 children)

I mean that's a problem, but it's distinct from the word "intelligence".

An intelligent dog can't classify a logic problem either, but we're still happy to call them intelligent.

[–] [email protected] 1 points 6 months ago

With regards to the dog & my description of intelligence, you are wrong: Based on all that we know and observe, a dog (any animal, really) understands concepts and causal relations to varying degrees. That's true intelligence.

As for artificial intelligence: even the most basic software can have some kind of limited understanding that actually fits this attempt at a definition - it's just that the functionality will be so limited that it appears useless.

Think of it this way:

  - Deterministic algorithm -> has concepts and causal relations (but no consciousness, obviously); results are predictable (deterministic) and can be explained.
  - Deep learning / neural networks -> does not implicitly have concepts or causal relations; results are statistical (based on previous result observations) and cannot be explained -> there's actually a whole sector of science looking into how to model such systems' way to a solution.

Addition: the input/output filters of pattern recognition systems are typically fed through quasi-deterministic algorithms to "smoothen" the results (make output more grammatically correct, filter words, translate languages).

If you took enough deterministic algorithms, typically tailored to very specific problems & their solutions, and were able to use those as building blocks for a larger system that is able to understand a larger part of the environment, then you would get something resembling AI. Such a system could be tested (verified) on sample data, but it should not require training on data.

Example: You could program image recognition using math to find certain shapes, which in turn - together with colour ranges and/or contrasts - could be used to identify object types, for which causal relations can be defined, upon which other parts of an AI could then base decision processes. This process has potential for error, but in a similar way to how humans can mischaracterize the things we see - we also sometimes do not recognize an object correctly.
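The rule-based example above can be caricatured in a few lines. This is a hypothetical sketch, not a real system: the rules, function names, and the pre-computed "corner count" input are all invented for illustration, and a real deterministic pipeline would derive that count from actual shape-detection math.

```python
# Hypothetical sketch of a deterministic, rule-based classifier in the spirit
# of the comment above: explicit concepts and fixed rules, no training data.
# Every rule and threshold here is made up for illustration.

def classify_shape(num_corners: int) -> str:
    """Map a detected corner count onto a shape concept via fixed rules."""
    rules = {3: "triangle", 4: "quadrilateral", 0: "circle-like"}
    return rules.get(num_corners, "unknown")

def decide(shape: str) -> str:
    """A causal relation defined on top of the concept (e.g. road signs)."""
    if shape == "triangle":
        return "possible warning sign"
    return "no action"

print(decide(classify_shape(3)))  # -> possible warning sign
```

Unlike a trained network, every output here can be traced back to an explicit rule, which is exactly the explainability property the comment contrasts with deep learning.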

[–] [email protected] 2 points 7 months ago

I've given up trying to enforce the traditional definitions of "moot", "to beg the question", "nonplussed", and "literally", and it's helped my mental health. A little. I suggest you do the same; it's a losing battle, and the only person who gets hurt is you.

[–] [email protected] 1 points 7 months ago (1 children)

OP is an idiot, though; I hope we can agree on that one.

Telling everyone else how they should use language is just an ultimately moronic move. After all, we're not French; we don't have a central authority for how language works.

[–] [email protected] 0 points 6 months ago

Telling everyone else how they should use language is just an ultimately moronic move. After all, we're not French; we don't have a central authority for how language works.

There's a difference between objecting to misuse of language and "telling everyone how they should use language" - you may not have intended it, but you used a straw man argument there.

What we all should be acutely aware of (but unfortunately many are not) is how language is used to harm humans, animals or our planet.

Fascists use language to create "outgroups" which they then proceed to dehumanize and eventually violate or murder. Capitalists speak about investor risks to justify return on investment, and proceed to lobby for de-regulation of markets that causes human and animal suffering through price gouging and factory farming of livestock. Tech corporations speak about "Artificial Intelligence" and proceed to persuade regulators that - because these are "intelligent" systems - this software may be used for autonomous systems that cause injury and death when they malfunction.

Yes, all such harm can be caused by individuals in daily life - individuals can be murderers, or extort people over something they really need, or a drunk driver can cause an accident that kills people. However, language that normalizes or facilitates such atrocities or dangers on a large scale is dangerous, and therefore I will continue calling out those who want to label the shitty penny-market LLMs and other deep learning systems as "AI".

[–] [email protected] -4 points 7 months ago (2 children)

Nobody has yet met this challenge:

Anyone who claims LLMs aren’t AGI should present a text processing task an AGI could accomplish that an LLM cannot.

Or if you disagree with my

[–] [email protected] 1 points 7 months ago (2 children)

Oops, accidentally submitted. If someone disagrees with this as a fair challenge, let me know why.

I've been presenting this challenge repeatedly, and in my experience it leads very quickly to the fact that nobody - especially not the experts - has a precise definition of AGI.

[–] [email protected] 1 points 7 months ago
[–] [email protected] 1 points 6 months ago

While they are amazingly effective at many problems we throw at them, I'm not convinced that they're generally intelligent. What I do know is that in their current form, they are not tractable systems for anything but relatively small problems since compute and memory costs increase quadratically with the number of steps.
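The quadratic-cost claim above can be illustrated with a back-of-the-envelope sketch (assuming standard self-attention, where each token attends to every token in the context; the sizes below are arbitrary examples):

```python
# Minimal sketch of why self-attention cost grows quadratically: for a
# context of n tokens, the attention score matrix has n * n entries.

def attention_matrix_entries(n_tokens: int) -> int:
    """Number of pairwise attention scores for a context of n_tokens."""
    return n_tokens * n_tokens

for n in (1_000, 2_000, 4_000):
    print(n, attention_matrix_entries(n))

# Doubling the context quadruples the number of entries (and the
# associated compute/memory), which is the tractability limit mentioned.
```

This is why naive long-context inference gets expensive so quickly, and why so much current research targets sub-quadratic attention variants.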

[–] [email protected] 0 points 6 months ago

"Write an essay on the rise of ai and fact check it."

"Write a verifiable proof of the four colour problem"

"If p=np write a python program demonstrating this, else give me a high-level explanation why it is not true."