this post was submitted on 21 Feb 2024
289 points (95.0% liked)

Technology

59322 readers
5106 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
 

ChatGPT has meltdown and starts sending alarming messages to users::AI system has started speaking nonsense, talking Spanglish without prompting, and worrying users by suggesting it is in the room with them

[–] [email protected] 5 points 8 months ago (1 children)

It appears that, with the increase in popularity of machine learning, the percentage of people who properly source and sanitize their training data has steeply decreased.

As you stated, an ML AI can only be as good as the data it was trained on, and is usually considerably worse. The popularity and application of ML AIs built with questionable practices scare me, though; at least their fuckups will keep me employed, and likely busier than ever.

[–] [email protected] -3 points 8 months ago (1 children)

LLMs are not "machine learning", they are neural networks.

Different category.

ML is small potatoes, to the best of my knowledge.

Decision-tree stuff.

Neural nets are black boxes, trained by back-propagation: layer by layer, training instance by training instance, the net's weights get nudged closer to the intended result.
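To make "layer by layer, training instance by training instance" concrete, here's a toy sketch in plain NumPy: a tiny two-layer net learning XOR. The data, layer sizes, learning rate, and iteration count are all made up for illustration; real frameworks automate the backward pass.

```python
import numpy as np

# Tiny 2-layer network trained by back-propagation on XOR.
# Everything here (sizes, learning rate, seed) is illustrative.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward():
    h = sigmoid(X @ W1 + b1)        # hidden layer
    return h, sigmoid(h @ W2 + b2)  # output layer

loss_before = np.mean((forward()[1] - y) ** 2)
for _ in range(5000):
    h, out = forward()
    # backward pass: push the error back through each layer
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)
loss_after = np.mean((forward()[1] - y) ** 2)
print(loss_before, loss_after)  # the error shrinks as training proceeds
```

The "black box" part is that nothing in W1 or W2 is individually interpretable; the weights only make sense collectively.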

ML is what one does on one's own machine with some Python libraries.

ChatGPT (3, 3.5, or 4, I don't know which) cost something like $100,000,000 just to rent the machines required for mixing the training data and the model (I'm assuming about $20/hr per machine, so an ocean of machines, to do it).

_ /\ _

[–] [email protected] 1 points 8 months ago* (last edited 8 months ago)

Neural nets are a technology that falls under the umbrella term "machine learning". Deep learning is also part of machine learning, just more specialized towards large NN models.

You can absolutely train NNs on your own machine; after all, that's what I did for my master's before ChatGPT and all that, defining the layers myself, and it's also what I do right now with CNNs. That said, LLMs do tend to become so large that anyone without a supercomputer can at most fine-tune them.
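"Defining the layers myself" is less exotic than it sounds; here's a minimal sketch of what that can look like (all names and sizes are made up, and only the forward pass is shown):

```python
import numpy as np

# Hypothetical minimal "define the layers yourself" sketch:
# a dense layer is just a weight matrix, a bias vector, and a
# nonlinearity. Stacking a few of these gives you a small NN.
class Dense:
    def __init__(self, n_in, n_out, rng):
        self.W = rng.normal(0, 0.1, (n_in, n_out))
        self.b = np.zeros(n_out)

    def __call__(self, x):
        # affine transform followed by a ReLU activation
        return np.maximum(0.0, x @ self.W + self.b)

rng = np.random.default_rng(42)
layers = [Dense(4, 16, rng), Dense(16, 2, rng)]  # a 2-layer net

x = rng.normal(size=(3, 4))  # batch of 3 four-feature inputs
for layer in layers:
    x = layer(x)
print(x.shape)  # (3, 2): one 2-value output per input
```

Frameworks like PyTorch wrap exactly this kind of structure in classes that also handle the gradients for you.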

"Decision tree stuff" would be regular AI, which can be turned into ML by adding a "learning method" like KNN, a neural net, a genetic algorithm, etc. Such a method isn't much more than a more complex decision tree whose decision thresholds (weights) were automatically estimated by analysis of a dataset. More complex learning methods are even capable of fine-tuning themselves during operation (LLMs, KNN, etc.), as you stated.
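As an example of how simple a "learning method" can be, here's a toy k-nearest-neighbours classifier: it just memorizes the dataset and decides by majority vote among the k closest stored points (the data and function name here are invented for illustration):

```python
import numpy as np

# Toy KNN: "training" is just storing the dataset; prediction is a
# majority vote among the k nearest stored points.
def knn_predict(X_train, y_train, x, k=3):
    dists = np.linalg.norm(X_train - x, axis=1)   # distance to every stored point
    nearest = y_train[np.argsort(dists)[:k]]      # labels of the k closest
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]                # majority vote

X_train = np.array([[0, 0], [0, 1], [5, 5], [6, 5]], dtype=float)
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([5.5, 5.0])))  # → 1
```

This also shows the "fine-tuning during operation" point: appending a new labeled point to `X_train`/`y_train` immediately changes future predictions, no retraining step needed.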

One big difference between NN-based methods and other learning methods is that NNs like to add non-weighted layers which, instead of making decisions, transform the data to allow for a more diverse decision process.

EDIT: Some corrections, now that I'm fully awake.

While very similar in structure and function, the NN is indeed no decision tree. It functions much the same as one, as is a basic requirement for most types of AI, but whereas every node in a decision tree has unique branches with their own unique nodes, all of an NN's nodes are connected to all nodes of the following layer. This is also one of the strong points of an NN: something that seemed outrageous to it a moment ago might become much more plausible when looked at from a different point of view, such as after a transformative layer.

Also, other learning methods usually don't have layers, or, if one were to define "layer" as a one-shot decision process, they have at most one or two. In contrast, an NN can theoretically have an unlimited number of layers, allowing for practically unlimited complexity as long as the input data is not abstracted beyond reason.

Lastly, NNs don't back-propagate by default, though they make it easy to enable such features given enough processing power and, optionally, enough bandwidth (in the case of ChatGPT). LLMs are a little different, as I'm decently sure they implement back-propagation as part of the technology's definition, just like KNN.

This became a little longer than I had hoped; it's just a fascinating topic. I hope you don't mind that I went into more detail than necessary, it was mostly for the random passersby.