this post was submitted on 11 Sep 2023
154 points (92.8% liked)

The Inventor Behind a Rush of AI Copyright Suits Is Trying to Show His Bot Is Sentient::Stephen Thaler's series of high-profile copyright cases has made headlines worldwide. He’s done it to demonstrate his AI is capable of independent thought.

[–] [email protected] 114 points 1 year ago* (last edited 1 year ago) (27 children)

What stupid bullshit. There is nothing remotely close to an artificial general intelligence in a large language model. This person is a crackpot fool. An LLM has no persistent memory. Everything outside the model that pre- and post-processes information is where the smoke and mirrors live, and that part is just databases and standard code.

The actual model is just a system of categorization and tensor math. It is complex vector math. That is it. There is nothing else going on inside the model. If you want to modify it, you have to recalculate a bunch of math relative to the existing vectors/tensors. All of that math is static. It can't change, it can't adapt, it can't plan. It has some surprising features that one might not expect to be embedded in human language alone, but that is all this is.

Try offline, open source AI. Use Oobabooga, get models from Hugging Face, and start with something like a Llama 2 7B. This is not hard, and you do not need a graphics card; plenty of models run fine on just a CPU. You will need a good amount of RAM for a really good model, though. A 7B is like talking to a teenager prone to lying, a 13B is like a 20 year old, a 30B at 8-bit quantization is like an inexperienced late-twenty-something, and a 70B at 4-bit quantization is like a 30 year old with a master's degree. That 70B at 4 bits needs around 14+ logical CPU cores and 64GB of system memory to generate about 2 tokens per second, which works out to roughly 1-2 words per second and is about as slow as is practical.
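To make the "static vector math" point concrete, here is a minimal sketch of scaled dot-product attention, the core operation inside a transformer layer. This is not any particular model's code, just the generic textbook operation: fixed arithmetic on vectors, with no state and no learning at inference time.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax: each row becomes a probability distribution
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: pure matrix math, nothing else."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)       # similarity of each query to each key
    weights = softmax(scores, axis=-1)  # each row sums to 1
    return weights @ V                  # weighted mix of the value vectors

rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))  # 4 tokens, 8-dimensional vectors
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = attention(Q, K, V)         # shape (4, 8): one mixed vector per token
```

Run it twice with the same inputs and you get the same outputs; the "intelligence" is entirely in the frozen weights that produce Q, K, and V.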

Don't believe anything you read in bullshit media about AI right now, and ignore the proprietary stalkerware garbage. The open source offline AI world is the future and it is yours to do as you please. Try it! It is fun.

[–] [email protected] 19 points 1 year ago (9 children)

Wow, that's one of the most concrete, down-to-earth explanations of what everyone is calling AI. Thanks.

I'm technical, but haven't found a good article explaining today's AI in a way I can grasp well enough to help my non-technical friends and family. Any recommendations? Maybe something you've written?

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

Yann LeCun is the main person behind open source offline AI, as far as putting the pieces in place and the events that led to where we are now. Maybe think of him as the Dennis Ritchie or Stallman of AI research. https://piped.video/watch?v=OgWaowYiBPM

I am not the brightest kid in the room. I'm just learning this stuff in practice and sharing some of what I have picked up thus far. I hit a wall when it comes to things like understanding rank-3 tensors or greater, and I still can't figure out exactly how the categorization network is implemented. I think that last one has to do with Transformers and involves rotating vectors in an efficient way, but I haven't figured it out intuitively yet. Thanks for the compliment, though.
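The "rotation of vectors" gestured at above is likely rotary position embeddings (RoPE), used in Llama-family models. A hedged sketch of the idea: each pair of embedding dimensions is rotated by an angle that grows with token position, so relative position falls out of the dot products attention computes. (Implementations differ in how they pair dimensions; this uses the split-halves convention and is only an illustration.)

```python
import numpy as np

def rope(x, position, base=10000.0):
    """Rotate consecutive dimension pairs of x by position-dependent angles."""
    d = x.shape[-1]
    half = d // 2
    # one frequency per dimension pair; higher dims rotate more slowly
    freqs = base ** (-np.arange(half) / half)
    theta = position * freqs
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[:half], x[half:]  # pair dimension i with dimension i + half
    # standard 2D rotation applied to each (x1_i, x2_i) pair
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos])

v = np.ones(8)
rotated = rope(v, position=3)
# rotations preserve length, so the vector's norm is unchanged
```

Because each pair is only rotated, the vector's magnitude never changes; only its direction encodes position, which is why this can be applied cheaply at every layer.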

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago)

Oh crap, you already done lost me in the second half there, but I'll give the link a watch.

Thanks again!
