By "good" I mean code that is written professionally and concisely (and obviously works as intended). Apart from personal interest and understanding what the machine spits out, is there any legit reason anyone should learn advanced coding techniques? Specifically in an engineering perspective?

If not, learning how to write code seems a tad trivial now.

[–] [email protected] 30 points 2 months ago* (last edited 2 months ago) (2 children)

Great question.

> is there any legit reason anyone should learn advanced coding techniques?

Don't buy the hype. LLMs can produce all kinds of useful things, but they don't know anything at all.

No LLM has ever engineered anything. And there's ~~no~~ sparse (concession to a good point made in response) current evidence that any AI ever will.

Current learning models are like trained animals in a circus. They can learn to do any impressive thing you can imagine, by sheer rote repetition.

That means they can engineer a solution to any problem that has already been solved millions of times. As long as the work has very little new/novel value and requires no innovation whatsoever, learning models do great work.

Horses and LLMs that solve advanced algebra don't understand algebra at all. It's a clever trick.

Understanding the problem and understanding how to politely ask the computer to do the right thing has always been the core job of a computer programmer.

The bit about "politely asking the computer to do the right thing" makes massive strides in convenience every decade or so. Learning models are another such massive stride. This is great. Hooray!

The bit about "understanding the problem" isn't within the capabilities of any current learning model or AI, and there's no current evidence that it ever will be.

Someday they will call the job "prompt engineering" and on that day it will still be the same exact job it is today, just with different bullshit to wade through to get it done.

[–] [email protected] 6 points 2 months ago

I appreciate your candor; I had a feeling it was cock and bull, but you've answered my question fully.

[–] [email protected] 3 points 2 months ago (2 children)

Wait, if you can (or anyone else chipping in), please elaborate on something you've written.

When you say

> That means they can engineer a solution to any problem that has already been solved millions of times.

Hasn't Google already made advances through its AlphaGeometry AI? Admittedly, that's a geometry setting, which may be easier to code than other parts of math, and there isn't yet a clear indication that AI will ever reach the level of creativity the human mind has, but it might get there by sheer volume of attempts.

Isn't this still engineering a solution? Sometimes even researchers reach new results by having a machine verify many cases (see the proof of the Four Color Theorem). It's true that for the Four Color Theorem the researchers narrowed down the cases to try, but maybe a similar narrowing could be done by an AI, sooner or later?

I don't know what I'm talking about, so I should shut up, but I'm hoping someone more knowledgeable will correct me, since I'm curious about this.

[–] [email protected] 6 points 2 months ago* (last edited 2 months ago)

> Isn't this still engineering a solution?

If we drop the word "engineering", we can focus on the point: geometry is another case where rote learning by repetition can do a pretty good job. Clever engineers can teach computers to do all kinds of things that look like novel engineering, but aren't.

LLMs can make computers look like they're good at something they're bad at.

And they offer hope that computers might someday not suck at what they suck at.

But history teaches us probably not. And current evidence in favor of a breakthrough in general artificial intelligence isn't actually compelling, at all.

> Sometimes even researchers reach new results by having a machine verify many cases

Yes. Computers are good at that.
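Here's a toy sketch of that kind of grunt work (my own example, not from the actual Four Color proof): machine-checking Goldbach's conjecture over a small finite range. It's verification of cases, not a proof.

```python
# Machine-checking many cases, Four-Color-proof style: the computer
# grinds through a finite case list. Deciding *which* finite list
# settles a question is where the human insight lives.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# Check Goldbach's conjecture (every even n > 2 is a sum of two primes)
# for all even numbers below 10,000.
for n in range(4, 10_000, 2):
    assert any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1)), n

print("no counterexample below 10,000")
```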

So far, they're no good at understanding the Four Color Theorem, or at proposing novel approaches to proving it.

They might never be any good at that.

Stated more formally, P may equal NP, but probably not.

Edit: To be clear, I actually share a good bit of the same optimism. But I believe it'll be hard-won work done by human engineers that gets us anywhere near there.

Ostensibly God created the universe in Lisp. But actually he knocked most of it together with hard-coded Perl hacks.

There are lots of exciting breakthroughs coming in computer science. But no one knows how long they'll take or what their impact will be. History teaches us it'll be less exciting than Popular Science promised us.

Edit 2: Sorry for the rambling response. Hopefully you find some of it useful.

I don't at all disagree that there's exciting stuff afoot. I also think it is being massively oversold.

[–] [email protected] 3 points 2 months ago

> Hasn't Google already made advances through its AlphaGeometry AI? Admittedly, that's a geometry setting, which may be easier to code than other parts of math, and there isn't yet a clear indication that AI will ever reach the level of creativity the human mind has, but it might get there by sheer volume of attempts.

Wanted to focus a bit on this. The thing with AlphaGeometry and AlphaProof is that they really treat doing math as a game, not unlike chess. For example, AlphaGeometry has a basic set of rules, it can apply them and it knows when it is done. And when it is done, you can be 100% sure that the solution is correct, because the rules of the game are known; the 28/42 score reported in the article is really four perfect scores and three zeros. Those systems do use LLMs, but they really are only there to suggest to the system what to try doing next. There is a very enlightening picture in the AlphaGeometry paper here: https://www.nature.com/articles/s41586-023-06747-5#Fig1
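Here's a toy sketch of that division of labor (the game and every name below are invented for illustration; this is not AlphaGeometry's actual code). A cheap heuristic "proposer" stands in for the LLM and only suggests the next move; the engine applies a fixed set of sound rules, so any solution it reaches is correct by construction:

```python
# Toy "proposer suggests, rule engine verifies" loop.
# Game: reduce a number to 0 using only the legal moves below.

TARGET = 0

def legal_moves(n: int) -> list[int]:
    # The fixed rules of the game, fully known and sound in advance.
    moves = [n - 1, n + 1]
    if n % 2 == 0:
        moves.append(n // 2)
    return moves

def propose(n: int) -> int:
    # Stand-in for the LLM: a heuristic guess at the next step.
    # It only suggests; it can never produce an illegal state.
    return min(legal_moves(n), key=lambda m: abs(m - TARGET))

def solve(n: int, max_steps: int = 100) -> int | None:
    for step in range(max_steps):
        if n == TARGET:
            return step  # solved, and checkably so: only legal moves used
        n = propose(n)
    return None

print(solve(37))  # 8 moves: 37 -> 36 -> 18 -> 9 -> 8 -> 4 -> 2 -> 1 -> 0
```

Because correctness lives entirely in the rules, a wildly wrong suggestion can waste time but never produce a wrong answer, which is exactly the property that makes those perfect scores possible.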

You can automatically verify the correctness of code the same way. For example, Lean, the language AlphaProof uses internally, can also be used for general programming. Programming techniques like this are known as formal methods.

But most people don't work this way, because it's far more time-consuming than normal programming, and in many cases we don't even know how to define the goal of our code (how would you define correct rendering in a game?). So it's only really done when the correctness of the program is critical; famously, the software of the automated Paris metro was verified this way. And that's why most people don't try to make programming AI work along these lines.
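For a flavor of what that looks like, here's a minimal Lean sketch (a toy of my own, not from AlphaProof): the file only compiles if the proof actually establishes the stated theorem, which is the sense in which correctness is machine-checked.

```lean
-- A function together with a machine-checked property of it.
def double (n : Nat) : Nat := n + n

-- Lean rejects the whole file unless this proof is valid, so
-- "it compiles" here means "the property provably holds".
theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega  -- decision procedure for linear arithmetic
```

Scaling that discipline from a one-line lemma up to a whole metro control system is what makes formal methods so expensive, and so rarely used.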