this post was submitted on 21 Sep 2024
84 points (71.4% liked)

Technology

Please remove it if unallowed

I see a lot of people in here who get mad at AI-generated code and I am wondering why. I wrote a couple of bash scripts with the help of chatGPT and, if anything, I think it's great.

Now, I obviously didn't tell it to write the entire code by itself. That would be a horrible idea. Instead, I would ask it questions along the way and test its output before putting it in my scripts.

I am fairly competent in writing programs. I know how and when to use arrays, loops, functions, conditionals, etc. I just don't know anything about bash's syntax. Now, I could have used any other language I knew, but I chose bash because it made the most sense: bash is shipped with most Linux distros out of the box, and one does not have to install another interpreter/compiler for another language. I don't like Bash because of its, dare I say, weird syntax, but it made the most sense for my purpose, so I chose it. Also, I have not written anything of this complexity in Bash before, just a bunch of commands on multiple separate lines so that I don't have to type them one after another. But this one required many rather advanced features. I was not motivated to learn Bash; I just wanted to put my idea into action.

I did start with an internet search, but the guides I found were lacking. I could not find how to pass values into a function and return from it easily, how to remove a trailing slash from a directory path, how to loop over an array, how to catch errors that occurred in the previous command, how to separate letters and numbers in a string, etc.
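For reference, the techniques listed above all exist as bash built-ins; a minimal sketch (all names and paths here are made up for illustration):

```shell
#!/usr/bin/env bash

# Pass values into a function via positional parameters;
# "return" a value by printing it and capturing with $(...).
double() {
    local n=$1
    echo $(( n * 2 ))
}
result=$(double 21)        # result=42

# Remove a trailing slash from a directory path.
dir='./build/'
dir=${dir%/}               # ./build

# Loop over an array.
files=(a.txt b.txt c.txt)
for f in "${files[@]}"; do
    echo "$f"
done

# Catch an error from the previous command via its exit status.
if ! mkdir -p "$dir"; then
    echo "mkdir failed" >&2
fi

# Separate letters and digits in a string like "abc123".
s='abc123'
letters=${s//[0-9]/}       # abc
digits=${s//[^0-9]/}       # 123
```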

That is where chatGPT helped greatly. I would ask chatGPT to write these pieces of code whenever I encountered them, then test its code with various inputs to see if it worked as expected. If not, I would ask it again with the case that failed, and it would revise the code before I put it in my scripts.
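That test-with-various-inputs loop can itself be a small script. A sketch, where `strip_slash` is a hypothetical stand-in for whatever function the chatbot produced:

```shell
#!/usr/bin/env bash

# Function under test: strip a trailing slash from a path.
strip_slash() {
    printf '%s\n' "${1%/}"
}

# Parallel arrays of inputs and expected outputs, edge cases included.
inputs=('/a/b/' '/a/b' '/')
expected=('/a/b' '/a/b' '')

for i in "${!inputs[@]}"; do
    actual=$(strip_slash "${inputs[i]}")
    if [ "$actual" = "${expected[i]}" ]; then
        echo "PASS: '${inputs[i]}' -> '$actual'"
    else
        echo "FAIL: '${inputs[i]}' -> '$actual' (expected '${expected[i]}')"
    fi
done
```

A failing case gets pasted back to the chatbot verbatim, which is exactly the revise-and-retest workflow described above.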

Thanks to chatGPT, someone who has zero knowledge of bash can easily and quickly write bash that is fairly advanced. I don't think it would have been this quick if I had done it the old-fashioned way; I would eventually have written it, but it would have taken far too long. Thanks to chatGPT I could just write all this quickly and forget about it. If I wanted to learn Bash and were motivated, I would certainly take the time to learn it properly.

What do you think? What negative experiences have you had with AI chatbots that made you hate them?

you are viewing a single comment's thread
[–] [email protected] 50 points 1 month ago (3 children)

The other day we were going over some SQL query with a younger colleague and I went “wait, what was the function for the length of a string in SQL Server?”, so he typed the whole question into chatgpt, which replied (extremely slowly) with some unrelated garbage.

I asked him to let me take the keyboard, typed “sql server string length” into google, saw LEN in the excerpt from the first result, and went on to do what I'd wanted to do, while in another tab chatgpt was still spewing nonsense.

LLMs are slower, several orders of magnitude less accurate, and harder to use than existing alternatives, but they're extremely good at convincing their users that they know what they're doing and what they're talking about.

That causes the people using them to blindly copy their useless buggy code (that even if it worked and wasn't incomplete and full of bugs would be intended to solve a completely different problem, since users are incapable of properly asking what they want and LLMs would produce the wrong code most of the time even if asked properly), wasting everyone's time and learning nothing.

Not that blindly copying from stack overflow is any better, of course, but stack overflow or reddit answers come with comments and alternative answers that, if you read them, go a long way toward telling you whether the code you're copying will work for your particular situation or not.

LLMs give you none of that context, and are fundamentally incapable of doing the reasoning (and learning) that you'd do given different commented answers.

They'll just very convincingly tell you that their code is right, correct, and adequate to your requirements, and leave it to you (or whoever has to deal with your pull requests) to find out without any hints why it's not.

[–] [email protected] 8 points 1 month ago (1 children)

This is my big concern...not that people will use LLMs as a useful tool. That's inevitable. I fear that people will forget how to ask questions and learn for themselves.

[–] [email protected] 2 points 1 month ago

Exactly. Maybe you want the number of unicode code points in the string, or perhaps the byte length of the string. It's unclear what an LLM would give you, but the docs would clearly state what that length is measuring.

Use LLMs to come up with things to look up in the official docs; don't use them to replace reading docs. As the famous Russian proverb goes: trust, but verify. It's fine to trust what an LLM says, provided you also go double-check what it says against more official docs.
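The thread's example is SQL Server, but the same character-vs-byte ambiguity is easy to demonstrate in bash (the OP's language): `wc -c` counts bytes, while `wc -m` counts characters under the current locale.

```shell
#!/usr/bin/env bash

s='héllo'                      # é is two bytes in UTF-8

printf '%s' "$s" | wc -c       # byte length: 6
printf '%s' "$s" | wc -m       # character length: 5 in a UTF-8 locale
                               # (in the C locale, wc -m counts bytes too)
```

Which of these "the length" means is exactly the kind of detail the docs state and a chatbot may gloss over.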

[–] [email protected] 6 points 1 month ago

I can feel that frustrated look when someone uses chatGPT for such a tiny reason

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (1 children)

I've been finding it a lot harder recently to find what I'm looking for when it comes to coding knowledge on search engines. I feel that with an LLM I can give it the wider context and it figures out exactly the sort of thing I'm trying to find. It's even more useful when trying to understand a complex error message you haven't seen before.

That being said, LLMs are not where my searching ends. I check where they got the information from so I can read the actual source and not what they've conjured up.

[–] [email protected] 4 points 1 month ago* (last edited 1 month ago) (1 children)

I've been finding it a lot harder recently to find what I'm looking for when it comes to coding knowledge on search engines

Yeah, the enshittification has been getting worse and worse, probably because the same companies making the search engines are the ones trying to sell you the LLMs, and the only way to sell them is to make the alternatives worse.

That said, I still manage to find anything I need much faster and with less effort than dealing with an LLM would take; an LLM would simply give me a single answer (which I'd then have to test and fix), while a search engine gives me multiple commented answers which I can compare and learn from.

I remembered another example: I was checking a pull request and it wouldn't compile; the programmer had apparently used an obscure internal function to check if a string was empty instead of string.IsNullOrWhiteSpace() (in C#, internal means “I designed my classes wrong and I don't have time to redesign them from scratch; this member should be private or protected, but I need to access it from outside the class hierarchy, so I'll allow other classes in the same assembly to access it, but not ones outside of the assembly”; similar use case as friend in C++; it's used a lot in the standard .NET libraries).

Now, that particular internal function isn't documented practically anywhere, and being internal can't be used outside its particular library, so it wouldn't pop up in any example the coder might have seen... but .NET is open source, and the library's source code is on GitHub, so chatgpt/copilot has been trained on it, so that's where the coder must have gotten it from.

The thing, though, is that LLMs, being essentially statistical engines that just pop out the most statistically likely token after a given sequence of tokens, have no way whatsoever to “know” that a function is internal. Or private, or protected, for that matter.

That function is used in the code they've been trained on to check if a string is empty, so they're just as likely to output it as string.IsNullOrWhiteSpace() or string.IsNullOrEmpty().

Hell, if(condition) and if(!condition) are probably also equally likely in most places... and I for one don't want to have to debug code generated by something that can't tell those apart.

[–] [email protected] 1 points 1 month ago (1 children)

If you know what you need to find, then yeah, search engines are still good. But as a tool for discovery they're massively shit now. You often need to be so specific to get what you want that at that point you almost already know it; you just need a reminder.

[–] [email protected] 2 points 1 month ago (1 children)

Are search engines worse than they used to be?

Definitely.

Am I still successfully using them several times a day to learn how to do what I want to do (and to help colleagues who use LLMs instead of search engines learn how to do what they want to do once they get frustrated enough to start swearing loudly enough for me to hear them)?

Also yes. And it's not taking significantly longer than it did when they were less enshittified.

Are LLMs a viable alternative to search engines, even as enshittified as they are today?

Fuck, no. They're slower, they're harder and more cumbersome to use, their results are useless on a good day and harmful on most, and they give you no context or sources to learn from, so best case scenario you get a suboptimal partial buggy solution to your problem which you can't learn anything useful from (even worse, if you learn it as the correct solution you'll never learn why it's suboptimal or, more probably, downright harmful).

If search engines ever get enshittified to the point of being truly useless, the alternative isn't LLMs. The alternative is to grab a fucking book (after making sure it wasn't defecated by an LLM), like we did before search engines were a thing.

[–] [email protected] 2 points 1 month ago

Cool, I'll just try and find which book I need from the millions and millions of books.

I haven't got an issue with reading books and whatnot. For coding specifically I always prefer to read documentation. But if I don't know what's needed for my current use case and search isn't helping, I'm not going to know where to begin. LLMs at least give me a jumping-off point. They are not my be-all and end-all.

Discoverability of new tools and libraries via search is awful. Through LLMs, it's passable to point you in the right direction.