this post was submitted on 13 Feb 2025
1023 points (97.6% liked)

Technology

top 50 comments
[–] [email protected] 3 points 2 days ago

I find this very offensive. Wait until my ChatGPT hears about this! It'll have a witty comeback for you, just you watch!

[–] [email protected] 31 points 5 days ago

Let me ask ChatGPT what I think about this.

[–] [email protected] 25 points 5 days ago (2 children)

Also your ability to search for information on the web. Most people I've seen have no idea how to use a damn browser or how to search effectively; AI is gonna fuck that ability up completely.

[–] [email protected] 10 points 5 days ago

To be fair, the web has become flooded with AI slop, and search engines have never been more useless. I've started using Kagi and I'm trying to be more intentional about it, but after a bit of searching it's often easier to just ask Claude.

[–] [email protected] 12 points 5 days ago

Gen Zs are TERRIBLE at searching things online, in my experience. I’m a sweet-spot millennial, born close to the middle in 1987. Man oh man, watching the 22-year-olds who work for me try to Google things hurts my brain.

[–] [email protected] 57 points 6 days ago

You mean an AI that literally generates text by applying a mathematical function to input text doesn't do reasoning for me? (/s)

I'm pretty certain every programmer alive knew this was coming as soon as we saw people trying to use it years ago.

It's funny because I never get what I want out of AI. I've been thinking this whole time "am I just too dumb to ask the AI to do what I need?" Now I'm beginning to think "am I not dumb enough to find AI tools useful?"

[–] [email protected] 35 points 6 days ago (10 children)

You can either use AI to just vomit dubious information at you or you can use it as a tool to do stuff. The more specific the task, the better LLMs work. When I use LLMs for highly specific coding tasks that I couldn't do otherwise (I'm not a [good] coder), it does not make me worse at critical thinking.

I actually understand programming much better because of LLMs. I have to debug their code, do research so I know how to prompt it best to get what I want, do research into programming and software design principles, etc.

[–] [email protected] 15 points 6 days ago (1 children)

Tinfoil hat me goes straight to: make the population dumber and they’re easier to manipulate.

It’s insane how people take LLM output as gospel. It’s a TOOL just like every other piece of technology.

[–] [email protected] 9 points 6 days ago (1 children)

I mostly use it for wordy things, like filling out the review forms HR makes us do and writing templates for messages to customers.

[–] [email protected] 7 points 5 days ago (1 children)

Exactly. It’s great for that, as long as you know what you want it to say and can verify it.

The issue is people who don’t critically think about the data they get from it, who I assume are the same type to forward Facebook memes as fact.

It’s a larger problem, where convenience takes priority over actually learning and understanding something yourself.

[–] [email protected] 6 points 5 days ago (1 children)

As you mentioned tho, not really specific to LLMs at all

[–] [email protected] 5 points 5 days ago (2 children)

Yeah it’s just escalating the issue due to its universal availability. It’s being used in lieu of Google by many people, who blindly trust whatever it spits out.

If it had a high technological floor of entry, it wouldn’t be as influential to the general public as it is.

[–] [email protected] 10 points 5 days ago

Counterpoint - if you must rely on AI, you have to constantly exercise your critical thinking skills to parse through all its bullshit, or AI will eventually Darwin your ass when it tells you that bleach and ammonia make a lemon cleanser to die for.

[–] [email protected] 17 points 6 days ago (3 children)

The one thing I learned when talking to ChatGPT or any other AI about a technical subject is that you have to ask the AI to cite its sources, because AIs can absolutely bullshit without knowing it, and asking for sources is critical for double-checking.

[–] [email protected] 9 points 6 days ago* (last edited 6 days ago) (2 children)

I consider myself very average, and all my average interactions with AI have been abysmal failures that are hilariously wrong. I invested time and money into trying various models to help me with data analysis work, and they can't even do basic math or summaries of a PDF and the data contained within.

I was impressed with how good these things are at interpreting human fiction, jokes, writing, and feelings. Which is really weird: in the context of our perceptions of what AI would be like, it's the exact opposite. The first AIs aren't emotionless robots; they're whiny, inaccurate, delusional, and unpredictable bitches. That alone is worth the price of admission for the humor and silliness of it all, but it's certainly not worth upending society over. It's still just a huge novelty.

[–] [email protected] 6 points 6 days ago (2 children)

I've found questions about niche tools tend to get worse answers. I was asking it some stuff about jpackage and it couldn't give me any working suggestions or correct information. Stuff I've asked about Docker was much better.

[–] [email protected] 22 points 6 days ago (8 children)

Damn. Guess we oughtta stop using AI like we do drugs/porn 😀

[–] [email protected] 12 points 6 days ago

Unlike those others, Microsoft could do something about this considering they are literally part of the problem.

And yet I doubt Copilot will be going anywhere.

[–] [email protected] 26 points 6 days ago (7 children)

I grew up as a kid without the internet. Google on your phone and YouTube kill your critical thinking skills.

[–] [email protected] 9 points 6 days ago (6 children)

AI makes it worse though. People will read a website they find on Google that someone wrote and say, "well that's just what some guy thinks." But when an AI says it, those same people think it's authoritative. And now that they can talk, including with believable simulations of emotional vocal inflections, it's going to get far, far worse.

Humans evolved to process auditory communications. We did not evolve to be able to read. So we tend to trust what we hear a lot more than we trust what we read. And companies like OpenAI are taking full advantage of that.

[–] [email protected] 14 points 6 days ago (3 children)

I was talking to someone who does software development, and he described his experiments with AI for coding.

He said that he was able to use it successfully and come to a solution that was elegant and appropriate.

However, what he did not do was learn how to solve the problem, or indeed learn anything that would help him in future work.

[–] [email protected] 17 points 6 days ago (1 children)

I'm a senior software dev who uses AI to help me with my job daily. There are endless tools in the software world, all with their own instructions on how to use them. Often they have issues, and the solutions aren't included in those instructions. It used to be that I had to hunt down any references to the problem I was having through online forums, in the hopes that somebody else had figured out how to solve it, but now I can ask AI and it generally gives me the answer I'm looking for.

If I'd had AI back when I was still learning core engineering concepts, I think shortcutting the learning process could have been detrimental. But now I just need to know how to get X done, specifically with Y, this one time and probably never again.

[–] [email protected] 6 points 6 days ago

100% this. I generally use AI to help with edge cases in software or languages that I already know well, or for situations where I really don't care to learn the material because I'm never going to touch it again. In my case, for Python or Go, I'll use AI to get me started in the right direction on a problem, then go read the docs to develop my solution. For some weird ugly regex that I just need to fix and never touch again, I just ask AI, test the answer it gives, then play with it until it works, because I'm never going to remember how to properly use a negative look-behind in regex when I need it again in five years.

I do think AI could be used to help the learning process, too, if used correctly. That said, it requires the student to be proactive in asking the AI questions about why something works or doesn't, then going to read additional information on the topic.
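
For reference, a minimal Python sketch of a negative look-behind (made-up pattern, purely for illustration):

```python
import re

# Hypothetical example: match "cash" only when it is NOT preceded by "petty ".
# "(?<!petty )" is the negative look-behind; "\b" keeps whole-word matches.
pattern = re.compile(r"(?<!petty )\bcash\b")

print(pattern.findall("petty cash and cash flow"))  # ['cash'] -- only the one in "cash flow"
```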

[–] [email protected] 7 points 6 days ago* (last edited 6 days ago) (2 children)

How does he know that the solution is elegant and appropriate?

[–] [email protected] 26 points 6 days ago
[–] [email protected] 7 points 5 days ago (2 children)

It’s going to remove all individuality and turn us into a homogeneous, jelly-like society. We’ll all think exactly the same, since AI “smooths out” the edges of extreme thinking.

[–] [email protected] 7 points 5 days ago

Copilot told me you're wrong and that I can't play with you anymore.

[–] [email protected] 3 points 5 days ago (3 children)

Vs textbooks? What's the difference?

[–] [email protected] 17 points 6 days ago (12 children)

Idk man. I just used it the other day for recalling some regex syntax and it was a bit helpful. However, if you ask it to generate the regex for you, it won't do that successfully. It can, however, break down a regex and explain it to you.

Ofc you all can say "just read the damn manual", and sure, I could do that too, but asking a generative AI to explain a script can be just as effective.

[–] [email protected] 9 points 5 days ago (5 children)

Is that it?

One of the things I like most about AI is that it explains each command it outputs in detail. Granted, I'm aware it can hallucinate, so if I have the slightest doubt about it I usually look on the web too (I use it a lot for basic Linux stuff and Docker).

Will some people not give a fuck about what it says and just copy & paste unknowingly? Sure, but that happened in my teenage days too, when all the info was spread across many blogs and wikis...

As usual, it's not the AI tool that could fuck up our critical thinking, it's us.

[–] [email protected] 20 points 6 days ago

Critical thinking skills are what hold me back from relying on ai

[–] [email protected] 20 points 6 days ago (1 children)

Remember the line:

Personal computers were “bicycles for the mind.”

I guess with AI and social media it's more like melting your mind or something. I can't find another analogy. Like a baseball bat to your leg for the mind doesn't roll off the tongue.

I know Primeagen has turned off Copilot because he said the "copilot pause" is daunting and affects how he codes.

[–] [email protected] 19 points 6 days ago

Of course. Relying on a lighter kills your ability to start a fire without one. It's nothing new.

[–] [email protected] 17 points 6 days ago* (last edited 6 days ago) (4 children)

Really? I just asked ChatGPT and this is what it had to say:

This claim is misleading because AI can enhance critical thinking by providing diverse perspectives, data analysis, and automating routine tasks, allowing users to focus on higher-order reasoning. Critical thinking depends on how AI is used—passively accepting outputs may weaken it, but actively questioning, interpreting, and applying AI-generated insights can strengthen cognitive skills.

[–] [email protected] 13 points 6 days ago

Not sure if sarcasm..

[–] [email protected] 15 points 6 days ago* (last edited 6 days ago) (4 children)

When it was new to me I tried ChatGPT out of curiosity, like with any tech, and I just kept getting really annoyed at the expansive bullshit it gave to the simplest of inputs. "Give me a list of 3 X" led to fluff-filled paragraphs for each. The bastard children of a bad encyclopedia and the annoying kid in school.

I realized I was understanding it wrong, and it was supposed to be understood not as a useful tool, but as close to interacting with a human, pointless prose and all. That just made me more annoyed. It still blows my mind people say they use it when writing.

[–] [email protected] 6 points 5 days ago (2 children)

Just try using AI for a complicated mechanical repair. For instance, draining the radiator fluid in your specific model of car: chances are Google's AI model will throw in steps that are either wrong or unnecessary. If you turn off your brain while using AI, you're likely to make mistakes that go unnoticed until the thing you did is actually needed. AI should be a tool like a straightedge: it has its purpose, and it's up to you, the operator, to make sure you got the edges squared (so to speak).

[–] [email protected] 11 points 6 days ago (1 children)

Weren't these assholes just gung-ho about forcing their shitty "AI" chatbots on us like ten minutes ago? Microsoft can go fuck itself right in the gates.
