I'll use Copilot in place of most of my Stack Overflow searches, or for mundane things like generating repetitive code, but relying solely on it is the same as relying solely on Stack Overflow.
I don't even bother trying with AI, it's not been helpful to me a single time despite multiple attempts. That's a 0% success rate for me.
Developing with ChatGPT feels bizarrely like when Tony Stark invented a new element with Jarvis' assistance.
It's a prolonged back and forth, and you need to point out the AI's mistakes and work through a ton of iterations to get something close enough that you can tweak it and use it, but it's SO much faster than trawling through Stack Overflow or hoping someone who knows more than you will answer a post for you.
Yeah, if you treat it as a junior engineer with the ability to instantly research a topic, and are prepared to engage in a conversation to work toward a working answer, then it can work extremely well.
Some of the best outcomes I’ve had have needed 20+ prompts, but I still arrived at a solution faster than any other method.
In the end, there is this great fear that "the AI is going to fully replace us developers," and the reality is that while that may be a possibility one day, it won't be any day soon.
You still need people with deep technical knowledge to pilot the AI and drive it to an implemented solution.
AI isn't the end of the industry; it has just greatly sped the industry up.
The interesting bit for me is that if you ask a rando some programming questions, they'll be wrong 99% of the time, I think.
Stack overflow still makes more sense though.
I've used ChatGPT and Gemini to build some simple PowerShell scripts for use in Intune deployments. They've been fairly simple scripts. Very few of them have been workable solutions out of the box, and they've often been filled with hallucinated cmdlets that don't exist or that belong to a third-party module it doesn't tell me needs to be installed. It's not useless though: because I'm a lousy programmer, it's been good at giving me a skeleton I can build a working script off of and debug myself.
I reiterate that I am a lousy programmer, but it has sped up my deployments because I haven't had to work from scratch. Maybe 5 times out of 10 it's saved me a half hour here and there.
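The failure mode described above, generated code calling modules that don't actually exist or aren't installed, can be caught before running the script at all. A minimal sketch in Python (standing in for PowerShell here; the module names are made up for the example):

```python
import importlib.util

# Hypothetical list of modules an AI-generated script claims to import.
claimed_modules = ["json", "csv", "definitely_not_a_real_module"]

# find_spec returns None for top-level modules that aren't installed,
# so this flags anything that would need installing (or that was hallucinated).
missing = [m for m in claimed_modules if importlib.util.find_spec(m) is None]
print(missing)  # ['definitely_not_a_real_module']
```

PowerShell has an analogous check with `Get-Command` / `Get-Module -ListAvailable`; the idea is the same either way: verify the names exist before trusting the generated script.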
I'm a good programmer and I still find LLMs to be great for banging out python scripts to handle one-off tasks. I usually use Copilot, it seems best for that sort of thing. Often the first version of the script will have a bug or misunderstanding in it, but all you need to do is tell the LLM what it did wrong or paste the text of the exception into the chat and it'll usually fix its own mistakes quite well.
I could write those scripts myself by hand if I wanted to, but they'd take a lot longer and I'd be spending my time on boring stuff. Why not let a machine do the boring stuff? That's why we have technology.
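For illustration, the kind of throwaway one-off script meant above might be a bulk file rename; the filename and naming convention here are invented for the example, and the demo runs in a temp directory so it's self-contained:

```python
import os
import tempfile

def normalize_name(name: str) -> str:
    """Lowercase a filename and replace spaces with underscores."""
    return name.lower().replace(" ", "_")

# Hypothetical one-off task: normalize messy report filenames.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "Q3 Report FINAL.txt"), "w").close()
    for fname in os.listdir(d):
        os.rename(os.path.join(d, fname),
                  os.path.join(d, normalize_name(fname)))
    renamed = os.listdir(d)

print(renamed)  # ['q3_report_final.txt']
```

Trivial to write by hand, but exactly the sort of boring glue code that's faster to have generated and then skim for mistakes.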
This is the best summary I could come up with:
In recent years, computer programmers have flocked to chatbots like OpenAI's ChatGPT to help them code, dealing a blow to places like Stack Overflow, which had to lay off nearly 30 percent of its staff last year.
That's a staggeringly large proportion for a program that people are relying on to be accurate and precise, underlining what other end users like writers and teachers are experiencing: AI platforms like ChatGPT often hallucinate totally incorrect answers out of thin air.
For the study, the researchers looked over 517 questions in Stack Overflow and analyzed ChatGPT's attempt to answer them.
The team also performed a linguistic analysis of 2,000 randomly selected ChatGPT answers and found they were "more formal and analytical" while portraying "less negative sentiment" — the sort of bland and cheery tone AI tends to produce.
The Purdue researchers polled 12 programmers — admittedly a small sample size — and found they preferred ChatGPT's answers 35 percent of the time and failed to catch the AI-generated mistakes in them 39 percent of the time.
The study demonstrates that ChatGPT still has major flaws — but that's cold comfort to people laid off from Stack Overflow or programmers who have to fix AI-generated mistakes in code.
The original article contains 340 words, the summary contains 199 words. Saved 41%. I'm a bot and I'm open source!
I would make some 1000 monkeys with typewriters comment, but I see what most actual contracted devs produce...