purpleprophy

joined 1 year ago
[–] [email protected] 8 points 9 months ago (1 children)

This might cheer you up: https://visualstudiomagazine.com/articles/2024/01/25/copilot-research.aspx

I don't think we have anything to worry about just yet. LLMs are nothing but well-trained parrots. They can't analyse problems or have intuitions about what will work for your particular situation. They'll either give you something general copied and pasted from elsewhere or spin you a yarn that sounds plausible but doesn't stand up to scrutiny.

Getting an AI to produce functional large-scale software requires someone to spell out the problem domain precisely: every requirement, business rule, edge case, and so on. At that point, that person is basically a developer, because I've never met a project manager who thinks at that level of granularity.

They could be good for generating boilerplate, inserting well-known algorithms, generating models from metadata, that sort of grunt work. I certainly wouldn't trust them with business logic.
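To give a concrete sense of the models-from-metadata case, here's a minimal sketch of the kind of mechanical codegen I mean (the metadata shape and type names here are invented for illustration):

```python
# Hypothetical grunt work: render a Python dataclass from simple field metadata,
# e.g. column names and types pulled from a schema dump.

FIELD_TYPES = {"string": "str", "integer": "int", "boolean": "bool"}

def generate_model(name: str, fields: dict[str, str]) -> str:
    """Render dataclass source code from a name -> metadata-type mapping."""
    lines = [
        "from dataclasses import dataclass",
        "",
        "@dataclass",
        f"class {name}:",
    ]
    for field, meta_type in fields.items():
        # Fall back to 'object' for any metadata type we don't recognise.
        lines.append(f"    {field}: {FIELD_TYPES.get(meta_type, 'object')}")
    return "\n".join(lines)

# Example: generate a Customer model from made-up column metadata.
print(generate_model("Customer", {"id": "integer", "email": "string", "active": "boolean"}))
```

Tedious, repetitive, and trivially checkable by a human reviewer; exactly the kind of thing I'd delegate.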

[–] [email protected] 16 points 9 months ago* (last edited 9 months ago) (1 children)

What killed No Time to Die for me was the nanobots being declared unsolvable in the same movie that explicitly shows EMPs being used. I thought for sure that was a Chekhov's gun being set up, but no, just bad writing.

[–] [email protected] 1 point 10 months ago

Woah, that's a blast from the past. I'll be having a re-read tonight.

[–] [email protected] 4 points 11 months ago

This. Developers have to be very detail-oriented, but a lot of managers are not. When this happens to me, I like to write the task up in bullet points (making assumptions where necessary) and ask my project manager to review, "just to make sure I understood correctly." If one of my assumptions is wrong, he normally admits that he wasn't specific enough and we work it out together.