Do so at your own peril. Because the thing is, a person will learn from their mistakes and grow in knowledge and experience over time. An LLM is unlikely to do the same in a professional environment for two big reasons:
1. The company using the LLM would have to send data back to the LLM's creator. That puts their proprietary work at risk: the AI company could scoop them, and a data leak would be disastrous.
2. Alternatively, the LLM could self-learn entirely in house, with no external data connections. The LLM vendor will never go for this, because it would mean their model is improving and developing outside their control. The customer's customized version might end up better than the vendor's own future releases. Or something might go terribly wrong with the model as it learns and adapts; even if the LLM vendor isn't held legally liable, they're still going to lose that business going forward.
On top of that, you need your inexperienced noobs to one day become the ones checking the output of an LLM. They can't do that unless they get experience doing the work. Companies already have proprietary models that just require the right inputs and pressing a button. Engineers are still hired, though, to interpret the results, know which inputs are the right ones, and understand how the model works.
A company that tries replacing them with LLMs is going to lose in the long run to competitors.
Actually, NVIDIA has been pushing RAG (Retrieval-Augmented Generation) tooling recently. The basic idea is that you take an "off the shelf" LLM and pair your local instance with a store of sensitive corporate data; relevant pieces of that data get pulled into the prompt at query time, so the model can use that information in its responses without the data ever leaving your infrastructure.
So you really are "teaching" it, in the sense that your feedback becomes part of its reference material, every time you do a code review of the AI's merge request and say "Well.. that function doesn't exist" or "you didn't use useful variable names" and so forth. Which... is a lot more than I can say about a lot of even senior or principal engineers I have worked with over the years who are very much making mistakes that would get an intern assigned to sorting crayons.
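To make the mechanism concrete, here's a minimal sketch of a RAG loop (my own illustration, not NVIDIA's actual tooling). `embed()` and `call_llm()` are placeholders for a real embedding model and a real LLM endpoint; the point is that "teaching" means adding material, including code-review feedback, to a local retrieval corpus that gets injected into the prompt, while the model's weights never change and the data never leaves your own infrastructure.

```python
# Minimal RAG sketch: an off-the-shelf LLM plus a local retrieval store.
# embed() and call_llm() are stand-ins, not any vendor's actual API.
import math
from collections import Counter

documents: list[str] = []  # local corpus: internal docs, past review feedback, etc.

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a simple bag-of-words vector.
    return Counter(text.lower().split())

def similarity(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def add_document(text: str) -> None:
    # "Teaching" in the RAG sense: new material (including code-review notes)
    # goes into the retrieval corpus, not into the model's weights.
    documents.append(text)

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would hit a locally hosted or vendor LLM endpoint.
    return f"[LLM response conditioned on a prompt of {len(prompt)} chars]"

def answer(question: str, top_k: int = 2) -> str:
    # Retrieve the most relevant local documents and inject them into the prompt.
    q_vec = embed(question)
    ranked = sorted(documents, key=lambda d: similarity(q_vec, embed(d)), reverse=True)
    context = "\n".join(ranked[:top_k])
    prompt = f"Use only this internal context:\n{context}\n\nQuestion: {question}"
    return call_llm(prompt)

# Review feedback becomes retrievable context for the next generation pass.
add_document("Code review note: parse_config() does not exist; use load_config() instead.")
add_document("Style guide: prefer descriptive variable names over single letters.")
print(answer("Which function loads configuration?"))
```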
Which, again, gets back to the idea of having less busywork. Less grunt work. Less charlie work. Instead, focus on developers who can actually contribute to the team and to design meetings.
And the model I learned early in my career, and bring to every firm, is to have interns be a reward for talented engineers and not a punishment for people who weren't paying attention in Nose Goes. Teaching a kid to write a bunch of utility functions teaches them nothing they didn't already learn (or fail to learn) in undergrad, but it is a necessary evil... that an AI can do.
Instead, the people who are good at their jobs and contributing to the overall product? They probably have ideas they want to work on but don't have the cycles to flesh out. That is where interns come into play. They work with those devs and other staff and learn what it means to actually be part of a team. They get to work on really cool projects and their mentors get to ALSO work on really cool projects but maybe focus more on the REALLY interesting parts and less on the specific implementation.
And the result is that your interns are now actually developers who are worth a damn.
Also: One of the most important things to teach a kid is that they owe the company nothing. If they aren't getting the raise they feel they deserve, then they need to be updating their LinkedIn and interviewing elsewhere. That is good for the worker. And it also means that the companies that spend a lot of money training up grunts? They will lose them to the companies who are desperate for people who can lead projects and contribute to designs, but haven't been wasting money on writing unit tests.