It found it correctly in 8 of 100 runs and reported a false finding in 28 runs. The remaining 64 runs found nothing and can be discarded, so a person would only need to review 36 reports. For the LLM, 100 runs would take minutes at most, so the time requirement is minimal and the cost would be trivial compared to the cost of 100 humans learning a codebase and writing a report.
So a security researcher feeds in the codebase and in a few minutes has 36 bug reports they need to test. If they know that 2 in 9 of them are real zero-day exploits, then discovering new zero-days becomes a lot faster.
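To make the arithmetic concrete, here is a minimal back-of-the-envelope calculation in Python using the run counts quoted above; the per-run cost figure is purely an illustrative assumption, not a number from the blog post.

```python
# Back-of-the-envelope triage math for the numbers quoted above.
total_runs = 100
true_positives = 8       # runs that correctly identified the real bug
false_positives = 28     # runs that reported a bug that wasn't there
no_finding = total_runs - true_positives - false_positives  # 64, discarded

reports_to_review = true_positives + false_positives        # 36
hit_ratio = true_positives / reports_to_review               # 8/36 = 2 in 9

assumed_cost_per_run_usd = 1.00   # hypothetical figure, for illustration only
total_llm_cost = total_runs * assumed_cost_per_run_usd

print(f"reports to review: {reports_to_review}")
print(f"chance a reviewed report is real: {hit_ratio:.0%}")
print(f"assumed LLM cost for {total_runs} runs: ${total_llm_cost:.2f}")
```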
If a security researcher had the choice between reading an entire codebase or reviewing 40 bug reports, 10 of which contained a new bug, they would choose the bug reports every time.
That isn't to say that people should be submitting LLM-generated bug reports to developers on GitHub. But as a tool for a security researcher to use, it could significantly speed up their workflow in some situations.
It found it 8/100 times when the researcher gave it only the code paths he already knew contained the exploit; essentially, it was led down the garden path.
The test with the actual full suite of commands passed in the context found it only 1/100 times, and we didn't get any info on the number of false positives they had to wade through to find it.
This is also assuming you can automatically and reliably filter out false negatives.
He even says the ratio is too high in the blog post:
From the blog post: https://sean.heelan.io/2025/05/22/how-i-used-o3-to-find-cve-2025-37899-a-remote-zeroday-vulnerability-in-the-linux-kernels-smb-implementation/
The point is that LLM code review can find novel exploits. The author gets these results using a base model and a simple workflow, so there is a lot of room for improving the accuracy and outcomes of such a system.
A human may do it better on an individual level, but it takes a lot more time, money, and effort to make and train a human than it does to build an H100. This is why security audits are long, manual, and expensive processes that require human experts. Because of this, exploits can exist in the wild for long periods of time, simply because we don't have enough people to security-audit every commit.
This kind of tool could make security auditing a checkbox in your CI system.
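As a rough illustration of what that checkbox might look like, here is a hedged sketch of a CI gate script in Python. The `llm-audit` command, its flags, and the JSON report shape are hypothetical placeholders of my own invention, not a tool described in the blog post; a real pipeline would substitute its own auditing step.

```python
"""Hypothetical CI gate: run an LLM security audit over a pull request's diff.

The `llm-audit` CLI, its flags, and the JSON report fields are illustrative
assumptions; any real pipeline would substitute its own tooling.
"""
import json
import subprocess
import sys


def changed_files(base: str = "origin/main") -> list[str]:
    # Ask git which C source files the pull request touches.
    out = subprocess.run(
        ["git", "diff", "--name-only", base],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line.endswith((".c", ".h"))]


def run_audit(files: list[str]) -> list[dict]:
    # Invoke the (hypothetical) auditing tool on each changed file and
    # collect its JSON findings into one list.
    findings = []
    for path in files:
        out = subprocess.run(
            ["llm-audit", "--format", "json", path],
            capture_output=True, text=True, check=True,
        )
        findings.extend(json.loads(out.stdout))
    return findings


if __name__ == "__main__":
    reports = run_audit(changed_files())
    serious = [r for r in reports if r.get("severity") in ("high", "critical")]
    for r in serious:
        print(f"{r['file']}:{r.get('line', '?')} {r['summary']}")
    # Fail the CI job if anything serious was flagged, so a human must triage it.
    sys.exit(1 if serious else 0)
```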
There are a lot of assumptions laced into that about LLMs reliably getting better over time...
But so far they have gotten steadily better, so I suppose there's enough fuel for optimists to extrapolate that out into a positive outlook.
I'm very pessimistic about these technologies, and I feel like we're at the top of the sigmoid curve for "improvements," so I don't see LLM tools getting substantially better than this at analyzing code.
If that's the case, I don't feel like having hundreds and hundreds of false security reports creates the mental conditions that allow researchers to actually spot the genuine report among all the slop.
We only know if we're at the top of the curve if we keep pushing the frontier of what is possible. Seeing exciting paths is what motivates people to try to get the improvements and efficiencies.
I do agree that the AI companies are pushing a ridiculous message, as if LLMs are going to replace people next quarter. I too am very pessimistic on that outcome, I don't think we're going to see LLMs replacing human workers anytime soon. Nor do I think GitHub should make this a feature tomorrow.
But machine learning is a developing field, so we don't know what efficiencies are possible. We do know that intelligence can be created out of human brains, so it seems likely that whatever advancements we make in machine learning will move at least in the direction of the efficiency of human intelligence.
It could very well be that you can devise a system which verifies hundreds of false security reports more easily than a human can audit the same codebase. The author didn't explore how he did this, but he seems to have felt that it was worth his time.
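For what it's worth, one plausible shape for such a triage system is sketched below. This is purely my own assumption about how repeated LLM findings might be ranked, not anything described in the blog post: reports from many runs are grouped by the function they implicate, on the theory that real bugs tend to be rediscovered across independent runs while hallucinations scatter. The function names in the toy data are placeholders, not real kernel symbols.

```python
"""Hypothetical triage pass over many LLM audit runs.

Assumption: each run emits findings as dicts with 'function' and 'claim'
keys. Findings that cluster on the same function get reviewed first.
"""
from collections import Counter


def rank_findings(runs: list[list[dict]]) -> list[tuple[str, int]]:
    # Count how many independent runs implicated each function.
    counts = Counter()
    for findings in runs:
        implicated = {f["function"] for f in findings}  # dedupe within a run
        counts.update(implicated)
    # Most frequently implicated functions go to the top of the review queue.
    return counts.most_common()


# Toy example with placeholder function names: three runs, two of which
# flag the same handler.
runs = [
    [{"function": "session_logoff_handler", "claim": "use-after-free"}],
    [{"function": "lease_parser", "claim": "out-of-bounds read"}],
    [{"function": "session_logoff_handler", "claim": "use-after-free"}],
]
for func, hits in rank_findings(runs):
    print(f"{hits} run(s) flagged {func}")
```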