I’ll take a more secure Firefox. If this is how it is achieved, so be it.
Yeah I’m actually kinda into this. Even if the AI vomits up a bunch of hallucinated vulnerabilities, there’s a team of (presumably) capable people there to figure that out. Seems like a pretty valid use for the technology.
Hallucinated? From researched and documented code spelunking?
That’s…exactly my point though…
What is?
That even though the team is using AI to check for vulnerabilities, they’re trained to recognize when their AI is hallucinating and when it’s not.
I guess I’m not sure how hallucinating and reading from source code are overlapping. Do you think these models are just barfing back garbage nonsense?
Do you somehow not? Open source projects have been running out of resources because they’re overwhelmed with bogus bug reports filed by AI.