Hardening Firefox with Anthropic’s Red Team (blog.mozilla.org)
Posted by Beep@lemmus.org to Technology@lemmy.world · English · edited 3 days ago · 9 comments
PabloSexcrowbar@piefed.social · 3 days ago
Yeah I’m actually kinda into this. Even if the AI vomits up a bunch of hallucinated vulnerabilities, there’s a team of (presumably) capable people there to figure that out. Seems like a pretty valid use for the technology.

lIlIlIlIlIlIl@lemmy.world · 2 days ago
Hallucinated? From researched and documented code spelunking?

PabloSexcrowbar@piefed.social · 2 days ago
That’s…exactly my point though…

What is?

PabloSexcrowbar@piefed.social · 2 days ago
That even though the team is using AI to check for vulnerabilities, they’re trained and know when their AI is hallucinating and when it’s not.

lIlIlIlIlIlIl@lemmy.world · 2 days ago
I guess I’m not sure how hallucinating and reading from source code are overlapping. Do you think these models are just barfing back garbage nonsense?

PabloSexcrowbar@piefed.social · 2 days ago
Do you somehow not? Open source projects have been running out of resources because they’re overwhelmed with bogus bug reports filed by AI.