Hardening Firefox with Anthropic’s Red Team (blog.mozilla.org)
Posted by Beep@lemmus.org to Technology@lemmy.world · English · edited 3 days ago
What is?

PabloSexcrowbar@piefed.social · 3 days ago
That even though the team is using AI to check for vulnerabilities, they’re trained and know when their AI is hallucinating and when it’s not.

lIlIlIlIlIlIl@lemmy.world · 3 days ago
I guess I’m not sure how hallucinating and reading from source code are overlapping. Do you think these models are just barfing back garbage nonsense?

PabloSexcrowbar@piefed.social · 3 days ago
Do you somehow not? Open source projects have been running out of resources because they’re overwhelmed with bogus bug reports filed by AI.