Another fun mistake the AI analyzer found:
One of the curl test servers (for SOCKS) had a help text output listing around ten command line options. One of the options it listed was never implemented and thus didn't work. The AI found this out and reported it.
Kind of cool.
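To make the bug class concrete, here is a minimal sketch in C. It is hypothetical code, not curl's actual SOCKS test server: the usage() text advertises a --backend option that the parsing loop never handles, so the help output and the implementation disagree in exactly the way described above.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* help text: note that it advertises --backend */
static void usage(void)
{
  puts("Usage: sockd [options]");
  puts(" --port <num>  listen on this port");
  puts(" --backend     (listed here, but never implemented below)");
}

int main(int argc, char **argv)
{
  int port = 8080;
  int i;
  for(i = 1; i < argc; i++) {
    if(!strcmp(argv[i], "--port") && (i + 1 < argc))
      port = atoi(argv[++i]);
    /* there is no branch for --backend: passing it lands in the else
       and fails, even though the help text claims the option exists */
    else {
      usage();
      return 1;
    }
  }
  printf("listening on port %d\n", port);
  return 0;
}
```

A mismatch like this is easy to miss in review because nothing fails until someone actually tries the advertised option; a tool that cross-checks the help text against the parser can catch it without ever running the server.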
Pablo Martínez
in reply to daniel:// stenberg://

daniel:// stenberg://
in reply to Pablo Martínez
A new breed of analyzers (daniel.haxx.se)
devSJR
in reply to Pablo Martínez
Yes, he discussed slop, but in all those cases it was purely machine-fabricated and not checked by a human expert. And I think that's what it is all about: if there's a human in the loop, it can be a handy tool. On a side note, the European AI Act also requires a human in the loop for the final check. And back in the days when there were high hopes for decision support systems, people also said they wouldn't work without a human in the loop.
@bagder
daniel:// stenberg://
in reply to devSJR

Bradalot “”
in reply to daniel:// stenberg://
@devSJR @pablo_martan
Another example of powerful tools used productively in the hands of experts (which famously create chaos in other contexts).
It's why "Don't try this at home." is a thing.
Osma A 🇫🇮🇺🇦
in reply to daniel:// stenberg://
@bagder
Grampa
in reply to daniel:// stenberg://

daniel:// stenberg://
Unknown parent
@nina_kali_nina all analyzer tools, including compilers, produce a certain number of false positives. I don't think we should expect AI tools to be any different, as long as the frequency is manageable and there are decent ways to inhibit them.
The AI tools I've mentioned recently don't seem to produce many more false positives than the state-of-the-art static code analyzers we also use.
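As one concrete illustration of "decent ways to inhibit them": most static analyzers accept inline suppression comments, so a single reviewed false positive can be silenced without disabling the whole check. Below is a small sketch with contrived code; the two comment syntaxes shown (cppcheck's cppcheck-suppress and clang-tidy's NOLINTNEXTLINE) are real mechanisms, but whether an analyzer actually warns on these exact lines depends on its configuration.

```c
/* Contrived example of per-line analyzer suppression. */
static int demo(void)
{
  /* clang-tidy: NOLINTNEXTLINE silences one named check on the
     following line only */
  // NOLINTNEXTLINE(readability-magic-numbers)
  int port = 8080;

  /* cppcheck: suppress one named finding on the next line */
  /* cppcheck-suppress unreadVariable */
  int scratch = 0;

  return port;
}
```

The point of per-finding suppression is that it leaves an audit trail: each silenced warning names the check and sits next to the code it covers, so a reviewer can still see what was judged a false positive and why.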