I wonder if you could just write some easy test that a real security researcher could easily pass but where those AIs fail. And for each report, you send it to the reporter privately and ask for the solution before you even read the report. Something where variable names and comments are misleading, maybe, to confuse the AI?

```
// explain why this is vulnerable
void vulnerable() { strncpy(); }
// why is this ok
void safe() { strcpy(); }
```

something like that?
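Maybe fleshed out a bit like this, just a sketch (the buffer sizes, parameter names and the main() harness are made up for illustration), where the names and comments say the opposite of what the code actually does:

```
#include <string.h>

// The name and comment claim this is vulnerable, but the bounded copy
// plus explicit termination is actually fine.
// explain why this is vulnerable
void vulnerable(const char *input) {
    char buf[16];
    strncpy(buf, input, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';
}

// The name and comment claim this is ok, but strcpy() has no bound:
// any input longer than 15 bytes overflows buf.
// why is this ok
void safe(const char *input) {
    char buf[16];
    strcpy(buf, input);
}

int main(void) {
    vulnerable("this long input gets truncated safely");
    safe("ok"); // a longer, attacker-controlled string here would smash the stack
    return 0;
}
```

A human reading the code sees through the labels immediately; an AI that leans on the names and comments gets it backwards.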