in reply to daniel:// stenberg://

I wonder if you could just write a simple test that a real security researcher would easily pass but where those AIs fail. Then, for each report, send it to the reporter privately and ask for the solution before you even read their submission. Maybe something where the variable names and comments are misleading, to confuse the AI?
```
#include <string.h>

// explain why this function is vulnerable
void vulnerable(char *dst, const char *src, size_t n) {
    strncpy(dst, src, n);  // bounded copy: the name is the trap
}

// explain why this function is safe
void safe(char *dst, const char *src) {
    strcpy(dst, src);  // unbounded copy: this is the real bug
}
```
something like that?