in reply to daniel:// stenberg://

the same reason as for #Linux, I guess, and the same reason why I do all the #OS1337 code in #bash with only .config makefiles where needed:

Readable and thus easy to #audit code allows for #transparency, which is vital for #maintainability and #security...

After all, mistakes do happen, and I'd rather have them easy to find and fix than optimize every bit at the cost of unmaintainable code.

in reply to daniel:// stenberg://

thanks for a great write-up and some interesting insights. I completely agree that the people calling for "just re-write it" surely cannot comprehend a fraction of the scale of the task and the complexity that it would involve.
in reply to daniel:// stenberg://

the "Lines of libcurl code per function use" graph will show up on the curl dashboard so that I can keep an eye on how this develops going forward
in reply to daniel:// stenberg://

is this a very basic analysis, just counting total uses / total LoC? I'm curious whether published research has found correlations on a more tightly-scoped metric, like uses per-function or per-call graph.
in reply to Ted Mielczarek

@tedmielczarek you mean correlation as in whether it reduces problems? I don't think we can ever tell that for sure, and we certainly cannot do it until a number of years have passed, since the average security problem lingers for many years before it is found.
in reply to daniel:// stenberg://

yeah, like I've seen plenty of research that tries to study likelihood of defects based on things like function length/complexity/whatever.
in reply to daniel:// stenberg://

shameless plug: have you ever considered relying on static analyzers (e.g. SonarCloud / SonarLint) to try to detect out-of-bounds memory access, unsafe use of tainted (user-controlled) input and other common causes of security issues and bugs?
in reply to JB Lièvremont

@mithfindel we use lots of static code analyzers all the time and have done so for many years. Static code analyzers can only do so much.
in reply to daniel:// stenberg://

to offer you some specific feedback: I also tried sonarcloud but disabled it again; it was not to my liking as it spammed every PR.
in reply to daniel:// stenberg://

thanks!

As far as I can tell (I'm in the SonarLint team, not SonarCloud), providing feedback on each PR is one of the core principles of the "clean as you code" approach pushed by Sonar. And I can understand that in some cases, it can generate some unwanted noise 😅.

If you have the time, I believe our PMs love constructive feedback: if the curl project has a need for some pull requests not to be analyzed, chances are that need is shared by other projects too.

in reply to daniel:// stenberg://

IIRC, there is an explicit project setting to control this behavior, if it can help.
in reply to daniel:// stenberg://

I get that you're not into Rust - but this statement is just opening you up to flaming:

"Rust is cool, but the language, its ecosystem and its users are rookies and newbies for system library level use."

I certainly wouldn't be described as a rookie by anyone.

in reply to daniel:// stenberg://

It smacks of C elitism - I'm sure you didn't intend that (context: I'm a long-term C dev who left for greener pastures).

Do you mean the ABI of Rust itself? One could argue that it's OK to whack an `extern "C"` wrapper onto the Rust lib and use symbol version scripts (a minimal sketch follows this post).

FWIW - I'm not in the "you should RIIR" crowd, I'm more in the upgrade-component-by-component crowd ^^
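
To make the `extern "C"` wrapper idea above concrete, here is a minimal, hypothetical sketch; the function name and error codes are invented for illustration and are not from curl or any real crate. It exports a Rust function with a C ABI that reports failures through a return code, and the resulting symbol could then be pinned with a linker version script as suggested.

```rust
// Hypothetical example of exposing Rust behind a plain C ABI.
#[no_mangle]
pub extern "C" fn example_parse(input: *const u8, len: usize) -> i32 {
    if input.is_null() {
        return -1; // C-style error code instead of a panic
    }
    // Safety: the caller promises `input` points to `len` readable bytes.
    let bytes = unsafe { std::slice::from_raw_parts(input, len) };
    match std::str::from_utf8(bytes) {
        Ok(_) => 0,   // success
        Err(_) => -2, // invalid input, reported to the C caller
    }
}
```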

in reply to Ikey Doherty 🐍

I don't follow. When would I mean the ABI? When I say rust is new for system libraries? No, I don't. Maybe I should just ask you how long rust has been able to return an error instead of panicking on out of memory?
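
For context on what "return an error instead of panicking on out of memory" looks like in practice, here is a minimal sketch using `Vec::try_reserve`, which, as far as I know, was stabilised in Rust 1.57 (late 2021); the function name is illustrative only.

```rust
use std::collections::TryReserveError;

// Fallible allocation: report allocation failure to the caller
// instead of aborting the process.
fn copy_input(input: &[u8]) -> Result<Vec<u8>, TryReserveError> {
    let mut buf = Vec::new();
    buf.try_reserve(input.len())?; // returns Err on allocation failure
    buf.extend_from_slice(input);
    Ok(buf)
}
```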
in reply to daniel:// stenberg://

you're conflating the standard library with the language. There are various no-panic style decorator crates, the option to rebuild libstd, to use no_std, or to write codepaths that don't panic.

The same argument is true in C libraries when `abort()` is called instead of returning an error.

in reply to Ikey Doherty 🐍

@ikey I can't separate them for a library written in rust. When a library cannot avoid panicking, it is just... wrong.
in reply to daniel:// stenberg://

@ikey and yes, I believe they are fixing this problem. But this is what I mean when I say it is new. Very new.
in reply to daniel:// stenberg://

a sensible person would write the stubbed C FFI skeleton and invoke the Rust code via std::panic::catch_unwind, so that any remaining panics are caught rather than becoming UB by unwinding across the FFI boundary (a minimal sketch follows this post). I'm pretty sure curl avoids glib2 for similar reasons.

I've said my piece - you apply different logic to C vs Rust because you favour C, and I got some PTO to take tbh. :)
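
For illustration, a minimal, hypothetical sketch of the `catch_unwind` pattern mentioned above (the function name and error codes are invented for the example): any panic is caught at the FFI boundary and converted into an error code for the C caller.

```rust
use std::panic;

#[no_mangle]
pub extern "C" fn example_do_work(value: u32) -> i32 {
    // Catch any panic here so it cannot unwind across the C boundary.
    let result = panic::catch_unwind(|| {
        // Hypothetical fallible Rust code.
        value.checked_mul(2).expect("overflow")
    });
    match result {
        Ok(_) => 0,   // success
        Err(_) => -1, // a panic happened; map it to an error code
    }
}
```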

in reply to daniel:// stenberg://

It's not hiding - it's clearly displayed on my profile. And I've only recently switched to Rust after avoiding it for years, using C/C++/D/etc.

You demonstrate an unwillingness to be reasonable outside of C, and tbh I find the "we have fewer CVEs than the other guys" argument extremely brittle.

I've tried my utmost to be cordial here but it's like talking to a brick wall. See ya.

in reply to daniel:// stenberg://

@ikey I guess that about sums up the 'rust crowd'. They tell you you are dumb for not using rust, and they won't listen when you tell them the reasons you are not using it.

A language that is defined by the compiler accepting the code, with biyearly updates to said compiler and your code randomly failing to build after said updates, for system libraries?

Eww. No, thanks. 😑

in reply to hramrach

@hramrach as a Rust gal I don't support the way Ikey handled the situation, but the "code randomly failing to build" is, like, incredibly common with C compilers and dare I say way more so than with Rust. Just yesterday I had to debug a new clang/old xmlsec1 compile incompatibility. I can't recall the last time I had to do that with old Rust projects.
in reply to daniel:// stenberg://

Thanks for your post, very interesting. Maintaining big C codebases is for sure a challenge.
in reply to Edmundo Ruiz Ghanem

@edmundo I don't know. But libraries that provide functions that let us reduce the number of "difficult functions" would probably help.
in reply to daniel:// stenberg://

Excellent post - it’s great that you were able to get all of that data. From the data, have you seen any specific correlations between the introduction of certain practices (static analysis, linters, etc.) and a change in the number or type of vulnerabilities? Are you able to tell which good practices have had the biggest impact?
in reply to Karl Gutwin

@kgutwin it is very hard to see a direct correlation, partly because security problems linger a long time before being found. We need to give it more time before we can tell for sure.