

Last week I tweaked my number-of-test-cases-over-time in #curl graph to also feature number of lines of code, for comparison:
in reply to daniel:// stenberg://

For like half a second, I thought this chart showed that the number of lines per test was roughly 1:1.

(That being said, it looks like the two at least scale together somewhat linearly.)

in reply to Drew Hays

@dru89 I'd say it shows that we SLOWLY over time decrease the number of lines of code per test case. Which seems good.
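
A minimal sketch of how one might reproduce that ratio over curl's release history, assuming the usual repo layout where test cases are tests/data/testNNNN files and the library code lives under lib/; the curl-8_* tag pattern is just illustrative, and older tags may use a different layout:

    #!/usr/bin/env python3
    """Rough sketch: lines-of-code-per-test-case ratio across curl release tags."""
    import subprocess

    def run(*args):
        return subprocess.run(args, capture_output=True, text=True, check=True).stdout

    def stats_at(tag):
        files = run("git", "ls-tree", "-r", "--name-only", tag).splitlines()
        tests = [f for f in files if f.startswith("tests/data/test")]
        sources = [f for f in files if f.startswith("lib/") and f.endswith((".c", ".h"))]
        # Counting raw lines per file via "git show" is slow but keeps the sketch simple.
        loc = sum(len(run("git", "show", f"{tag}:{f}").splitlines()) for f in sources)
        return len(tests), loc

    for tag in run("git", "tag", "--list", "curl-8_*").splitlines():
        ntests, loc = stats_at(tag)
        if ntests:
            print(f"{tag}: {ntests} tests, {loc} LOC, {loc / ntests:.0f} lines per test")

Run from inside a curl checkout, this prints one ratio per matching tag, so a falling "lines per test" number over successive releases is the trend described above.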
in reply to daniel:// stenberg://

Makes me curious if the ratio is roughly the same across the board, or if there are hotspots.
in reply to Gen X-Wing

You mean in regard to which areas of the code they test? There are still "white spots" for sure where test coverage is bad.
in reply to daniel:// stenberg://

I was thinking less like a manager and more like a nerd. Are there pieces of code that are very heavily tested (more tests, more lines in the tests) vs others that aren’t?

Because the very heavily tested ones (if there are any) would be interesting to look at. Probably solving some cool technical problem then.

Vs scaffolding code such as “read a command line option”. Which is kinda boring.

So just out of pure childlike curiosity :)

in reply to Gen X-Wing

@breadbin we don't have exact knowledge of that, as it's quite a complicated question to answer.
in reply to daniel:// stenberg://

I’ve seen coverage reports that tell you how many times a line was run during the tests, but that can be skewed by things such as loops.

Either way, I just got curious as curl is a fairly unusual piece of software and quite interesting in many ways beyond “it’s one heck of a useful tool”:)
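
A toy illustration of that loop-skew point: a Python trace hook that counts per-line executions the way gcov-style line counts do. The function names are made up for illustration; the point is that the loop body lines collect one hit per iteration from a single test call, which says nothing about how well they are tested:

    import sys
    from collections import Counter

    hits = Counter()

    def tracer(frame, event, arg):
        # Count every executed line in this file, gcov-style.
        if event == "line" and frame.f_code.co_filename == __file__:
            hits[frame.f_lineno] += 1
        return tracer

    def parse_header(value):   # imagine this is the "cool" code path
        return value.strip().lower()

    def read_options(argv):    # and this the boring scaffolding
        for arg in argv:       # the lines here run once per argument
            parse_header(arg)

    sys.settrace(tracer)
    read_options(["A", "B", "C", "D", "E"])
    sys.settrace(None)

    for lineno, count in sorted(hits.items()):
        print(f"line {lineno}: ran {count} time(s)")

One call exercises read_options once, yet its loop lines report several hits, so raw execution counts overstate how thoroughly looping code is covered.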

in reply to Gen X-Wing

@breadbin unfortunately, getting proper and true coverage data for the curl tests is quite a complicated beast so we don't have that.