in reply to daniel:// stenberg://

> Every CVE filed to MITRE is supposed to have a CVSS score set.

That's odd. I requested CVEs through them (cveform.mitre.org/), and there is no field to provide a CVSS score.
Not sure if that makes it better or worse. Either someone without a clue about the system rates it, or the researcher initially rates it higher to make it "more important"? Both are terrible choices.

in reply to daniel:// stenberg://

I hear you loud and clear and agree. But as you say: "a little cog in a big machine". So many standards we have to adhere to (like PCI-DSS) use CVSS as a basis and basically force us to use it. I think not a single week passes where I don't have to adjust the scores because they are too broad. And unfortunately it's unlikely things will change, because too much money is bound up in all that stuff.
in reply to daniel:// stenberg://

To me it seems like CVSS is trying to do a dozen things at the same time, and the scores are potentially provided by multiple groups with conflicting ideas about what it should be, based on what they're trying to make it do and their own (explicit or implicit) goals. There's no way that could ever work.

Heck, even simple risk assessment is two-dimensional: how likely is the risk, and how big is the impact when it happens? CVSS tries to flatten that, plus many more properties, onto a single axis, and of course the result is useless.

It would be much better if CVSS were just dropped wholesale and replaced with scores that have a relevant target. At the very least we could have the two axes that normal risk assessment uses, which would solve most of the CVSS silliness already. Add a third axis/score for user involvement, too, since a bug that can be triggered without the user acting is much worse than one the user has to choose to run.

For example, your 9.8 "integer overflow could be abused" from a year ago would be marked "impact minimal", "user invoked", "likelihood certain". Maybe even add a somewhat formatted field to indicate which platforms are affected, so things like vulnerability scanners can check whether their platform is even listed in the first place. A critical Windows vulnerability that does not exist on Macs should not trigger a forced update.
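A minimal sketch of what such a multi-axis advisory record could look like; all field names and value sets here are invented for illustration, not any real standard:

```python
from dataclasses import dataclass, field

# Hypothetical axes, as proposed above (not an existing scheme).
IMPACT = ("minimal", "moderate", "severe")
LIKELIHOOD = ("unlikely", "possible", "certain")
INVOCATION = ("user invoked", "automatic")

@dataclass
class Advisory:
    summary: str
    impact: str          # how bad is it when it happens
    likelihood: str      # how likely is it to happen
    invocation: str      # does a user have to act to trigger it
    platforms: list[str] = field(default_factory=list)

    def affects(self, platform: str) -> bool:
        # An empty platform list means "all platforms"; otherwise a
        # scanner can skip advisories for platforms it does not run on.
        return not self.platforms or platform in self.platforms

adv = Advisory(
    summary="integer overflow could be abused",
    impact="minimal",
    likelihood="certain",
    invocation="user invoked",
    platforms=["windows"],
)
print(adv.affects("macos"))  # a Mac-only scanner could ignore this one
```

The point of the structured `platforms` field is exactly the scanner use case above: no forced update for a vulnerability that cannot exist on your platform.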

in reply to daniel:// stenberg://

One approach you could take, which doesn't involve fictional CVSS scores but does avoid the risk of CISA (or others) causing you this kind of problem, would be a policy of using the mid-point of the severity range of each of your four severity bands as the score.

That gives people a good idea of what the severity is, and avoids the problem you had here.
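As a sketch of that policy, assuming the standard CVSS v3 qualitative bands (LOW 0.1–3.9, MEDIUM 4.0–6.9, HIGH 7.0–8.9, CRITICAL 9.0–10.0); the mid-point policy itself is just this suggestion, not an official scheme:

```python
# Fixed mid-band scores for each qualitative severity
# (mid-points of the CVSS v3 bands, rounded to one decimal).
BAND_MIDPOINTS = {
    "LOW": 2.0,       # (0.1 + 3.9) / 2
    "MEDIUM": 5.5,    # (4.0 + 6.9) / 2, rounded
    "HIGH": 8.0,      # (7.0 + 8.9) / 2, rounded
    "CRITICAL": 9.5,  # (9.0 + 10.0) / 2
}

def policy_score(severity: str) -> float:
    """Return the fixed mid-band score for a project-assigned severity."""
    return BAND_MIDPOINTS[severity.upper()]

print(policy_score("high"))  # 8.0
```

Anyone consuming the number can then map it straight back to the band it came from, which is all it was ever meant to convey.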

in reply to daniel:// stenberg://

Very interesting read! I personally found CVSS a good weapon for convincing managers of the severity of fixing stuff, as they like metrics. I wonder what I could use instead; high/critical doesn't have the same gravity as a number, IMO 😀
Other than that, it doesn't really make much difference for me without it, and I know better now than to focus on the score.
in reply to daniel:// stenberg://

The sad thing is: it could be a good system if used properly. It captures a lot of useful properties in the vector, especially when you go beyond the base score and include temporal and environmental score.

In my fever dreams, I'm imagining a system that captures my overall system architecture, including security requirements, security boundaries, etc., so that when a CVE comes in, it automatically calculates an environmental score, based e.g. on "that's a local vuln, hard to exploit, on an appliance that doesn't really have any internal security boundaries anyway, on a physically protected network behind seven firewalls, get lost" or "unauthenticated remote code execution in all configurations on a service handling valuable data, actually exposed to the Internet, actively exploited, all hands on deck NOW".

So from a vulnerability-information consumer's point of view, data points like exploitability, authenticated vs. unauthenticated, local or remote, actually confirmed by a person with an understanding of the code, make a huge difference, and it would be good to have them in a machine-readable format. Of course, a 9.8 slapped on by some analyst who doesn't even bother to look at the code doesn't help anyone.
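A toy version of that fever dream could look like the sketch below: take an incoming base score and adjust it with machine-readable facts about the deployment. The modifiers and weights are entirely made up for illustration and are not CVSS's actual environmental-score formula:

```python
def environmental_score(base: float, *, remote: bool, authenticated: bool,
                        exposed_to_internet: bool,
                        actively_exploited: bool) -> float:
    """Adjust a base score with hypothetical deployment-specific modifiers."""
    score = base
    if not remote:
        score -= 2.0   # local-only vuln on an appliance matters less
    if authenticated:
        score -= 1.0   # the attacker already needs credentials
    if not exposed_to_internet:
        score -= 2.0   # behind seven firewalls, not reachable
    if actively_exploited:
        score += 1.5   # all hands on deck NOW
    return max(0.0, min(10.0, score))  # clamp to the usual 0-10 range

# "unauthenticated RCE, exposed, actively exploited" stays critical:
print(environmental_score(9.8, remote=True, authenticated=False,
                          exposed_to_internet=True, actively_exploited=True))
```

The interesting part is that the inputs are the machine-readable data points mentioned above; given those, each organization could plug in its own weights instead of accepting one global number.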

in reply to daniel:// stenberg://

Really good article. My experience with "security experts" is that most actually have very limited knowledge of the field, and lack critical thinking. This leads to an almost blind trust in tools that spit out reports full of CVSS scores which can easily be exported to nice-looking spreadsheets.

Unfortunately, those tend to be taken as gospel by management, because management never has a clue about anything.

#security #infosec

in reply to daniel:// stenberg://

Well, maybe solving this by allowing a custom per-project policy is a way to transition smoothly to something better.
AFAIU, end users (like vulnerability scanners) rely on the LOW/HIGH/CRITICAL/... scale anyway. So most of the time the number is just an intermediate step between curl's scale and the scanner's scale, which use the same system.