
Our Stance On Risk Scoring

It's a story that runs rampant throughout the security industry right now: your security team is dealing with a seemingly endless stream of alerts, vulnerabilities, and misconfigurations. What do you do when you have too much to do? The typical answer: prioritize and work on what matters most.

It really does make sense in theory, but in practical application everyone hits the same problem - how do you actually know that what was prioritized makes sense for your enterprise? What if the algorithm that creates the priority score completely missed critical context about your organization? How is a third-party vendor supposed to have your business context when its platform is first deployed? Even worse, when you can't see the details of what went into these scores (because some are a black box), how can you adjust them to fit your business?

Our stance on risk scoring

All of these factors lead to where we've landed on risk scoring: it's a useful tool for getting an approximate priority of what to fix first, but no generic risk score can fit every business context. Instead, our goal as a company is to provide the analysis details behind the security checks we run and pass them downstream to our end users. With this, we hope to accomplish a few things:

  1. Our pass/fail determinations show their work, so anyone can double-check how we arrived at our results.
  2. Risk scores can be implemented downstream from our checks. We are not a black box, so if you want a risk score based on our analysis, we won't restrict access to the underlying data.
  3. We don't provide a false sense of security by offering scores we can't be confident in for your specific business context.

How, then, do we recommend teams deal with too many alerts?

Of course, there will be cases where enterprises require an out-of-the-box risk score, a baseline feature in many tools. We don't believe we play in that space; instead, here is what we recommend:

  1. Employ automation to quickly investigate and act on alerts before they require human input. Leveraging our APIs to automate analysis and then make determinations based on the results keeps your team focused on a smaller set of alerts (see the first sketch after this list).
  2. Implement your own risk score. This is easier said than done, but ultimately only you, as the business, understand your critical assets, data, users, and so on - context you may not want to send to a third-party vendor (see the second sketch after this list).
  3. Leverage standardized scores where applicable - CVSS base scores from NVD (https://nvd.nist.gov/) are publicly available and make a sensible base priority for CVEs; the second sketch below folds one into a custom score.
  4. Ultimately, your team should monitor signal-to-noise ratio closely, and alert tuning is critical. Treat false positives as opportunities to improve your alerting (a simple tuning check is sketched last below).
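
To make the first recommendation concrete, here is a minimal sketch of alert triage automation in Python. The endpoint paths and field names below are placeholders rather than our actual API; the point is that automation rules can be written directly against analysis details instead of an opaque score.

```python
import requests

# Placeholder endpoint and credentials for illustration only; substitute the
# real API base URL and token for whatever platform you are automating.
BASE_URL = "https://api.example-vendor.com/v1"
HEADERS = {"Authorization": "Bearer <api-token>"}


def triage_open_alerts() -> None:
    """Fetch open alerts, auto-resolve those whose checks all passed,
    and leave everything else in the queue for an analyst."""
    alerts = requests.get(f"{BASE_URL}/alerts?status=open",
                          headers=HEADERS, timeout=30).json()

    for alert in alerts.get("items", []):
        # Each alert is assumed to carry its underlying analysis details,
        # so an automation rule can inspect them directly.
        checks = alert.get("analysis", {}).get("checks", [])
        if checks and all(c.get("result") == "pass" for c in checks):
            requests.post(f"{BASE_URL}/alerts/{alert['id']}/resolve",
                          json={"reason": "auto-closed: all checks passed"},
                          headers=HEADERS, timeout=30)


if __name__ == "__main__":
    triage_open_alerts()
```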
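
For the second and third recommendations, the sketch below blends a public CVSS v3.1 base score fetched from the NVD API with business context only you can supply. The weights and the context inputs are illustrative placeholders, not a formula we endorse for every environment, and older CVEs may only carry CVSS v2 metrics.

```python
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"


def cvss_base_score(cve_id: str) -> float:
    """Look up the CVSS v3.1 base score for a CVE from the public NVD API.
    Returns 0.0 if no v3.1 metric is published for the CVE."""
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    vulns = resp.json().get("vulnerabilities", [])
    if not vulns:
        return 0.0
    metrics = vulns[0]["cve"].get("metrics", {}).get("cvssMetricV31", [])
    return metrics[0]["cvssData"]["baseScore"] if metrics else 0.0


def business_risk_score(cve_id: str, asset_criticality: float,
                        data_sensitivity: float, internet_exposed: bool) -> float:
    """Blend the standardized CVSS base score with business context.
    The 0-10 context inputs and the weights are placeholders to tune."""
    base = cvss_base_score(cve_id)                               # 0-10 from NVD
    context = 0.6 * asset_criticality + 0.4 * data_sensitivity   # 0-10, set by you
    exposure = 1.25 if internet_exposed else 1.0                 # bump internet-facing assets
    return min(10.0, (0.5 * base + 0.5 * context) * exposure)


# Example: a Log4Shell finding on an internet-facing payment service.
print(business_risk_score("CVE-2021-44228", asset_criticality=9.0,
                          data_sensitivity=8.0, internet_exposed=True))
```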
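
And for the last recommendation, a small example of watching signal-to-noise: count analyst dispositions per alert rule and flag rules whose false-positive rate crosses a threshold as candidates for tuning. The thresholds here are illustrative.

```python
from collections import Counter


def rules_needing_tuning(dispositions: list[tuple[str, str]],
                         min_alerts: int = 20,
                         max_fp_rate: float = 0.5) -> list[str]:
    """Flag alert rules whose false-positive rate suggests they need tuning.
    `dispositions` holds (rule_name, outcome) pairs, where outcome is either
    'true_positive' or 'false_positive'."""
    totals, false_positives = Counter(), Counter()
    for rule, outcome in dispositions:
        totals[rule] += 1
        if outcome == "false_positive":
            false_positives[rule] += 1
    return [rule for rule, count in totals.items()
            if count >= min_alerts and false_positives[rule] / count > max_fp_rate]
```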

If you're interested in trying our product or would like to discuss your thoughts on scoring, feel free to reach out to [email protected].