With the next blog posts, we dive deeper into vulnerability management. It is challenging to encapsulate the complexity of vulnerability management in just a few paragraphs. To fully cover it, one could easily write a complete guide or even a book. Therefore, I tried to find an appropriate balance between breadth and depth of the areas of vulnerability management for this series, so that you can dive deeper into specific topics as needed based on the references.
We will cover the following areas:
The order of the topics is based on the approach to start with the WHAT and WHY and move to the HOW.
While this series was initially planned as a trilogy, I split the planned part 2 further, as it would have been too long otherwise. This is what you can expect in the next posts:
In this blog post, we dive deeper into the WHAT and WHY of vulnerability management.
We will cover the following areas:
We will cover many options, but rest assured, it is possible to start with a small and pragmatic setup. Every step is better than no step at all.
Please note: The section about “Target Settings” is usually implemented by the GRC (Governance, Risk and Compliance) team and provided to the vulnerability management team. However, as this is a crucial input, it is covered here on a consultative basis. Do not be overwhelmed by it: it covers options, and lightweight approaches are possible.
As always, before you start, you should first gather your requirements (see also Part 1), define what you want to achieve and why, and derive from that what is in scope and what is out of scope. It is essential to define a measurable target that can be worked towards and to know “when to stop”. Furthermore, the handling of in-scope elements needs to be defined, as not all assets are equally important. [1]
This can be summarised as Risk-based vulnerability management (RBVM), which prioritizes vulnerabilities based on the specific risks they pose, rather than their sheer number. It evaluates vulnerabilities within a broader risk profile, considering asset criticality, exploitability, and real-world threat intelligence. RBVM aligns remediation efforts with the organization’s risk tolerance, business objectives, and security posture, ensuring targeted and efficient protection. It focuses on vulnerabilities most likely to impact operations. This approach shifts from quantity to quality, helping organizations allocate resources effectively and address critical vulnerabilities while minimizing efforts on low-risk issues. [2]
While covered under the same management practice, technical vulnerabilities of purchased software and systems should be distinguished from those of self-developed systems, as they partly follow different processes [3, p. 117] [4, pp. 7-8]. For in-house developed applications, next to testing the running application, the source code can additionally be tested (Dynamic Application Security Testing and Static Application Security Testing respectively) [5, pp. 84-85].
One needs to understand the enterprise context, including the relevant laws and regulations, as well as the technical environment and the current level of capabilities. Then the target state of the capabilities can be defined, and a gap analysis can be conducted to enable the creation of a project for the change. [6, pp. 66-67]
Scoping can be done in multiple dimensions. The following picture provides example dimensions that can be used for scoping. Not all dimensions need to be applicable to your organisation. However, it is important not only to define what is in scope, but also what is out of scope, to draw a clear line and avoid misunderstandings.

Another scoping question is non-technical: Where does vulnerability management start? Does it start late once the software is installed, or is it already a topic before the purchasing process and is reflected in the contract (e.g. providing SBOM, SLA for fixing security vulnerabilities, years of maintenance, etc.)?
Once you know what you cover in terms of scope with vulnerability management, the time to resolve a vulnerability needs to be defined.
Not every system is equally valuable, as the context plays a crucial role [7, p. 21]. Therefore, a structured approach is needed to assess the risk. Depending on the risk appetite, certain levels of risk are acceptable for a certain period of time.
You need to be prepared, as you should assume that every system has vulnerabilities eventually. [7, p. 21] We already discussed the difference between vulnerability rating and risks in Part 1.
Depending on the level of detail, you could also start with certain groups of systems [8, p. 46] [9, p. 20], which might be assessed by categories as:
Depending on the size of your IT infrastructure, qualitative assessments are not enough. Which finding on which host do you start working on when there are 500 findings in your most important categories?
Target setting and prioritising of the finding handling should be based on the risk it poses, so that findings are addressed which actually matter the most for your organisation. There are certain criteria which are usually provided through the GRC team, if present. Smaller organisations need to apply a different way to estimate the risk and use it to prioritise.
Now that we have an understanding of what a risk is, how can a risk be assessed?
Risks can be assessed qualitatively (scale of qualifying attributes, e.g. for the impact: very low, low, medium, high, very high) or quantitatively (numeric values, e.g. expected value of monetary loss) [10, pp. 4, 15, 19], depending on your needs.
Both are possible and they are not mutually exclusive. However, a qualitative assessment is usually faster and less resource-intensive, but less accurate, while a quantitative assessment is slower and more resource-intensive, but more accurate.
The level of risk is defined as “significance of a risk expressed in terms of the combination of consequences and their likelihood” [10, p. 4]
The risk is typically calculated as Risk = Impact x Likelihood [11, p. 31]
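As a minimal sketch, this formula can be implemented with a qualitative five-point scale for both factors; the scale values and the mapping of scores to risk levels below are illustrative assumptions, not taken from a standard:

```python
# Qualitative risk scoring: Risk = Impact x Likelihood on a 1-5 scale.
# The scale values and level thresholds are illustrative assumptions.
IMPACT = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}
LIKELIHOOD = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

def risk_score(impact: str, likelihood: str) -> int:
    return IMPACT[impact] * LIKELIHOOD[likelihood]

def risk_level(score: int) -> str:
    # Example mapping of the 1-25 score range to risk levels.
    if score >= 15:
        return "critical"
    if score >= 8:
        return "high"
    if score >= 4:
        return "medium"
    return "low"

print(risk_level(risk_score("high", "medium")))  # 4 * 3 = 12 -> "high"
```

Such a matrix is the typical qualitative shortcut; a quantitative approach would replace the ordinal values with monetary loss and probability estimates.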
Although this formula is well known, “[t]he de facto standard prioritization language is CVSS” [12]. In the following sub-chapters, we will look at different aspects that can be used to evaluate a risk and show why relying solely on the CVSS score falls way too short.
The Consequences (also called impact [13, p. 26] [14, p. 134]) are the “outcome of an event affecting the organisation” [10, p. 4]
The actual asset value or business impact needs to be considered to make the assessment specific to your organisation and assess its concrete risk. [15] This highlights that context plays a crucial role [7, p. 21], for example the data classification [16, p. 5] [17, p. 4] [18] [19, p. 1]: a web server which hosts publicly available data (classification: public) is not the same as one hosting data of a higher level of confidentiality (classification: internal, restricted, secret, …).
Dimensions that can be taken into account are:
The impact is also related to the time horizon (e.g. first 24 hours, 3 days, 7 days, etc.), but the time horizons depend on your industry sector and your organisation [25, pp. 165-167].
The Factor Analysis of Information Risk (FAIR) methodology distinguishes between primary loss and secondary loss.
The primary loss includes direct losses for primary stakeholders, e.g. loss of revenue, wages paid when no work is performed due to an outage, or replacement and restoration costs. [26, p. 37]
Secondary loss is about the potential reaction of secondary stakeholders to the primary event. This at first glance cryptic description encompasses reputational damage and fines. Although the loss itself is usually greater than primary losses, the frequency is quite small. [26, p. 38]
In the end, everything needs to be mapped to a value. As guidance or as a basis to “quantify the financial impact of cyber incidents […]” [27, p. 4], models such as FAIR-MAM, “an open, financial loss model”, can be used.
The likelihood describes the “chance of something happening” [10, p. 4] “from slightly above 0% to just below 100%” [13, p. 26].
There are several threat intelligence sources that can be used for the likelihood of the risk. We will look at them from a time perspective, starting from the past and looking into the future.
Certain Past:
CISA KEV: CISA maintains a catalogue of vulnerabilities “that have been exploited in the wild” [28]. “[T]he main criteria for KEV catalog inclusion, is whether the vulnerability has been exploited or is under active exploitation.” [29]
It can be used “as a ‘fire list’ to drive immediate remediation efforts” [30] and could lead to emergency patching, based on the other values supplied in the risk level calculation.
The KEV does not provide a likelihood, but you could either define a static likelihood for asset groups (especially internet-facing assets, vital assets, etc.) with findings that are listed in the KEV, or handle these outside the normal assessment flow, by stating a hard SLA with a time to fix defined by yourself or oriented on the due date of the KEV entry [31].
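One way the hard-SLA option could be operationalised is sketched below: KEV-listed findings bypass the normal assessment flow and get a fixed due date. The CVE set, the SLA durations and the internet-facing split are illustrative assumptions, not recommendations:

```python
from datetime import date, timedelta
from typing import Optional

# Hypothetical local snapshot of KEV-listed CVEs; in practice this would
# be synced from the CISA KEV catalogue.
KEV_CVES = {"CVE-2021-44228", "CVE-2023-4966"}

def kev_due_date(cve: str, internet_facing: bool, found: date) -> Optional[date]:
    """Return a hard remediation due date for KEV-listed findings,
    or None if the finding follows the regular risk-based flow."""
    if cve not in KEV_CVES:
        return None
    # Illustrative SLAs: 7 days for internet-facing assets, else 14 days.
    sla_days = 7 if internet_facing else 14
    return found + timedelta(days=sla_days)

print(kev_due_date("CVE-2021-44228", True, date(2025, 1, 1)))  # 2025-01-08
```

Alternatively, the due date supplied in the KEV entry itself could be used instead of a self-defined SLA.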
Due to the “limited scope of the CISA KEV (i.e., its focus on U.S. government systems and critical infrastructure)” [32, p. 4], other KEV lists or threat intelligence from security companies should be considered.
Uncertain Past:
The latest metric from NIST is “Likely Exploited Vulnerabilities” (LEV), which is described as a new metric to determine “the likelihood that the CVE has been observed to be exploited in the wild at some point in the past.” [32, pp. 1, 5]
It was developed to cover the past aspect, as “EPSS was designed to not include past vulnerability exploitation as an input into its model. […] resulting in inaccurate scores (i.e., probabilities) for vulnerabilities that have been previously exploited.” [32, p. 3] The likelihood is consistently underestimated for these scenarios [32, p. 1], and the EPSS “scores can spike for a very short period of time (1-2 days), then return to a moderate or low baseline. This presents potential problems for defenders using EPSS.” [33]
Uncertain Future:
Next in line is the Exploit Prediction Scoring System (EPSS), which “is estimating the probability of observing any exploitation attempts against a vulnerability in the next 30 days” [34]. The EPSS model provides a “probability score between 0 and 1 (0 and 100%)” and should be seen as “pre-threat intelligence”. [35]
Combining Forces:
Taking the best of all the described metrics, the likelihood could be set to the maximum of the three. This would lead to a likelihood of 100% when the CVE is on the KEV list, or otherwise to the maximum of the likelihood of exploitation in the next 30 days and the likelihood derived from the past. [32, pp. 1, 5]
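This combination rule is straightforward to express in code; a sketch, assuming the three signals are already available and that EPSS and LEV are probabilities between 0 and 1:

```python
# Combined exploitation likelihood as the maximum of three signals:
# KEV membership (treated as 100%), EPSS (next 30 days), LEV (past).
def combined_likelihood(on_kev: bool, epss: float, lev: float) -> float:
    return max(1.0 if on_kev else 0.0, epss, lev)

print(combined_likelihood(False, 0.12, 0.43))  # 0.43 (LEV dominates)
print(combined_likelihood(True, 0.12, 0.43))   # 1.0 (KEV listing dominates)
```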
Furthermore, additional aspects from the FAIR methodology could be considered, such as the “contact frequency”, which might differ between internal and internet-facing assets, and the “probability of action” [26, pp. 30-31, 41].
Another aspect that can be incorporated into the likelihood is the exposure factor [1, p. 12].
There is a difference whether a threat agent can easily come into contact with an asset (or a part of it, e.g. the web service) [40, p. 4], or whether the asset is in a segmented network area.
Therefore, it is important to incorporate the exposure of the affected component of the finding into the risk assessment.
The vulnerability level represents the technical severity and is usually represented by the CVSS score [36]. However, this depends a bit on the metrics used in the CVSS score [37, pp. 4-6], and of course on the version itself, which is v4.0 at the time of writing.
In order to provide context, you might want to automatically apply environmental metrics [37, pp. 19-20] to the finding based on the defined protection needs of the asset.
But what do you do if there is no CVSS score? As not all vulnerabilities have a CVE and hence no associated CVSS score, the score could be assessed by oneself with the CVSS calculator [38]. Depending on the number of findings, this might be a viable option. However, many tools used in penetration tests also provide at least a severity, which could then be mapped to a static score.
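Such a mapping from a tool-reported severity label to a static score could look like the following sketch; the label set, the score values and the default are illustrative assumptions:

```python
# Fallback when no CVSS score exists: map a tool-reported severity label
# to a static numeric score on the familiar 0-10 scale.
SEVERITY_TO_SCORE = {
    "info": 0.0,
    "low": 3.0,
    "medium": 5.5,
    "high": 8.0,
    "critical": 9.5,
}

def static_score(severity: str) -> float:
    # Unknown labels fall back to a mid-range score as a cautious default.
    return SEVERITY_TO_SCORE.get(severity.lower(), 5.0)

print(static_score("High"))  # 8.0
```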
The OWASP Risk Rating Methodology also takes into account the skill level, motive, opportunity and size of an assumed threat agent. [19]
However, this is one of the aspects that is rather static and might be left out when it is the same for all assets; alternatively, it could be assessed on another level, for example for asset groups, e.g. internet-facing and internal assets.
Another aspect that can be considered in the assessment is the role, to define a stakeholder specific risk.
Stakeholder-Specific Vulnerability Categorization (SSVC) is a methodology for prioritizing vulnerabilities based on the needs of the stakeholders involved in the vulnerability management process. SSVC is designed to be used by any stakeholder in the vulnerability management process, including finders, vendors, coordinators, deployers, and others. [39]
Although SSVC does not define a single factor to be included in the risk formula, but rather a specialised categorisation based on decision points, it might help to define a factor that can be included in the risk assessment.
The following risk equations are defined by SSVC:
While the previous sections of this chapter focused solely on assessing the risk, this section shifts the perspective. Rather than asking what could go wrong, we now look at what is already being done to prevent or contain it. Controls that offer protection (prevention), detection, response and recovery (corrective) [44, p. 26] [45, p. 8] [46, p. 3] can reduce the likelihood and/or impact of a risk [47, pp. 2,4].
There are general controls that aim at a modelled threat, and specific controls, which aim at the mitigation of a finding and hence of the risk.
The Control Analytics Model (CAM) extension to the FAIR model shows that preventive controls can reduce the loss event frequency, while detection and response actions address the loss magnitude. [49, p. 16] Controls need to be carefully selected, as they might introduce additional risks themselves. [48, p. 20]
While not always directly related to the vulnerability and the finding, a control status such as compliance (e.g. with CIS benchmarks, best practices, or other requirements) [50] could also be incorporated into the score.

Through the safeguards in place, the risk itself is treated [51, p. 2] through a mitigation factor [48, p. 9], resulting in a residual risk [9, pp. B-9].
The risk level before treatment is called inherent risk level, while the risk level after implementation of controls is called residual risk level. [52, pp. 33-35].
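As a sketch, the residual risk level could be derived from the inherent risk level via a linear mitigation factor. The linear model is a simplifying assumption for illustration; frameworks such as FAIR-CAM model control effects in more detail:

```python
# Residual risk after controls, via a mitigation factor between
# 0 (control has no effect) and 1 (control fully mitigates the risk).
# The linear reduction is an illustrative simplification.
def residual_risk(inherent_risk: float, mitigation_factor: float) -> float:
    if not 0.0 <= mitigation_factor <= 1.0:
        raise ValueError("mitigation factor must be within [0, 1]")
    return inherent_risk * (1.0 - mitigation_factor)

print(residual_risk(20.0, 0.6))  # 8.0
```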
Questions for assessing existing controls include, but are not limited to: [48, p. 20]
General controls do not aim at a vulnerability or finding; they aim at a modelled threat.
However, the general question remains whether such controls are effective against a new vulnerability and the concrete finding. There is no right or wrong for the assessment of the risk, as both schools of thought have valid arguments.
Nonetheless, if a control is applied to address a finding, this would only reduce the risk related to this specific finding.
Systems have many dependencies, which can be taken into account. Everything that is “connected” might increase the risk to the system. These threats from dependencies are also an important parameter for the risk of the assessed asset itself [53, pp. 28-29], not only from a business perspective, but also from a technical perspective [25, p. 184].
Looking from the technical system perspective: one of the required systems could be unavailable due to a vulnerability, which would then also affect the dependent system. Therefore, this indirect risk could also be considered.
Another example is the hypervisor. If the hypervisor is compromised, the VM can be accessed if there are no controls implemented, such as confidential computing.
Given that, such indirect dependency risk must be calculated for the whole chain once the risk of any link changes. Such a change could have a cascading effect.
There needs to be a definition of the factor with which the risk of dependencies is “inherited”, which might also depend on the controls implemented. Next to the inherited risk without any incident, an approach is needed for the inherited risk regarding dependencies with a) a suspected security incident under investigation and b) a validated security incident as long as it is not resolved. If related systems are compromised, there might be a broader attack surface towards that system. Therefore, risks of related systems need to be taken into account accordingly.
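A possible sketch of such inheritance, assuming a single fixed inheritance factor and a simple dependency graph; both are illustrative, and in practice the factor might differ per control situation and incident state:

```python
# Propagate an "inherited" share of risk along a dependency chain.
# Each system inherits a fraction (inheritance factor) of the total risk
# of the systems it depends on; factor and graph are illustrative.
def propagate_risk(own_risk: dict,
                   depends_on: dict,
                   factor: float = 0.5) -> dict:
    def total(system: str, seen: frozenset) -> float:
        inherited = sum(
            factor * total(dep, seen | {system})
            for dep in depends_on.get(system, [])
            if dep not in seen  # guard against dependency cycles
        )
        return own_risk[system] + inherited

    return {s: total(s, frozenset()) for s in own_risk}

risks = propagate_risk(
    {"web": 4.0, "db": 6.0, "hypervisor": 2.0},
    {"web": ["db"], "db": ["hypervisor"]},
)
print(risks["web"])  # 4.0 + 0.5 * (6.0 + 0.5 * 2.0) = 7.5
```

Because the inherited values are recomputed from the whole chain, a risk change at any link (e.g. a validated incident on the hypervisor) cascades to all dependents on the next evaluation.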
Even interactions of (recent) workstation systems could be taken into account, which leads us to the next topic, where next to other systems, the users can also be taken into account.
Let’s stay a moment at the workstation. The risk of a compromised normal user account (or one that currently has a high suspicion score) might not be very relevant, but the risk related to an admin user might be.
For certain findings, user interaction with certain privileges might be required [37, pp. 11-13]. User accounts with a relatively high suspicion level might increase the risk of a certain finding.
Then the risk of the user/admin could be taken into account in the risk assessment for applicable findings. The remaining question is how often this factor is updated, as a user score changes more frequently. There can be a static score (over a defined number of days) [54] and a real-time score (updated every 15 minutes) [55] based on the risk detections [56]. As the suspicion level of a user changes more frequently, the risk might also change depending on how it is incorporated into the overall risk score.
A service level agreement (SLA) is a “documented agreement between a service provider and a customer that identifies both the services required and the expected level of the service” [62, p. 107]. This can also be an agreement between the business and IT. However, this is no one-way street, as the business needs to contribute as well in order to enable this co-creation.
Such an expected and baselined service level [63, p. 202] can encompass different metrics such as, but not limited to:
How long should the resolution time for a potential problem for a vulnerability be?
That depends on the risk it poses. That is why the service level for the resolution is usually tiered. Systems which are accessible by a wide audience, e.g. internet facing systems, might have a higher priority, based on the importance to the business.
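A tiered resolution SLA could be expressed as a simple lookup keyed by risk level and exposure; all durations below are illustrative assumptions, not recommendations:

```python
# Tiered remediation SLA (days to resolve), keyed by risk level and
# whether the affected asset is internet-facing. Values are illustrative.
SLA_DAYS = {
    ("critical", True): 2,  ("critical", False): 7,
    ("high", True): 7,      ("high", False): 30,
    ("medium", True): 30,   ("medium", False): 90,
    ("low", True): 90,      ("low", False): 180,
}

def resolution_sla(risk_level: str, internet_facing: bool) -> int:
    return SLA_DAYS[(risk_level, internet_facing)]

print(resolution_sla("critical", True))  # 2 (days)
```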
How fast do I need to be?
While in 2019, CISA reported that “adversaries are able to exploit a vulnerability within 15 days, on average” [57], a more detailed picture was described in 2021. Although “less than 4% of the total number of CVEs have been publicly exploited”, threat actors are extremely fast, as “of those 4% of known exploited CVEs, 42% are being used on day 0 of disclosure; 50% within 2 days; and 75% within 28 days.” [58]
Even if “CISA updates the KEV catalog within 24 hours of known exploitation evidence” [58], attacks might already have hit you. Therefore, it might be important to check the systems affected by a KEV entry depending on their value to the business and the indirect risk they pose as a dependency.
Looking at vulnerabilities overall and not limiting them to the KEV, vendors report different results for the mean time to exploit. Qualys reported an average of 44 days for 2023 [59], while Fortinet reported a relatively steady average of 5.4 days for 2023 and 2024 [60, p. 6].
Nonetheless, Qualys highlighted that “[i]n numerous instances, vulnerabilities had exploits available on the very day they were published” [59]. 25 percent of high-risk vulnerabilities “were immediately targeted for exploitation, with the exploit published on the same day the vulnerability itself was publicly disclosed.” [59] Note that this refers to high risk and not CVSS severity.
Rapid7 summarized it as “widespread exploitation of major vulnerabilities has shifted from a notable event to a baseline expectation” [61]. Also, the assumption of a “long-tail” distribution, meaning that exploit attempts cluster around the time of disclosure and then fade out, seems to be a myth with regard to the perimeter of an organisation. [62]
How fast is fast enough?
“Based on how fast vulnerabilities can be exploited, organizations must be prepared to perform emergency remediation on key systems within hours of a vendor releasing a patch to address a vulnerability” [71]
There are only few guides, which actually provide hard timelines as recommendations.
The Essential Eight Maturity Model from the Australian Signals Directorate prescribes that a patch, update or other vendor mitigation for vulnerabilities in internet-facing services be applied “within 48 hours of release when vulnerabilities are assessed as critical by vendors or when working exploits exist”, through all maturity levels. [63, p. 24]
In the end it comes down to the risk and your risk appetite, which then needs to be translated into a service level.
Vulnerability management is an ongoing process. You cannot reach 0 findings or, more precisely, keep the state at 0 findings, as new ones are reported continuously. To face the harsh truth: “You cannot win at vulnerability management. You can only mature and get better at it.” [57]
This does not mean you should not aim for it, but you need to focus, prioritise and assess whether it is worthwhile, while finding a balance between the risk appetite and the resources spent. KPIs and views can help you with that.
Key Performance Indicators (KPIs) are special metrics that help us to “evaluate the success in meeting an objective” [58, p. 120].
Typically, you have a target value which you want to achieve, a boundary which defines an acceptable range, and thresholds that trigger warnings. Depending on your organisation and your context, you need KPIs tailored to the environment you operate in. Therefore, the graphic below labels them as metrics and does not provide any target values.
The hierarchy below shows some examples of metrics which can be used for KPIs, based on certain areas, e.g. scope, vulnerabilities, SLA, risk responses and risk itself. As these are just some examples, I will not elaborate on all the examples in the graphic provided. Furthermore, these metrics can be applied by different viewpoints (see also chapter Management Views)
As the name implies, these KPIs are the KEY performance indicators. Hence, you should not have too many of them, as these are intended for steering. That does not mean you cannot have more, as you can track further metrics.
We will start looking from the perspective of the scope and then go further by looking at the risk. From there, it makes sense to dive deeper into the risks present if thresholds are exceeded, to take a look at the further course of action (risk responses), whether these actions are still in time (SLA), and at the remaining risk causes (findings).
Generally, these KPIs can be applied to the organisation as a whole or used to zoom into specific areas, e.g. as defined in the scope diagram above (see Figure 1: Scoping) or the views below (see Figure 3: Metrics).

Scope:
The first important metric relates back to the scope: how many of the in-scope assets did we scan? You probably want to scan 100% of your assets, but it could be that one server was not reachable due to a change, or a new server was just deployed. So, it might be easy to define the threshold as 99%. But what if you have 10,000 servers? If you are allowed to miss 1%, it would mean you could miss up to 100 servers. So even here, the threshold needs to suit your organisation and be tailored to your environment. Maybe even distinctions based on the type of assets (e.g. vital, sensitive, publicly exposed, normal) are needed.
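To reflect this, the coverage KPI could check both a percentage threshold and an absolute cap on missed assets; a sketch with illustrative thresholds:

```python
# Scan coverage KPI: report the percentage AND the absolute number of
# missed assets, since 1% of 10,000 is still 100 servers.
# The default thresholds are illustrative assumptions.
def scan_coverage(scanned: int, in_scope: int,
                  min_pct: float = 99.0, max_missed: int = 10) -> dict:
    pct = 100.0 * scanned / in_scope
    missed = in_scope - scanned
    return {
        "coverage_pct": round(pct, 2),
        "missed": missed,
        "ok": pct >= min_pct and missed <= max_missed,
    }

# 99.5% sounds fine, but 50 missed servers exceed the absolute cap:
print(scan_coverage(9950, 10000))  # {'coverage_pct': 99.5, 'missed': 50, 'ok': False}
```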
The other aspect is how often you scan (scan frequency) [59], and whether there are any timely overlaps with other disciplines (e.g. patch management and backups).
Risk:
Depending on the level you zoom in to, you want to know which value streams, services, business processes, systems and assets exceeded their risk appetite. Furthermore, you want to know which risk value you currently accept in total and per level. As you know the risk value, you might want to know how likely the exploit prediction [36] is for any level or for certain groups over time. Lastly, you might be interested in the number of risks that materialised as issues and the impact they caused.
Risk Responses:
It is important to know which findings will be remediated through your regular patching processes and whether they will still be within the SLA. More interesting are the findings not addressed by these regular patching processes, or for which the regular cycle is not fast enough because the risk is not acceptable. Then you want to know which of these out-of-regular-patching findings are not addressed via a (standard change) ticket and how the mean time to resolve [60, p. 2] develops over time. Additionally, you want to track the progress of remediations over time [61, p. 2] and their effectiveness.
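The mean time to resolve is simple to compute from closed findings; a sketch assuming each finding carries an opened and a resolved date:

```python
from datetime import date

# Mean time to resolve (MTTR) in days over a set of closed findings.
# The (opened, resolved) tuples are illustrative input data.
def mean_time_to_resolve(findings: list) -> float:
    days = [(resolved - opened).days for opened, resolved in findings]
    return sum(days) / len(days)

closed = [
    (date(2025, 1, 1), date(2025, 1, 11)),  # 10 days
    (date(2025, 1, 5), date(2025, 1, 25)),  # 20 days
]
print(mean_time_to_resolve(closed))  # 15.0
```

Computing this per period (e.g. per month) yields the trend over time mentioned above.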
SLA (and service level):
Generally, you want to know how many findings are within the defined and agreed resolution period. Furthermore, the number and the impact of in-SLA and out-of-SLA vulnerabilities might be insightful.
Findings:
Which findings are generally most interesting? Usually those which pose an unacceptable risk or are related to assets with high exposure (e.g. internet-facing assets) and sensitive data. Furthermore, the top causes are interesting to see whether there is a structural problem which can be addressed to achieve an overall improvement.
Depending on your organisation and the setup of the vulnerability management program (speaking of roles), dedicated reports are needed to address the needs of the respective stakeholders. The reporting cycle needs to be defined, as well as how the results are communicated (e.g. dashboards) and presented to the management.
Furthermore, appropriate communication channels need to be established for defined alerts if there are changes to a risk, e.g. based on threat intelligence or a new critical CVE disclosure, that exceed the defined threshold and require immediate action [75, p. 2].
Different high-level views might provide insights to identify areas for improvement and quick wins. The different metrics and KPIs can be viewed by the scope items or levels, or by other relevant groups. Examples are depicted in the graphic below (see Figure 4: Views). These include, but are not limited to, environment type, environment tier, protection needs or special labels, e.g. vital and sensitive assets.

For example, the availability view could show only assets that have a higher need for availability and take into account the findings that could impact availability.
By looking at the different levels of the environment types, configuration drift might be identified: even though the number of instances can vary between the environments, the distinct CVEs might show differences between the environments or even between servers within the same environment.
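A sketch of such a drift check, comparing the distinct CVE sets per environment; the CVE identifiers below are placeholders, not real entries:

```python
# Compare the distinct CVEs per environment to spot configuration drift:
# environments built from the same baseline should show (near-)identical
# CVE sets, so any CVE outside the common set is a drift candidate.
def cve_drift(env_cves: dict) -> dict:
    common = set.intersection(*env_cves.values())
    return {env: cves - common for env, cves in env_cves.items()}

drift = cve_drift({
    "test":    {"CVE-A", "CVE-B"},
    "staging": {"CVE-A", "CVE-B"},
    "live":    {"CVE-A", "CVE-B", "CVE-C"},
})
print(drift["live"])  # {'CVE-C'}: only present in live, a possible drift
```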
The operational view needs to condense the information to enable focused action and avoid losing the overview due to too many findings.
Therefore, the operational view needs to group findings in order to deduplicate them, so that the focus can be set on remediation rather than being overwhelmed by sheer numbers. Effective grouping can be performed when findings share the same fix. [62, p. 7]
A simple example: there can be multiple vulnerabilities, e.g. in the log4j library. One finding shows that the vulnerability is only present up to version 1.2.17, but there might be another finding that is fixed in 1.25.1. If the remediation gets merged, the affected software can be directly updated to the latest version to avoid performing two updates. With the reduced amount of work and saved time, you might be able to remediate other findings.
After that, the software type (OS, Database, Application, Network, Storage, Hypervisor, etc.) is needed to assign it to the right team.
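The grouping described above can be sketched as follows, assuming each finding carries a software type and a fix description; the field names and values are illustrative:

```python
from collections import defaultdict

# Group findings that share the same fix (e.g. one library update) and the
# same software type, so each group maps to one action for one team.
def group_findings(findings: list) -> dict:
    groups = defaultdict(list)
    for f in findings:
        groups[(f["software_type"], f["fix"])].append(f)
    return dict(groups)

findings = [
    {"cve": "CVE-1", "software_type": "Application", "fix": "update log4j"},
    {"cve": "CVE-2", "software_type": "Application", "fix": "update log4j"},
    {"cve": "CVE-3", "software_type": "OS", "fix": "apply kernel patch"},
]
groups = group_findings(findings)
print(len(groups[("Application", "update log4j")]))  # 2 findings, one fix
```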
Especially when a new vulnerability management program is launched or an existing one is adjusted, not everything will run as planned, or some edge cases were not defined. Therefore, it is also important to gather feedback regularly to be able to assess the current status and implement improvements as necessary, feasible and within the resource constraints of the program.
A journey of a thousand miles begins with a single step. (Lao Tzu)
There is no need to feel overwhelmed by the possible scope of vulnerability management.
All models are wrong, but some are useful (George Box)
Depending on your context, there might be no need for the most complex assessment. There needs to be a balance between the effort and the value it delivers. More details do not necessarily mean more value, or at least not in an efficient relation. In the end:
Draft the skeleton, then breathe life into it.
Furthermore, not all areas need to be covered directly from the start. You could implement the process cycles first and then add building blocks which cover further parts of your scope, based on the priorities set by your organisation.
We need to find a way to handle findings and risks in an appropriate way.
Automation is key to securing our systems. Automated patching of all parts of the software stack is needed [64, pp. 41-42] lest our systems be exploited.
You might think “but we cannot just apply the patch. We need to ensure it does not break anything”. Yes, you are right.
Similar to the DevSecOps approach with a CI/CD pipeline for in-house developments, new versions (patches, updates) can be deployed sequentially and automatically across the various environments (e.g. test, staging, live), provided that the automated tests were successful. Quality gates or approvals can also be integrated, e.g. before the update is applied to the live environment.
This fast and automatic response requires a high maturity. However, it can also be limited, e.g. to internet-facing assets due to their exposure, or based on the risk level of the assets.

In this post we covered the WHY and the WHAT of vulnerability management. It is not possible to achieve something that is not defined. Therefore, the (legal, compliance, …) requirements need to be known, as well as the scope to which they apply.
SLAs and especially the time to remediate are a key aspect of target setting for any vulnerability management program. Attackers are fast and might exploit vulnerabilities as soon as they are disclosed. Therefore, assessing the risk related to your findings is important. However, just relying on the general CVSS score is not enough, as it takes neither your environment nor the likelihood into account.
Lastly, we took a glance at metrics and reporting, since setting the targets is one aspect; checking whether one is on track is the other, and that is what enables control.
Do not hesitate to contact us if you need support for your vulnerability management. Our experts are here to help!
Feel free to share your knowledge, experience and opinion with us in the comments below.

Sascha Artelt
Sascha Artelt is a member of the Cyber Strategy and Architecture team at NVISO. As a versatile security expert, he possesses comprehensive expertise across the entire lifecycle of cybersecurity projects. His proficiency spans from eliciting and defining requirements, through designing robust security solutions, to implementing and managing operational systems.
LinkedIn: Sascha Artelt
[1] OWASP Foundation, “OWASP Vulnerability Management Guide,” 2020.
[2] RAPID7, “The Ultimate Guide to Vulnerability Management”.
[3] ENISA, “Cybersecurity Certification V1.1.1,” 2021.
[4] BSI, “Technical Guideline TR-03185: Secure Software Lifecycle,” 2024.
[5] ISTQB, “Certified Tester Security Test Engineer Syllabus,” 2025.
[6] ISACA, COBIT 2019 Framework: Governance and Management Objectives, Schaumburg: Information Systems Audit and Control Association, 2019.
[7] CISA, “Cybersecurity Incident & Vulnerability Response Playbooks: Operational Procedures for Planning and Conducting Cybersecurity Incident and Vulnerability Response Activities in FCEB Information Systems,” 2021.
[8] NIST, “Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy,” 2018.
[9] National Institute of Standards and Technology, “NIST SP 800-30 Rev. 1 – Guide for Conducting Risk Assessments,” 2012.
[10] International Organization for Standardization, “ISO/IEC 27005:2022,” 2022.
[11] ENISA, “COMPENDIUM OF RISK MANAGEMENT FRAMEWORKS WITH POTENTIAL INTEROPERABILITY: Supplement to the Interoperable EU Risk Management Framework Report,” 2022.
[12] Carnegie Mellon University, “Current state of practice – SSVC: Stakeholder-Specific Vulnerability Categorization,” [Online]. Available: https://certcc.github.io/SSVC/topics/state_of_practice/. [Accessed 18 09 2025].
[13] PMI, The Standard for Risk Management in Portfolios, Programs, and Projects, Project Management Institute, 2019.
[14] Axelos, Management of Risk: Guidance for Practitioners: Guidance for Practitioners, The Stationery Office, 2010.
[15] BSI, “Risk levels for vulnerabilities,” 11 09 2025. [Online]. Available: https://www.bsi.bund.de/EN/Service-Navi/Abonnements/Newsletter/Buerger-CERT-Abos/Buerger-CERT-Sicherheitshinweise/Risikostufen/risikostufen.html.
[16] ISO, “ISO/IEC TS 38505-3: Information technology – Governance of data – Part 3: Guidelines for data classification,” 2021.
[17] NIST, “NIST IR 8496 – Data Classification Concepts and Considerations for Improving Data Protection,” 2023.
[18] Bundesministerium des Innern, “Vergleichstabelle der Geheimhaltungsgrade”.
[19] BSI (Community Draft), 2025.
[20] Carnegie Mellon University, “Mission Impact – SSVC: Stakeholder-Specific Vulnerability Categorization,” [Online]. Available: https://certcc.github.io/SSVC/reference/decision_points/mission_impact/.
[21] BSI, “Webkurs Notfallmanagement auf Basis von BSI-Standard 100-4”.
[22] BSI, “Glossar und Abkürzungsverzeichnis: BSI Standard 200-4,” 2023.
[23] BSI, Business Continuity Management BSI-Standard 200-4, Köln: Reguvis Fachmedien GmbH, 2023.
[24] OWASP, “OWASP Risk Rating Methodology,” 01 09 2025. [Online]. Available: https://owasp.org/www-community/OWASP_Risk_Rating_Methodology.
[25] Bitkom e.V., “Risk Assessment & Datenschutz-Folgenabschätzung: Leitfaden,” 2017.
[26] Carnegie Mellon University, “Safety Impact – SSVC: Stakeholder-Specific Vulnerability Categorization,” [Online]. Available: https://certcc.github.io/SSVC/reference/decision_points/safety_impact/.
[27] Carnegie Mellon University, “Public Safety Impact – SSVC: Stakeholder-Specific Vulnerability Categorization,” [Online]. Available: https://certcc.github.io/SSVC/reference/decision_points/public_safety_impact/.
[28] BSI, “BSI-Standard 200-2 IT-Grundschutz Methodology,” 2017.
[29] BSI, “IT-Grundschutz-Kompendium,” 2023.
[30] J. Freund and J. Jones, Measuring and Managing Information Risk: A FAIR Approach, Elsevier, 2014.
[31] FAIR Institute, “An Introduction to the FAIR Materiality Assessment Model (FAIR-MAM),” 2023.
[32] CISA, “Known Exploited Vulnerabilities Catalog | CISA,” 10 09 2025. [Online]. Available: https://www.cisa.gov/known-exploited-vulnerabilities-catalog.
[33] CISA, “Reducing the Significant Risk of Known Exploited Vulnerabilities | CISA,” 10 09 2025. [Online]. Available: https://www.cisa.gov/known-exploited-vulnerabilities.
[34] A. G. Isaacs, “Comparing NIST LEV, EPSS, and KEV for Vulnerability Prioritization,” 06 06 2025. [Online]. Available: https://www.aptori.com/blog/comparing-nist-lev-epss-and-kev-for-vulnerability-prioritization. [Accessed 10 09 2025].
[35] CISA, “KEV JSON Schema,” 25 06 2024. [Online]. Available: https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities_schema.json.
[36] P. Mell and J. Spring, “NIST CSWP 41: Likely Exploited Vulnerabilities: A Proposed Metric for Vulnerability Exploitation Probability,” 2025.
[37] J. Lee, “LEV: Demystifying the New Vulnerability Metrics in NIST CSWP 41,” 07 07 2025. [Online]. Available: https://www.greenbone.net/en/blog/lev-demystifying-the-new-vulnerability-metrics-in-nist-cswp-41/.
[38] First.org, “Frequently Asked Questions,” 01 09 2025. [Online]. Available: https://www.first.org/epss/faq.
[39] First.org, “EPSS User Guide,” 10 09 2025. [Online]. Available: https://www.first.org/epss/user-guide.
[40] FAIR Institute, “Factor Analysis of Information Risk (FAIR) Model,” 2025.
[41] First.org, “EPSS User Guide,” 01 09 2025. [Online]. Available: https://www.first.org/epss/user-guide.html.
[42] First.org, “Common Vulnerability Scoring System version 4.0 Specification Document,” 2024.
[43] First.org, “Common Vulnerability Scoring System Version 4.0 Calculator,” [Online]. Available: https://www.first.org/cvss/calculator/4-0.
[44] Carnegie Mellon University, “SSVC: Stakeholder-Specific Vulnerability Categorization,” [Online]. Available: https://certcc.github.io/SSVC/.
[45] Carnegie Mellon University, “Prioritizing Patch Creation – SSVC: Stakeholder-Specific Vulnerability Categorization,” [Online]. Available: https://certcc.github.io/SSVC/howto/supplier_tree/.
[46] Carnegie Mellon University, “Prioritizing Patch Deployment – SSVC: Stakeholder-Specific Vulnerability Categorization,” [Online]. Available: https://certcc.github.io/SSVC/howto/deployer_tree/.
[47] Carnegie Mellon University, “Prioritizing Vulnerability Coordination – SSVC: Stakeholder-Specific Vulnerability Categorization,” [Online]. Available: https://certcc.github.io/SSVC/howto/coordination_triage_decision/.
[48] Carnegie Mellon University, “Coordinator Publication Decision – SSVC: Stakeholder-Specific Vulnerability Categorization,” [Online]. Available: https://certcc.github.io/SSVC/howto/publication_decision/.
[49] Royal Canadian Mounted Police, “Threat and Risk Assessment Guide GCPSG-022 (2025),” 2025.
[50] ISO, “ISO 27002:2022,” 2022.
[51] NIST, “The NIST Cybersecurity Framework (CSF) 2.0,” 2024.
[52] P. Goyal, N. Sanna and T. Tucker, “A FAIR Framework for Effective Cyber Risk Management,” 2025.
[53] FAIR Institute, “An Overview of FAIR-CAM: FAIR Controls Analytics Model”.
[54] European Commission, “Description of the methodology – IT Security Risk Management Methodology v1.2,” 2020.
[55] AWS, “Calculating security scores,” [Online]. Available: https://docs.aws.amazon.com/securityhub/latest/userguide/standards-security-score.html.
[56] ISO, “ISO 31000:2018 Risk management – Guidelines,” 2018.
[57] European Commission, “Description of the methodology – IT Security Risk Management Methodology v1.2,” 2020.
[58] ENISA, “INTEROPERABLE EU RISK MANAGEMENT FRAMEWORK: Methodology for assessment of interoperability among risk management frameworks and methodologies,” 2022.
[59] Oracle, “Cloud Guard FAQ,” [Online]. Available: https://www.oracle.com/sa/security/cloud-security/cloud-guard/faq.
[60] Zscaler, “Understanding User Risk Score,” [Online]. Available: https://help.zscaler.com/zia/understanding-user-risk-score.
[61] Microsoft, “What are risk detections?,” 13 09 2025. [Online]. Available: https://learn.microsoft.com/en-us/entra/id-protection/concept-identity-protection-risks.
[62] Axelos, ITIL 4: Drive Stakeholder Value, The Stationery Office, 2020.
[63] Axelos, ITIL Foundation 4th Edition, The Stationery Office, 2019.
[64] Center for Internet Security, “CIS Critical Security Controls Version 8.1,” 2025.
[65] CISA, “CISA INSIGHTS: Remediate Vulnerabilities for Internet-Accessible Systems”.
[66] CISA, “BOD 22-01: Reducing the Significant Risk of Known Exploited Vulnerabilities,” 03 11 2021. [Online]. Available: https://www.cisa.gov/news-events/directives/bod-22-01-reducing-significant-risk-known-exploited-vulnerabilities.
[67] Qualys, “2023 Threat Landscape Year in Review: If Everything Is Critical, Nothing Is,” 19 12 2023. [Online]. Available: https://blog.qualys.com/vulnerabilities-threat-research/2023/12/19/2023-threat-landscape-year-in-review-part-one.
[68] Fortinet, “Global Threat Landscape Report 2025,” 2025. [Online]. Available: https://www.fortinet.com/content/dam/fortinet/assets/threat-reports/threat-landscape-report-2025.pdf.
[69] Rapid7, “2024 Attack Intelligence Report,” 2024. [Online]. Available: https://www.rapid7.com/globalassets/_pdfs/research/rapid7_2024_attack_intelligence_report.pdf.
[70] B. Nahorney, “The myth of the long-tail vulnerability – Cisco Blogs,” 30 10 2023. [Online]. Available: https://blogs.cisco.com/security/the-myth-of-the-long-tail-vulnerability.
[71] S. Moore, “Vulnerability Management should be based on Risk,” 23 06 2021. [Online]. Available: https://www.gartner.com/smarterwithgartner/how-to-set-practical-time-frames-to-remedy-security-vulnerabilities.
[72] Australian Government – Australian Signals Directorate, “Essential Eight maturity model,” 11 2023. [Online]. Available: https://www.cyber.gov.au/sites/default/files/2025-03/Essential%20Eight%20maturity%20model%20%28November%202023%29.pdf.
[73] J. Risto, “Vulnerability Management Maturity Model Part I: Taming the beast of Vulnerability Management.,” 06 07 2020. [Online]. Available: https://www.sans.org/blog/vulnerability-management-maturity-model.
[74] J. Risto, “Vulnerability Management Metrics: 5 Metrics to Start Measuring in Your Vulnerability Management Program,” 17 05 2021. [Online]. Available: https://www.sans.org/blog/5-metrics-start-measuring-vulnerability-management-program. [Accessed 11 09 2025].
[75] SANS, “Vulnerability Management Maturity Model,” 04 09 2023. [Online]. Available: https://sansorg.egnyte.com/dl/peDELVT16h.
[76] Cybersecurity Risk Foundation, “SANS Vulnerability Management Policy,” 04 2025. [Online]. Available: https://sansorg.egnyte.com/dl/3d1ckTSsSP.
[77] D. Shackleford, “A New Era in Vulnerability Management: A SANS Review of the Seemplicity Platform,” 2025.