In this post, we dive deeper into the HOW of vulnerability management. It is dedicated to the processes, providing a comprehensive overview.

In this chapter, we will have a look at the processes of vulnerability management. The Center for Internet Security defines separate controls for the management process and the remediation process [1, p. 41]. However, this can be even further divided into governance, management and operations. With this view in mind, we will look at the cascade from the WHAT and WHY to the HOW.
The focus is on the operational processes. However, as governance and management lay the foundation, we will look at them first.
The governance processes, framed by COBIT’s Evaluate, Direct, and Monitor (EDM) approach, serve crucial roles in organizational oversight.
Directing: The direct aspect involves setting requirements and determining the scope, as well as establishing targets. In my previous post in part 2a, we explored the topics of requirements and scoping, although not from a procedural viewpoint. Moreover, we identified regulatory and legal requirements in part 1. A significant focus is on the EDM03 process, which ensures risk optimization concerning the organization’s holistic risk management approach. This indirectly involves managing vulnerabilities.
Defining Scope: Beyond understanding the rationale behind actions, determining the precise scope of work is essential [2, pp. 4-5]. Clear distinctions between in-scope and out-of-scope items must be made to prevent misunderstandings.
Evaluation and Monitoring: Evaluation and monitoring processes focus on metrics and reporting [2, pp. 9-10], which were previously covered in part 2a and align with target-setting strategies. Metrics establish measurable targets and help assess current conditions. Effective communication of ongoing progress, trends, and necessary actions involves reports, dashboards, and meetings.
Command Chain and Escalation: Clear definitions of command hierarchy and escalation triggers are essential. While the organisational aspects (including an organisational chart) are addressed in the next blog post, the reporting and alerting aspects are defined in the operational process in this blog post, as escalation starts there and moves upwards.
Defining Roles and Interfaces: Responsibilities, accountabilities, and interaction with other IT service management disciplines must be explicitly defined.
Typically, these components are documented in policies and standards, with the governance level holding accountability [3, p. 2].
Translating strategic direction into operational procedures involves several key processes: planning and implementing necessary changes via projects, and continuously monitoring operations. According to COBIT, this spans major areas such as Align, Plan, and Organise (APO), Build, Acquire, and Implement (BAI), and Monitor, Evaluate, and Assess (MEA). These processes are addressed by area rather than detailed sub-processes.
Management’s Role in Operationalizing Strategy: The management layer must further refine and translate strategic directives from the governance level into actionable tasks to ensure alignment with organizational goals. This includes ensuring all components necessary for successful vulnerability management are planned, prepared, and ready [4] [5].
Guiding Questions for Refinement:
Roles, processes and tools naturally depend on what is covered; these need to be defined next.
Roles: Depending on the required activities, roles need to be defined, including accountabilities and responsibilities, a hierarchy for reporting and escalations, and the required skills and knowledge. Furthermore, the number of FTEs (full-time equivalents) needed for these roles must be estimated. There is no need for a one-to-one relationship, as one person can hold multiple roles, as long as they do not conflict with each other. [6, p. 92]
Processes and Interfaces:
While the general flow of processes is known, they need to be adjusted to the organisation's environment, as departments, roles and hierarchies might differ.
Furthermore, the interfaces towards other management disciplines need to be defined, and the required input or support needs to be ensured, e.g. the asset groups with impact assessments [2, pp. 8-9] and/or further details for the risk assessment and the enrichment [5, p. 2]. This also includes analysing potential (inter)dependencies with other management disciplines performed in your organisation.
Tools and Automation:
After defining the scope of action and the processes, the required tools and those which support the processes need to be defined [6, p. 92] [2, pp. 5-6] and tested accordingly [2, pp. 6-7].
Setting up automation where sensible can help to ensure timely and accurate processes and results.
Reporting and Metrics:
Break down governance KPIs into management and operational metrics to track progress toward objectives [2, pp. 9-10]. Regularly assess areas for improvement, such as monitoring the rate of false positives [2, pp. 14-15] and investigating once thresholds are exceeded or common patterns are identified.
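To illustrate breaking governance KPIs down into an operational metric, the following minimal sketch tracks the false-positive rate of a detection source and flags it for review once a threshold is exceeded. The 20% threshold and the record structure are illustrative assumptions, not values from the cited guides.

```python
# Sketch: monitor the false-positive rate of validated findings and flag the
# source for review once an (assumed) threshold is exceeded.

def false_positive_rate(findings):
    """Share of validated findings that turned out to be false positives."""
    validated = [f for f in findings if f["validated"]]
    if not validated:
        return 0.0
    false_pos = [f for f in validated if f["false_positive"]]
    return len(false_pos) / len(validated)

def needs_review(findings, threshold=0.2):
    """True if the false-positive rate exceeds the example 20% threshold."""
    return false_positive_rate(findings) > threshold

findings = [
    {"validated": True, "false_positive": True},
    {"validated": True, "false_positive": False},
    {"validated": True, "false_positive": False},
    {"validated": False, "false_positive": False},  # not yet validated
]
print(round(false_positive_rate(findings), 2))  # 0.33
print(needs_review(findings))                   # True
```

In practice, such a metric would be tracked per scanner or detection source over time, so that a degrading source can be investigated for common patterns.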
While the Deliver, Service and Support (DSS) domain in COBIT is considered a management area, it is also where operations take place. The goals still belong to management, but the execution itself is operational.
The OWASP Vulnerability Management Guide distinguishes between three cycles: "detection", "remediation" and "reporting" [2, pp. 17-19]. While these cycles originally include management and operational elements, we focus here on the operational part.

By breaking operations down into three cycles (also called domains), more manageable and repeatable parts are created. Nonetheless, these domains are interconnected, as their respective elements can provide inputs to elements of another domain and logically also receive their outputs. [2, p. 3]
Some aspects can and should be handled proactively (being prepared), while others are reactive and cannot be prepared in advance. For example, response actions can be predefined, so that in case of a newly discovered vulnerability and a detected finding, mitigation and/or remediation can be applied quickly. As we dive deeper into the following parts, the OWASP vulnerability management cycles are used as a structure, while the content below is not limited to this specific guide.
The detection cycle itself is made up of 3 parts:
Vulnerability management relies on asset and configuration management for a continuously up-to-date inventory [10, p. 10], which can be built up from the configuration management system and the definitive media library. This is not limited to the pure information that an asset exists. Information about the business context and the stakeholders, especially the business and technical owners, is required later for remediation (see chapter Process findings).
As there is a certain dynamic with assets and software, a realistic goal might be to use automation to maintain a close-to-comprehensive inventory. When your inventory runs out of sync with your actual environment, inaccurate, incomplete and/or missing information will increase the effort required for the vulnerability management program [10, p. 10] and consequently the costs [11].
Therefore, processes that ensure application enrolment can help from a process perspective [12, p. 14]. Some vendors might also provide machine-consumable data on their assets' software composition, such as a software bill of materials (SBOM), which could be used to augment organisational inventories [10, p. 10].
Although the asset inventory is the basis for many other practices, vulnerability management does not only use this system but can also contribute to it. Depending on the view and the way you define the responsibilities, other data sources can be used to validate the absence of shadow IT. The need to use vulnerability management to identify new assets depends on the capabilities of your IT asset management and which sources and processes it incorporates. At the very least, vulnerability management should not be limited to the assets defined in the asset inventory. It should rather use discovery scans to ensure the results match, and thereby contribute to an up-to-date asset and configuration management system.
While it is possible to identify new assets this way, essential information about the asset itself is still missing, e.g. its importance and protection needs, or the responsible persons (business and IT). We will look further at the importance of assets in the chapter Assess findings and prioritise risks.
As the asset inventory is a crucial source, multiple data sources should be used to ensure it remains up to date. Typical services in an enterprise context that could serve as sources are depicted in the picture below. Not all additional data sources need to be used; however, the selected sources should provide enough confidence that all assets are covered.
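As a minimal sketch of consolidating several such sources, the following example merges asset records from a CMDB and a network discovery scan into one inventory, and surfaces assets that are missing business context (shadow IT candidates). The field names, source list, and the idea of keying on a unique asset ID are assumptions for illustration.

```python
# Sketch: merge asset records from multiple sources into one inventory.
# Sources listed first are treated as authoritative; later sources only
# fill in missing attributes.

def merge_inventories(*sources):
    """Merge asset records by ID; later sources enrich earlier ones."""
    inventory = {}
    for source in sources:
        for asset in source:
            record = inventory.setdefault(asset["id"], {})
            for key, value in asset.items():
                record.setdefault(key, value)  # existing attributes win
    return inventory

cmdb      = [{"id": "srv-01", "owner": "team-a", "criticality": "high"}]
discovery = [{"id": "srv-01", "ip": "10.0.0.5"},
             {"id": "srv-99", "ip": "10.0.0.9"}]  # not in the CMDB

merged = merge_inventories(cmdb, discovery)
unknown = [aid for aid in merged if "owner" not in merged[aid]]
print(sorted(merged))   # ['srv-01', 'srv-99']
print(unknown)          # ['srv-99'] -> shadow IT candidate
```

Assets that appear only in discovery but not in the CMDB are exactly the cases where business context (owner, protection needs) is missing and needs to be established.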

When talking about detecting vulnerabilities, you might think a scan is all you need.
"A" scan is a good starting point, but there is no single "one" scan. Therefore, we look into the different ways HOW vulnerabilities can be detected and also at the time aspect (WHEN).
Vulnerability and threat intelligence is always the source for detection: if there is no information about a vulnerability, it cannot be detected in any way.

With the information about vulnerabilities, assets can be scanned to identify vulnerable ones and tested dynamically (DAST), which are active approaches; alternatively, a pre-built software inventory (e.g. using SBOMs, Software Bills of Material) can be compared against this intelligence, and the source code can be tested (SAST), which are passive approaches.
Other variants, such as penetration testing, responsible disclosure and bug bounty programs, lie somewhere in between, as they involve both active and passive aspects. The active aspects relate to doing something for the discovery of what might not be known yet, and hence cannot be detected but only discovered, while the passive aspects relate to receiving new information (e.g. a report), so the detection is rather passive from the organisation's view. Here, detection is used to find vulnerabilities that are already known, while discovery is used to find vulnerabilities that were previously unknown.
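The passive SBOM comparison can be sketched as follows. Real implementations match on identifiers such as CPE or purl and evaluate version ranges; this simplified example matches on exact package name and affected-version membership, which is an assumption, and the CVE identifier is a placeholder.

```python
# Sketch: compare SBOM components against vulnerability intelligence.
# Matching here is simplified to exact name + version-set membership.

def match_sbom(sbom_components, advisories):
    """Return (component, advisory id) pairs for affected components."""
    hits = []
    for comp in sbom_components:
        for adv in advisories:
            if (comp["name"] == adv["package"]
                    and comp["version"] in adv["affected_versions"]):
                hits.append((comp["name"], adv["id"]))
    return hits

sbom = [{"name": "libexample", "version": "1.2.0"},
        {"name": "otherlib", "version": "3.1.4"}]
intel = [{"id": "CVE-YYYY-0001",  # placeholder identifier
          "package": "libexample",
          "affected_versions": {"1.1.0", "1.2.0"}}]

print(match_sbom(sbom, intel))  # [('libexample', 'CVE-YYYY-0001')]
```

Because this approach only compares data, it produces no load on the asset itself, which is what makes it passive.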
The next post will dive deeper into these methods, as it focuses more on the technical part.
Next to the HOW of detecting vulnerabilities, it is also important to define the WHEN. Typically, you want two modes: cyclic detection and ad-hoc detection based on a trigger.
Cyclic detection:
Usually, you want to scan your assets (not only the hosts, but also your definitive media library) regularly for vulnerabilities, as new vulnerabilities are discovered every day.
Host scanning typically aligns with the patching cycle but occurs more frequently, since new vulnerabilities are published daily and may require out-of-cycle patches or other mitigations based on risk.
Next to this scheduled approach, an ad-hoc detection can be executed, either on request or, depending on your maturity of automation, triggered by events.
Requested or Triggered on event:
However, there are also situations where you need ad-hoc detections. Below, some examples of automated scanning based on events are described.
Change Tickets: Depending on the maturity of IT service management and the level of automation, you might be able to run automated scans based on closed changes. As soon as a change is closed, the associated assets (and configuration items) could be scanned. This provides accurate input for risk management, but also gives feedback on ongoing remediations, e.g. whether a patch was deployed in the dev environment.
Generally this can be applied to all types of changes, e.g.:
Stages in the secure software development lifecycle: Certain detections can be included as part of the secure software development lifecycle. Integrate them into the process to detect vulnerabilities as early as possible. However, vulnerabilities can also be detected later during the lifecycle of the software artefact itself. This includes, but is not limited to, supply chain threats (which can occur at several steps, see figure below) [13] or code that contains flaws [14] [15] [16] [17].

Figure 2: Supply Chain Threats [13]
Threat Intelligence: As soon as you receive a threat intelligence report that matches one of the software products you use, you might want to run a rescan for the affected assets. Depending on your level of automation, this might be a manual request or automatic.
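Both triggers above can be sketched as simple event handlers that queue an ad-hoc scan. The event shapes, the software inventory structure, and the `trigger_scan` stub are assumptions; in practice the stub would call the scanner's API.

```python
# Sketch: event-triggered ad-hoc detection for closed changes and matching
# threat intelligence reports.

def trigger_scan(assets, reason):
    """Stub standing in for a call to the scanner's API."""
    return {"assets": sorted(assets), "reason": reason}

def on_change_closed(change):
    # Scan every configuration item touched by the closed change.
    return trigger_scan(change["configuration_items"],
                        f"change {change['id']} closed")

def on_threat_intel(report, software_inventory):
    # Rescan only assets that run the software named in the report.
    affected = {asset for asset, products in software_inventory.items()
                if report["product"] in products}
    if affected:
        return trigger_scan(affected, f"threat intel: {report['product']}")
    return None  # nothing to do

job = on_change_closed({"id": "CHG-123",
                        "configuration_items": ["srv-01", "srv-02"]})
print(job["assets"])  # ['srv-01', 'srv-02']

inventory = {"srv-01": {"libexample"}, "srv-02": {"otherlib"}}
job = on_threat_intel({"product": "libexample"}, inventory)
print(job["assets"])  # ['srv-01']
```

The same dispatch pattern extends to other event sources, e.g. pipeline stages in the software development lifecycle.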
Once you have detected the "raw" findings, these need to be processed in an automated way.
There are several processing steps needed to prepare information that can then be used for the risk assessment and treatment. However, not only technical but also business-related information is needed. [10, p. 10]
These steps are not necessarily sequential, but for better readability, we assume they are.
Normalise:
De-Duplicate:
Aggregate:
Enrich and contextualise:
Filter:
With the help of the processing steps, including enrichment and filtering, it is possible to arrive at a pipeline in the shape of an upside-down pyramid, as depicted in the following figure as an example. This narrowing down helps to manage the volume of findings and arrive at actionable treatments (at which we will look in the next chapter).
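The processing steps above can be sketched as one small pipeline. The field names, the choice of keying duplicates on (asset, vulnerability ID), and the out-of-scope flag are illustrative assumptions; the aggregation step (e.g. grouping findings per asset) is omitted for brevity.

```python
# Sketch: normalise -> de-duplicate -> enrich/contextualise -> filter.

def process(raw_findings, asset_context):
    # Normalise: map scanner-specific fields onto a common schema.
    normalised = [{"asset": f["host"].lower(),
                   "vuln_id": f["cve"].upper(),
                   "severity": f.get("severity", "unknown")}
                  for f in raw_findings]
    # De-duplicate: the same vulnerability on the same asset counts once.
    unique = {(f["asset"], f["vuln_id"]): f for f in normalised}.values()
    # Enrich and contextualise: attach business context from the inventory.
    enriched = [{**f, **asset_context.get(f["asset"], {})} for f in unique]
    # Filter: drop findings on assets explicitly marked out of scope.
    return [f for f in enriched if not f.get("out_of_scope")]

raw = [{"host": "SRV-01", "cve": "cve-2024-0001", "severity": "high"},
       {"host": "srv-01", "cve": "CVE-2024-0001"},   # duplicate report
       {"host": "srv-02", "cve": "CVE-2024-0002"}]
context = {"srv-01": {"owner": "team-a"},
           "srv-02": {"owner": "lab", "out_of_scope": True}}

result = process(raw, context)
print(len(result))          # 1
print(result[0]["owner"])   # team-a
```

Three raw findings narrow down to a single actionable one, which is exactly the upside-down pyramid effect described above.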

The remediation cycle itself is made up of 5 parts:
Once we know that there is, or potentially is, a vulnerability, we need to assess the reports and/or scanning results to determine whether the finding is a true positive or a false positive, depending on the reliability of your source and scanning approach. Usually, you don't want to spend time and effort validating every potential finding.
As validation also consumes resources and time, you need to define criteria for when to validate findings. [2, p. 7] You might only want to spend time validating findings that:
Findings need to be assessed to determine [6, p. 93] and prioritise the risk [2, p. 13].
Prioritising helps to identify which findings actually matter for your organisation, as context plays a crucial role. [24, p. 21] Once a technical vulnerability has been identified, the resulting risk needs to be assessed. [6, p. 93]
We already discussed the difference between a vulnerability (the weakness identified in an artifact) and a finding (concrete instance of a vulnerability that poses a risk in the environment) in part 1. Prioritisation of the finding handling should be based on the risk it poses.
How the risk is assessed is usually provided by the GRC Team. The factors that can be taken into account were also discussed in part 2a.
A root cause analysis is not always needed. But when it is, it also needs to be decided, based on the risk, whether it is conducted before, in parallel with, or after the remediation actions.
When the security advisory suggests updating to a certain version, there is usually no root cause analysis needed. But there are situations in which you want to dive deeper. [3, p. 2] The following is a non-exhaustive list of examples:
Risk response (also called risk treatment) is the "process to modify risk". [25, p. 6]
Response actions have to be evaluated, tested [6, p. 93] and conducted in an audit-proof manner [2, pp. 10-11]. It is important to be prepared and have tested responses available [24, pp. 21-22] lest you lose time to respond.
As we learned in part 1, a risk comprises a threat and an opportunity at the same time. The threat is that an attacker exploits your assets, but the impact depends on the asset itself. The simplest example of the opportunity is that time and effort are spared, and something more valuable than fixing an unimportant finding can be done.
Whether the risk is acceptable or remediation or mitigation actions are needed is defined by the thresholds for the risk value.
The options for the treatment of the threat include [26, pp. 101-102] [27, p. 36] [2, pp. 12-14] [28, p. 7] [29, p. 13] [30, p. 8ff]:
However, it might be that a risk response cannot be decided at the current level and needs to be escalated, as the threat value exceeds the authority. Furthermore, depending on the level of residual risk, it might be beneficial to prepare contingency plans.
The other aspect is the timing of your response actions: [31] [10, pp. 11-12]
The timing agreed for the response actions typically depends on:
As an example, let's assume a vulnerability was detected and the finding poses a high risk to the organisation. If the next maintenance window is close and the asset is not internet-facing, remediation might be scheduled as part of the next maintenance window, as it is within the defined service level. But if the asset is internet-facing, it might receive immediate action as the service level prescribes, in order to keep the window of exposure as small as possible.
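The example above can be sketched as a small decision function. The specific service levels (immediate action for high-risk internet-facing findings, a 30-day SLA otherwise) are illustrative assumptions, not values from any standard.

```python
# Sketch: decide the timing of a response action from risk level, exposure,
# and the distance to the next maintenance window. SLA values are examples.

def response_timing(risk, internet_facing, days_to_maintenance, sla_days=30):
    """Return when a response action should be scheduled."""
    if risk == "high" and internet_facing:
        return "immediate"                  # minimise the window of exposure
    if days_to_maintenance <= sla_days:
        return "next maintenance window"    # still within the service level
    return "out-of-cycle change"            # waiting would breach the SLA

print(response_timing("high", internet_facing=True, days_to_maintenance=5))
# immediate
print(response_timing("high", internet_facing=False, days_to_maintenance=5))
# next maintenance window
print(response_timing("high", internet_facing=False, days_to_maintenance=60))
# out-of-cycle change
```

Encoding such rules explicitly makes the agreed service levels testable and auditable, rather than leaving the timing to case-by-case judgment.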
The risk of the risk response
Let's take risk management to a meta level, as it introduces risks itself. As you are changing the system with your risk response, there is the risk that your response leads to an issue. [32] [33, p. 7]
Even patching can be risky
“Patches can sometimes be flawed or even destructive in nature. For instance, the Spectre and Meltdown patches were known to cause systems to blue screen, become unstable, and often require a complete rebuild and reimage. This demonstrates that patches can sometimes create more problems than they solve.” [34]
Therefore, a testing approach and a rollout strategy are essential to detect such behaviour early enough (e.g. in the test environment) and stop the change. [10, pp. 5-7] Even within one environment, rollout strategies such as snowball (increasing the amount of distribution in each wave, with defined timeframes in between, based on the urgency, i.e. hours or days) can be used.
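The snowball strategy can be sketched as follows: each wave covers a growing share of hosts, with a waiting period in between to observe for problems. The wave shares and the wait time are illustrative assumptions that would be tuned to the urgency of the change.

```python
# Sketch: split hosts into snowball rollout waves of increasing size,
# with an observation period between waves. Shares and wait are examples.

def snowball_waves(hosts, shares=(0.05, 0.25, 1.0), wait_hours=24):
    """Split hosts into waves; each share is cumulative over all hosts."""
    waves, start = [], 0
    for share in shares:
        end = round(len(hosts) * share)
        if end > start:
            waves.append({"hosts": hosts[start:end],
                          "wait_after_hours": wait_hours})
            start = end
    return waves

hosts = [f"srv-{i:02d}" for i in range(20)]
waves = snowball_waves(hosts)
print([len(w["hosts"]) for w in waves])  # [1, 4, 15]
```

If the first small wave blue-screens, the rollout is stopped before 19 of the 20 hosts are affected, which is the point of the strategy.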
Although patching is commonly seen as the immediate solution for remediation, it’s important to acknowledge its limitations and consider alternative methods for addressing vulnerabilities. Instead of solely depending on patching, it’s crucial to explore all available options. [34] Compensating controls can also offer a more effective and comprehensive approach to securing systems and reducing the risk of threats and patch-related disruptions. [10, p. 9]
Alternatives include, but are not limited to:
Risk response actions should be well-tested, trained, and pre-approved as far as feasible; if you only start discussing and planning when the risk materialises, it might be too late. Furthermore, it is impractical to decide on the approach for each new risk and/or finding. By preparing in advance, swift decisions on the optimal risk response can be made when new risks emerge. [10, p. 9]
With the interface to IT disaster recovery and business continuity management, pre-approved general risk responses and escalation paths need to be defined based on the risk thresholds. To keep the effort low, certain assets can be grouped together and responses planned for such groups. [10, pp. 12-13]
Organizations should deploy applications on platforms where patching is integral to the technology, minimizing operational disruptions [10, p. 10]; e.g. containers with a canary deployment allow for a smoother "patching" (replacement) process. Otherwise, for legacy applications, consider approaches such as multiple instances behind a load balancer, so that the systems can be patched individually without degrading the service.
This pre-planning is especially important for unpatchable assets, whether this is temporary for a given time frame (mission-critical change freezes) or permanent due to end-of-life software that is still in use. [10, pp. 15-16]
To close the gap back to IT service management: depending on your thresholds for changes, you probably want to have standard changes ready rather than relying on normal changes, to avoid losing time when it is not an emergency change.
There are two ways to not respond to the (remaining) risk and accept it (sometimes also called risk retention [30, p. 10]):
The passive variant applies when the risk is below the defined tolerance [33, p. 6], where, taking all factors into account, it is not worth undertaking further actions, so that the risk can be retained. [35, p. 39] This decision is predefined by certain criteria.
The active variant is where an informed decision is made not to further treat a risk above the tolerance [33, p. 6] (sometimes also called threshold). These risks require a temporary exception (a temporary risk acceptance), which is valid for a defined period of time. [2, pp. 15-16]
Risk acceptance criteria should be set up considering the following categories with defined thresholds: [23, p. 14]
Generally, any kind of risk retention needs to involve risk financing, meaning "contingent arrangements for the provision of funds to meet or modify the financial consequences should they occur". [33, p. 7]
It is also important to communicate accepted risks and make them visible to the involved stakeholders, and to link the defined and accepted risk to the findings, to temporarily avoid further processing of these findings. While the accepted risk needs to be communicated, the recipients need to be limited, as this information could be abused by an (internal) attacker.
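The two acceptance variants can be sketched as a small decision helper: risks below the tolerance are retained passively by predefined criteria, while risks above it need an active, time-limited exception. The tolerance value and the 90-day review period are illustrative assumptions.

```python
# Sketch: passive vs. active risk acceptance against an example tolerance.

from datetime import date, timedelta

TOLERANCE = 4.0  # example risk-value threshold

def acceptance(risk_value, exception_granted=False, review_days=90):
    if risk_value <= TOLERANCE:
        # Passive: below tolerance, retained by predefined criteria.
        return {"variant": "passive", "retained": True}
    if exception_granted:
        # Active: informed decision, valid only until the review date.
        return {"variant": "active", "retained": True,
                "review_by": date.today() + timedelta(days=review_days)}
    # Above tolerance without an exception: must be treated or escalated.
    return {"variant": None, "retained": False}

print(acceptance(3.0)["variant"])                          # passive
print(acceptance(7.5, exception_granted=True)["variant"])  # active
print(acceptance(7.5)["retained"])                         # False
```

The explicit review date on the active variant is what makes the acceptance temporary and keeps it on the agenda for the next review cycle.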
After a remediation or mitigation action has been applied to address a risk, its effectiveness needs to be checked. For a patch, this means performing a rescan to validate that it actually remediated the finding. For mitigation actions, e.g. a virtual patch through an application-level firewall, publicly available exploits could be used to test whether these remain effective.
Accepted risks need to be reviewed regularly. Depending on the risk, the time until the next review should be set appropriately. At the next review cycle, it should be discussed if the situation and parameters have changed, so that it might be possible to perform mitigation or remediation actions. [2, pp. 15-16]
Furthermore, monitoring is needed [33, p. 8] in case the risk value changes, e.g. a new finding was detected, which in turn increased the risk. It is also important to check whether the risk materialised. Based on the risk, you might want to increase the visibility of the asset by increasing the detection capabilities to watch out for indicators of attack and indicators of compromise. [36, p. 22] The detection could be taken over by the security operations centre (SOC), while an investigation would usually be taken over by the incident response team. Other options include performing threat hunting for such highly rated accepted risks.
There are different types of reports [2, pp. 11-12] you need for different audiences, either on a regular or on an ad-hoc basis. Reporting does not only happen after the treatment, but can also happen earlier in the process.
The communication and reporting should be tailored to the respective needs of the target audience.
Reports for asset owner (business and technical view):
The business asset owner, on the non-technical side, needs a report that helps them understand the risk to the asset. Usually, they also have a technical counterpart, who requires the full details, e.g. whether there are already recommendations by the vendor. "Customizable reports are also desirable, allowing technical staff to view data in the desired context while reducing information overload." [37, p. 79]
Reports for the management
The report for the management needs to summarise the results on another level, but could also provide insights into dedicated areas of interest. These kinds of reports usually show the trend of vulnerabilities and the risks. [37, p. 79]
Next to the regular reports, alerting [3, p. 2] might be needed if there are changes to a risk, e.g. based on threat intelligence or a newly disclosed critical CVE, that exceed the defined threshold and require immediate action.
These thresholds have to be defined carefully, in order to avoid alert fatigue.
Escalations are not only needed when there are alerts, but also when findings run out of SLA, or are expected to, in order to gain support from higher levels in the hierarchy.
This blog post covered the process view of vulnerability management, with a focus on the operational processes. These are built up from three cycles, namely detection, remediation and reporting. While they are interconnected, it is important to understand that they do not run sequentially.
Furthermore, data enrichment, especially for assessing the risk of a finding, as well as contact information about the responsible persons (business and IT), is crucial; without it, risks cannot be addressed.
While only outlined, management and governance processes are important, as they provide the direction and targets, and ensure everything is set up and in place to execute the defined processes.
While I try to provide a comprehensive view in my posts about vulnerability management, there is no need to design the most complex processes.
It is always possible to start small and enhance them step by step where reasonable.
Do not hesitate to contact us if you need support for your vulnerability management. Our experts are here to help!
Feel free to share your knowledge, experience and opinion with us in the comments below.

Sascha Artelt
Sascha Artelt is a member of the Cyber Strategy and Architecture team at NVISO. As a versatile security expert, he possesses comprehensive expertise across the entire lifecycle of cybersecurity projects. His proficiency spans from eliciting and defining requirements, through designing robust security solutions, to implementing and managing operational systems.
LinkedIn: Sascha Artelt
[1] Center for Internet Security, “CIS Critical Security Controls Version 8.1,” 2025.
[2] OWASP Foundation, “OWASP Vulnerability Management Guide,” 2020.
[3] SANS, “Vulnerability Management Maturity Model,” 2025.
[4] J. Risto, “Vulnerability Management Maturity Model Part I: Taming the beast of Vulnerability Management.,” 06 07 2020. [Online]. Available: https://www.sans.org/blog/vulnerability-management-maturity-model.
[5] SANS, “Vulnerability Management Maturity Model,” 04 09 2023. [Online]. Available: https://sansorg.egnyte.com/dl/peDELVT16h.
[6] ISO, “ISO 27002:2022,” 2022.
[7] ENISA, “Coordinated Vulnerability Disclosure Policies in the EU,” 2022.
[8] BSI, “Leitlinie des BSI zum Coordinated Vulnerability Disclosure (CVD)-Prozess,” 2022.
[9] First.org, “Guidelines and Practices for Multi-Party Vulnerability Coordination and Disclosure,” 2020.
[10] NIST, “Guide to Enterprise Patch Management Planning: Preventive Maintenance for Technology (NIST SP 800-40r4),” 2022.
[11] Gartner, “Data Quality: Why It Matters and How to Achieve It,” [Online]. Available: https://www.gartner.com/en/data-analytics/topics/data-quality. [Accessed 22 10 2025].
[12] S. Pubal, “WEB APPLICATION VULNERABILITY,” SANS, 2024.
[13] The Linux Foundation, “Supply chain threats,” 2025. [Online]. Available: https://slsa.dev/spec/v1.1/threats-overview.
[14] NIST, “NIST Special Publication 800-218: Secure Software Development Framework (SSDF) Version 1.1,” 2022.
[15] OpenSSF, “Open Source Project Security Baseline,” 25 02 2025. [Online]. Available: https://baseline.openssf.org/versions/2025-02-25.
[16] D. Wichers, itamarlavender, will-Obrien, E. Worcel, P. Subramanian, kingthorin, coadaflorin, hblankenship, GovorovViva64, pfhorman, GouveaHeitor, C. Gibler, DSotnikov, A. Abraham, N. Rathaus and M. Jang, “Source Code Analysis Tools,” [Online]. Available: https://owasp.org/www-community/Source_Code_Analysis_Tools. [Accessed 24 09 2025].
[17] KirstenS, N. Bloor, S. Baso, J. Bowie, R. ch, EvgeniyRyzhkov, Iberiam, Ann.campbell, Ejohn20, J. Marcil, C. Schelin, J. Wang, Fabian, Achim, W. Dirk and kingthorin, “Static Code Analysis,” [Online]. Available: https://owasp.org/www-community/controls/Static_Code_Analysis. [Accessed 24 09 2025].
[18] C. Fouque, “Elastic on Elastic: How InfoSec uses the Elastic Stack for vulnerability management,” 22 02 2023. [Online]. Available: https://www.elastic.co/blog/how-infosec-uses-elastic-stack-vulnerability-management. [Accessed 10 09 2025].
[19] D. Shackleford, “A New Era in Vulnerability Management: A SANS Review of the Seemplicity Platform,” 2025.
[20] D. Brothers, “An Introduction to Risk-based Vulnerability Management (BRKSEC-1639),” 2023.
[21] PWC, “Vulnerability management: Why managing software vulnerabilities is business critical – and how to do it efficiently and effectively,” 2022.
[22] ENISA, “INTEROPERABLE EU RISK MANAGEMENT FRAMEWORK: Methodology for assessment of interoperability among risk management frameworks and methodologies,” 2022.
[23] European Commission, “Description of the methodology: IT Security Risk Management Methodology v1.2,” 2020.
[24] CISA, “Cybersecurity Incident & Vulnerability Response Playbooks: Operational Procedures for Planning and Conducting Cybersecurity Incident and Vulnerability Response Activities in FCEB Information Systems,” 2021.
[25] International Organization for Standardization, “ISO/IEC 27005:2022,” 2022.
[26] Axelos, Management of Risk: Guidance for Practitioners: Guidance for Practitioners, The Stationery Office, 2010.
[27] PMI, The Standard for Risk Management in Portfolios, Programs, and Projects, Project Management Institute, 2019.
[28] BSI, “BSI-Standard 200-3: Risk Analysis based on IT-Grundschutz,” 2017.
[29] ISO, “ISO 31000:2018(en) Risk management — Guidelines,” 2018.
[30] M. Schmidt, “Information security risk management terminology and key concepts,” Risk Management, 16 12 2022.
[31] Carnegie Mellon University, “Prioritizing Patch Deployment – SSVC: Stakeholder-Specific Vulnerability Categorization,” 11 09 2025. [Online]. Available: https://certcc.github.io/SSVC/howto/deployer_tree/#deployer-units-of-work.
[32] Carnegie Mellon University, “Risk Tolerance and Response Priority – SSVC: Stakeholder-Specific Vulnerability Categorization,” 11 09 2025. [Online]. Available: https://certcc.github.io/SSVC/topics/risk_tolerance_and_priority/.
[33] ISO, “ISO 31073:2022 Risk management – Vocabulary,” 2022.
[34] SANS Institute, “The Vulnerability Assessment Framework: Stop Inefficient Patching Now and Transform Your Vulnerability Management,” 05 05 2023. [Online]. Available: https://www.sans.org/blog/the-vulnerability-assessment-framework.
[35] European Commission, “Description of the methodology – IT Security Risk Management Methodology v1.2,” 2020.
[36] CISA, “Cybersecurity Incident & Vulnerability Response Playbooks Operational Procedures for Planning and Conducting Cybersecurity Incident and Vulnerability Response Activities in FCEB Information Systems,” 2021.
[37] W. Kandek, Vulnerability Management for Dummies, West Sussex: Wiley, 2015.
[38] NIST, “Security Content Automation Protocol,” 14 04 2025. [Online]. Available: https://csrc.nist.gov/projects/security-content-automation-protocol/. [Accessed 16 09 2025].
[39] Naval Information Warfare Atlantic, “What is OVAL? The OVAL Community Version 5.12.1 Documentation,” 19 09 2024. [Online]. Available: https://oval-community-guidelines.readthedocs.io/en/5.12.1_release/index.html. [Accessed 16 09 2025].
[40] Naval Information Warfare Atlantic, “Getting Started – The OVAL Community Guidelines 5.12.1,” 14 09 2024. [Online]. Available: https://oval-community-guidelines.readthedocs.io/en/5.12.1_release/getting-started.html. [Accessed 16 09 2025].
[41] Mitre, “OVAL – OVAL Use Case Guide,” 05 01 2012. [Online]. Available: https://oval.mitre.org/adoption/usecasesguide.html. [Accessed 16 09 2025].
[42] Naval Information Warfare Atlantic, “OVAL Design Principles – The OVAL Community Guidelines 5.12.1,” 19 09 2024. [Online]. Available: https://oval-community-guidelines.readthedocs.io/en/latest/oval-design-principles.html#oval-use-cases. [Accessed 16 09 2025].
[43] S. Springett, “Component Analysis,” [Online]. Available: https://owasp.org/www-community/Component_Analysis. [Accessed 16 09 2025].
[44] CISA and Partners.
[45] OWASP, “Software Component Verification Standard,” 2020.
[46] O. Santos, “CSAF, VEX, and SBOMs in Today’s Cybersecurity Acronym Soup,” 2022.
[47] T. Schmidt, “Vulnerability management with CSAF – why SBOM is not enough”.
[48] ISTQB, “static application security testing – ISTQB Glossary,” 31 10 2024. [Online]. Available: https://glossary.istqb.org/en_US/term/static-application-security-testing.
[49] Red Hat, “Securing the Code: Red Hat’s Comprehensive Strategies for Software Security,” 2024. [Online]. Available: https://access.redhat.com/security/security-testing-sdl-practice. [Accessed 16 09 2025].
[50] ISTQB, “Certified Tester Security Test Engineer Syllabus,” 2025.
[51] ISTQB, “static application security testing – ISTQB Glossary,” 31 10 2024. [Online]. Available: https://glossary.istqb.org/en_US/term/static-application-security-testing?term=SAST&exact_matches_first=true.
[52] BSI, “Management von Schwachstellen und Sicherheitsupdates,” 2018.
[53] BSI, “BSI-IT-Sicherheitsmitteilungen,” 01 09 2025. [Online]. Available: https://www.bsi.bund.de/DE/Themen/Unternehmen-und-Organisationen/Cyber-Sicherheitslage/Technische-Sicherheitshinweise-und-Warnungen/Cyber-Sicherheitswarnungen/cyber-sicherheitswarnungen_node.html.
[54] BSI, “Warn- und Informationsdienst – aktuelle Sicherheitshinweise,” 01 09 2025. [Online]. Available: https://wid.cert-bund.de/portal/wid/kurzinformationen.
[55] First.org, “Vulnerability Database Catalog,” 17 03 2016. [Online]. Available: https://www.first.org/global/sigs/vrdx/vdb-catalog.
[56] ENISA, “Compendium of Risk Management Frameworks with Potential Interoperability: Supplement to the Interoperable EU Risk Management Framework Report,” 2022.
[57] Carnegie Mellon University, “Current state of practice – SSVC: Stakeholder-Specific Vulnerability Categorization,” [Online]. Available: https://certcc.github.io/SSVC/topics/state_of_practice/. [Accessed 18 09 2025].
[58] BSI, “Risk levels for vulnerabilities,” 11 09 2025. [Online]. Available: https://www.bsi.bund.de/EN/Service-Navi/Abonnements/Newsletter/Buerger-CERT-Abos/Buerger-CERT-Sicherheitshinweise/Risikostufen/risikostufen.html.
[59] BSI, “BSI-Standard 200-2 IT-Grundschutz Methodology,” 2017.
[60] BSI, “IT-Grundschutz-Kompendium,” 2023.
[61] BSI, Business Continuity Management BSI-Standard 200-4, Köln: Reguvis Fachmedien GmbH, 2023.
[62] OWASP, “OWASP Risk Rating Methodology,” 01 09 2025. [Online]. Available: https://owasp.org/www-community/OWASP_Risk_Rating_Methodology.
[63] NIST, “Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy,” 2018.
[64] NIST, “NIST SP 800-30 Rev. 1 – Guide for Conducting Risk Assessments,” 2012.
[65] J. Freund and J. Jones, Measuring and Managing Information Risk: A FAIR Approach, Elsevier, 2014.
[66] FAIR Institute, “An Introduction to the FAIR Materiality Assessment Model (FAIR-MAM),” 2023.
[67] First.org, “Common Vulnerability Scoring System version 4.0 Specification Document,” 2024.
[68] First.org, “EPSS User Guide,” 01 09 2025. [Online]. Available: https://www.first.org/epss/user-guide.html.
[69] CISA, “Known Exploited Vulnerabilities Catalog | CISA,” 10 09 2025. [Online]. Available: https://www.cisa.gov/known-exploited-vulnerabilities-catalog.
[70] CISA, “Reducing the Significant Risk of Known Exploited Vulnerabilities | CISA,” 10 09 2025. [Online]. Available: https://www.cisa.gov/known-exploited-vulnerabilities.
[71] A. G. Isaacs, “Comparing NIST LEV, EPSS, and KEV for Vulnerability Prioritization,” 06 06 2025. [Online]. Available: https://www.aptori.com/blog/comparing-nist-lev-epss-and-kev-for-vulnerability-prioritization. [Accessed 10 09 2025].
[72] CISA, “KEV JSON Schema,” 25 06 2024. [Online]. Available: https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities_schema.json.
[73] P. Mell and J. Spring, “NIST CSWP 41: Likely Exploited Vulnerabilities: A Proposed Metric for Vulnerability Exploitation Probability,” 2025.
[74] J. Lee, “LEV: Demystifying the New Vulnerability Metrics in NIST CSWP 41,” 07 07 2025. [Online]. Available: https://www.greenbone.net/en/blog/lev-demystifying-the-new-vulnerability-metrics-in-nist-cswp-41/.
[75] First.org, “Frequently Asked Questions,” 01 09 2025. [Online]. Available: https://www.first.org/epss/faq.
[76] First.org, “EPSS User Guide,” 10 09 2025. [Online]. Available: https://www.first.org/epss/user-guide.
[77] Carnegie Mellon University, “SSVC: Stakeholder-Specific Vulnerability Categorization,” [Online]. Available: https://certcc.github.io/SSVC/.
[78] Carnegie Mellon University, “Prioritizing Patch Creation – SSVC: Stakeholder-Specific Vulnerability Categorization,” [Online]. Available: https://certcc.github.io/SSVC/howto/supplier_tree/.
[79] Carnegie Mellon University, “Prioritizing Patch Deployment – SSVC: Stakeholder-Specific Vulnerability Categorization,” [Online]. Available: https://certcc.github.io/SSVC/howto/deployer_tree/.
[80] Carnegie Mellon University, “Prioritizing Vulnerability Coordination – SSVC: Stakeholder-Specific Vulnerability Categorization,” [Online]. Available: https://certcc.github.io/SSVC/howto/coordination_triage_decision/.
[81] Carnegie Mellon University, “Coordinator Publication Decision – SSVC: Stakeholder-Specific Vulnerability Categorization,” [Online]. Available: https://certcc.github.io/SSVC/howto/publication_decision/.
[82] Royal Canadian Mounted Police, “Threat and Risk Assessment Guide GCPSG-022 (2025),” 2025.
[83] NIST, “The NIST Cybersecurity Framework (CSF) 2.0,” 2024.
[84] P. Goyal, N. Sanna and T. Tucker, “A FAIR Framework for Effective Cyber Risk Management,” 2025.
[85] FAIR Institute, “An Overview of FAIR-CAM: FAIR Controls Analytics Model”.
[86] S. Bakshi and E. Muthukrishnan, “Portfolio, Program and Project Management Using COBIT 5, Part 2,” 27 12 2017. [Online]. Available: https://www.isaca.org/resources/news-and-trends/industry-news/2017/portfolio-program-and-project-management-using-cobit-5-part-2.