Security Risk Assessments: Modeling and Risk Level Propagation

DANIEL ANGERMEIER, Fraunhofer-Institute AISEC, Germany
HANNAH WESTER, Fraunhofer-Institute AISEC, Germany
KRISTIAN BEILKE, Fraunhofer-Institute AISEC, Germany
GERHARD HANSCH, Fraunhofer-Institute AISEC, Germany
JÖRN EICHLER, Freie Universität Berlin, Institute of Computer Science, Germany

Security risk assessment is an important task in systems engineering. It is used to derive security requirements for a secure system design and to evaluate design alternatives as well as vulnerabilities. Security risk assessment is also a complex and interdisciplinary task in which experts from the application domain and the security domain have to collaborate and understand each other. Automated and tool-supported approaches are desired to help manage this complexity. However, the models used in systems engineering usually focus on functional behavior and lack security-related aspects. Therefore, we present our modeling approach, which eases communication between the involved experts and features steps of computer-aided modeling to achieve consistency and avoid omission errors. We demonstrate our approach with an example. We also describe how to model impact rating and attack feasibility estimation in a modular fashion, along with the propagation and aggregation of these estimations through the model. As a result, experts can make local decisions or changes in the model and see the impact of these decisions or changes on the overall risk profile. Finally, we discuss the advantages of our model-based method.

CCS Concepts: • Software and its engineering → Risk management; • Security and privacy → Security requirements; Software security engineering; Usability in security and privacy.

Additional Key Words and Phrases: Security Risk Assessment, Risk Analysis, Security Engineering, Model-based, Secure Design, Threat Modeling

1 INTRODUCTION

Security risk assessment (SRA) is a crucial part of requirements engineering and enables the systematic deduction of security requirements. Additionally, SRA supports the prioritization and execution of further security-related tasks in the engineering life cycle. Therefore, SRAs represent mandatory steps in many regulations and international standards (e.g., [34] as well as [17] for the automotive domain). However, risk assessment is a challenging task of high complexity, as all possible interactions (not only all specified interactions) with the system under development (SUD) may result in a violation of protection needs. Consequently, it is advisable to support the analyst with a computer-aided, model-based approach to avoid omissions or other "human errors" and to master the complexity. Generally, the data needed for a high degree of computer aid for SRA is often missing: defining functional and structural properties of a system is an integral part of the system's development, while modeling potential misuses is not.
Authors' addresses: Daniel Angermeier, daniel.angermeier@aisec.fraunhofer.de, Fraunhofer-Institute AISEC, Lichtenbergstraße 11, Garching, Bavaria, Germany, 85748; Hannah Wester, Fraunhofer-Institute AISEC, Lichtenbergstraße 11, Garching, Bavaria, Germany, 85748; Kristian Beilke, Fraunhofer-Institute AISEC, Berlin, Berlin, Germany; Gerhard Hansch, Fraunhofer-Institute AISEC, Lichtenbergstraße 11, Garching, Bavaria, Germany, 85748, gerhard.hansch@gmail.com; Jörn Eichler, Freie Universität Berlin, Institute of Computer Science, Takustr. 9, Berlin, Berlin, Germany, joern.eichler@fu-berlin.de.
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).
© 2022 Copyright held by the owner/author(s). 2378-962X/2022/11-ART. https://doi.org/10.1145/3569458

System models are designed with a given purpose in mind. As "all models are wrong, but some are useful" ([5]), existing models from system development only partially fit the purpose of security assessment. Some aspects may be over-represented, while others may be covered insufficiently. Therefore, we follow the hypothesis that a dedicated model for the security risk assessment creates a better match for the task. Nevertheless, a dedicated SRA model still represents a simplification of the SUD. This simplification is useful as it reduces complexity. Attackers, however, interact with existing systems. Thus, attackers are not affected by the model's limitations. Therefore, human expertise by a security analyst remains a necessity for proper SRAs to explore the attacker's options with human creativity. Our graph-based modeling approach aims to achieve the best of both worlds. While the full graph provides a rich set of relations between the elements of the SRA for computer-aided functionality, selected parts of the graph can also be visualized to supplement human experts. They further support the identification of critical paths and the derivation of effect chains from the model while keeping the human analyst in the loop. For example, the connections between components can be derived from the data flows and visualized in a diagram. Likewise, attack paths in the graph can be visualized similar to attack trees. Graphical representations also support the communication between developers, security experts, and other stakeholders (e.g., management), making the analysis comprehensible and verifiable for all involved stakeholders. This paper is based on previous work and practical experience in the automotive field (cf. [2, 4, 8]) and extends the work presented in [1]. We provide the following contributions:
• Refinement of an extended metamodel for security risk assessments to represent complex dependencies, based on the SUD, between security goals, threats, controls, and assumptions
• Definition of basic formal properties to evaluate these dependencies
• Ruleset for risk value calculation for all relevant elements
Benefits from our contributions include globally calculated estimations concerning the risk value for all relevant elements.
Thus, critical security goals, threats, assumptions, and controls can be identified and the consequences of design alternatives can be evaluated. Complementary to the effects of controls and assumptions on attack feasibility, our contributions provide means to represent and evaluate effects on security goals' protection needs. The remainder of this paper is organized as follows. After discussing related work in Section 2, we provide background information on the underlying risk assessment method in Section 3. We then present and demonstrate our graph-based modeling approach in Section 4, and describe the information flow and risk calculation in Section 5, before we conclude in Section 6.

2 RELATED WORK

To cope with the increased exposure to cyber-attacks, as demonstrated by [25], harmonized regulations and international standards establish SRAs as a mandatory activity of future development processes (cf. [17, 34]). The mandatory implementation of security risk assessment and management processes into the development, production, and post-production phases requires suitable methods. Generally, model-based risk assessment during development includes creating a model of the SUD, an impact assessment, and a threat assessment. An overview of eighteen different security requirements engineering approaches and techniques, including CORAS, Misuse Cases, Secure Tropos, and UMLSec, is provided by Fabian et al. [9]. Gritzalis et al. [12] compare validity, compliance, costs, and usefulness of popular risk assessment methods including EBIOS, MEHARI, OCTAVE, IT-Grundschutz, MAGERIT, CRAMM, HTRA, NIST SP800, RiskSafeAssessment, and CORAS. Most of these approaches do not provide a formal method in combination with a dedicated metamodel. One exception here is CORAS [22], a model-driven risk assessment method using a graphical notation based on a domain-specific language. The models are analyzed manually by human analysts, as CORAS opts for a dedicated graphical concrete syntax and does not define alternative representations, which are often preferred for larger models (cf. [21]).
A dedicated survey on attack and defense modeling approaches utilizing directed acyclic graphs is provided by Kordy et al. [19]. The surveyed approaches feature a focused perspective on security-specific properties and allow for calculations, e.g., of critical attack paths. However, they do not entail an integrated perspective of the SUD and its security properties aligned with typical artifacts from the development phase. According to the classification of [19], we present a defense-oriented approach that combines static parts for the system model with sequential parts for the risk analysis. Furthermore, we use a DAG for general security modeling and quantitative assessment, including conceptual and quantitative extensions in a semi-formal way. Parts and different iterations are implemented by commercial tools and applied by independent users to perform realistic assessments.
A state-of-the-art threat assessment method and basis for many cybersecurity risk assessment methodologies is STRIDE, standing for the six considered threat classes Spoofing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege that can be used to threaten security objectives [18]. Each threat class antagonizes at least one security property.
Spoofing a false identity violates the authenticity property of entities, tampering threatens the integrity of data and processes, repudiating a responsibility interferes with non-repudiation, e.g., of process interactions, information disclosure threatens the confidentiality of data and processes, denial of service threatens the availability of components, data, and processes, and elevation of privilege enables the unauthorized execution of actions. A prominent refinement of STRIDE is STRIDE-per-Element, which considers that certain threats are more prevalent for certain elements of a model, which facilitates threat identification in general by focusing on the most relevant threats [31]. Several recent security risk analysis frameworks combine STRIDE with attack trees, e.g., [13, 26].
Bayesian attack graphs are one method to assess security risks in IT networks and evaluate vulnerabilities, enabling Bayesian inference procedures for general attack graphs (cf. [19, 28, 37]). In the concept and development phase, whose support is the focus of the method presented here, there are usually no known vulnerabilities and only rarely weaknesses. Thus, the percentage values required for Bayesian networks cannot be determined directly and, in practice, usually lead to intense disagreement among the respective responsible parties. To evaluate the technical difficulty for an attacker to execute an attack, we instead use a qualitative scheme such as the Common Methodology for Information Technology Security Evaluation [6] to rate the required capabilities for each attack step.
A way to avoid the common problem of inconsistencies within SRAs is using an appropriate ontology. A good overview of risk assessment ontologies is provided by Souag et al. [33]. The authors compare many existing ontologies and create a new one, including additional high-level security concepts, axioms, and attributes. They generally specify their models in the Web Ontology Language (OWL) and apply a certain level of automation with queries using the Semantic Query-enhanced Web Rule Language (SQWRL, cf. [27]). Their target audience is requirements engineers and thus is not focused on risk assessment. In contrast to this high-level approach, we provide considerably more detail concerning the structure, relations, and propagation of ratings within the SRA. A similar approach is applied by [35] for the automated search for known vulnerabilities in incompletely or inconsistently described systems.
An approach that deduces recommendations for high-level security measures from assessed security risks can be found in [14]. Unlike our proposal, the presented risk analysis method and metamodel provide only a limited security risk evaluation. Such approaches might benefit significantly from the metamodel and methods presented here.
CAIRIS (Computer Aided Integration of Requirements and Information Security, https://cairis.org/) is a framework to manage security, usability, and design artifacts. It [10] achieves a certain level of automation and visualization. The framework's aim is much broader in scope as it also encompasses usability and requirements engineering activities. Regarding security risk analysis, the authors propose a broad ontology of concepts. CAIRIS expresses information similar to our approach. However, the implementation includes many concepts that
keep additional information but add effort for documentation, maintenance, and consistency (i.e., environments, vulnerabilities, obstacles, use cases). Details and conciseness of core concepts for SRAs (goals, threats, controls, and their interactions) seem to be impacted by the breadth of the approach. Similarly, we consider separating the security and the (functional) development domains to benefit tailored application.
A combination of UML-based information system modeling with Bayesian attack graphs for assessing attack probabilities is CySeMoL [32] and its extension P2CySeMoL [15]. The relational model and the inference engine built on top of it allow for evaluating "what-if" scenarios. Networks consisting of well-known components can be evaluated efficiently due to the predefined granularity of the components. While this approach enables modifications of the model during analysis, it does not support iterative dissection or damage transformations and hardly copes with new components. A further approach to formally describe security properties in a security risk analysis framework, based on model checking and a Markov decision process to determine risk probabilities, is presented by [23].
A proprietary framework for information risk analysis is the FAIR approach by [11]. It includes a taxonomy, a method for measuring the driving risk factors, and a computational engine to simulate relationships between these factors. Key factors in determining risks are the Loss Event Frequency, based on the Threat Event Frequency and the Vulnerability, and the Loss Magnitude, reflecting the impact. Due to the dependency on measurable and historical factors, initial risk assessments and non-metric environments pose severe problems for users. In contrast to the approach presented here, which focuses on the overall impact, FAIR is limited to risks for information assets.
A combination of the automotive Hazard Analysis and Risk Assessment (HARA) with STRIDE, intended to support the functional concept phase by a straightforward quantification of the impacts of threats and hazards, is the Security-Aware Hazard And Risk Analysis (SAHARA) method [24]. Similarly, the conventional Failure Mode and Effects Analysis (FMEA) is extended with vulnerabilities by the FMVEA method [30], focused on the technical concept phase of the development. While both these automotive-oriented methods rely on a model of the SUD, they use a top-down assessment approach, asking what a specific threat or safety hazard might be caused by. Furthermore, in their assessment, they consider neither interactions or sequences nor the effects of security measures and their propagation along the model.
A combination of FAIR, SAHARA, and FMVEA is the probabilistic RISKEE (Risk-Tree Based Method for Assessing Risk in Cyber Security) approach [20]. As the long form of the name indicates, it combines risk calculation with attack trees. Based on FAIR, the considered risk factors are frequency, vulnerability, and magnitude of vulnerabilities. A specialty of the RISKEE approach is the relation and visualization of the calculated and the acceptable risk as a loss exceedance curve.
Popular commercial tools for threat modeling are the Microsoft Threat Modeling Tool and foreseeti's SecuriCAD, which also includes probabilistic attack simulation. They both provide a graphical interface for modeling current and abstract IT environments and assessing potential security issues.
While the Threat Modeling Tool utilizes a STRIDE-based risk assessment method, SecuriCAD also supports evaluating possible attack vectors by Monte Carlo simulation. Both tools regard coarse-grained attack paths focusing on cloud and enterprise IT but lack attack feasibility factor propagation or damage transformation. Commercial tools are currently also developed to support SRAs in the automotive domain, including the YAKINDU Security Analyst and Ansys medini analyze (see https://docs.microsoft.com/en-us/azure/security/develop/threat-modeling-tool, https://www.foreseeti.com/securicad/, https://www.itemis.com/de/yakindu/security-analyst/, and https://www.ansys.com/products/systems/ansys-medini-analyze-for-cybersecurity).

3 BACKGROUND

The risk assessment approach used in this paper is based on the Modular Risk Assessment (MoRA) method (cf. [7, 8]). Figure 1 depicts four core activities of the method framework: "Model the Target of Evaluation", "Determine Protection Needs", "Analyze Threats", and "Analyze Risks". The first step decomposes the SUD into relevant functions, data elements, components, and data flows. The next step identifies security goals as combinations of assets detailed in the SUD and their required security properties. The third step identifies threats to the assets by systematically analyzing elements of the SUD. Additionally, actual or proposed controls can be added to mitigate identified threats. Impact and attack feasibility ratings for security goals as well as threats and controls are estimated in the last step. Risk levels are derived from those estimations. For more details on the application of MoRA, we refer to the aforementioned publications and [3, 4]. While this work is based on MoRA, we align the terminology in this paper with ISO/SAE 21434.
MoRA relies on an assessment model and catalogs to homogenize assessments within a common application domain. Thus, the assessment model and the catalogs represent a common ground for all stakeholders regarding core aspects of risk assessments like evaluation criteria or threat classes. Note that standards and regulations can suggest or define parts of the assessment model, such as the threat model given in Annex 5 of UN R-155, the UN Regulation on uniform provisions concerning the approval of vehicles with regard to cybersecurity and their cybersecurity management systems [36].
Our graph-based modeling approach, specified in Section 4 as an extension of our previous work, augments MoRA's representation. It facilitates the method implementation and is based on experience from several years of practical application in industrial development projects. The model and calculation rules are the basis for tooling, such as the YAKINDU Security Analyst. While we present our generic metamodel without a specific syntax in this paper, we successfully adapted the metamodel and tooling to accommodate the specific requirements for risk assessments in standards and regulations, such as ISO/SAE 21434 (cf. [17]) or IEC 62443 (cf. [16]).

4 METAMODEL

The following section presents the metamodel of our SRA model previously introduced in [1]. It encompasses a focused representation of the SUD itself as well as risk assessment-specific core concepts like security goals, damage scenarios, threats, controls, and assumptions. Providing all these elements in one model allows for the derivation and validation of relations between and properties of elements of the SRA aligned with the SUD (cf. [4]).
This facilitates comprehension and traceability. The core concepts are presented along MoRA's main activities, followed by a modeling example.

[Fig. 1. Main activities and core concepts in security risk assessments according to MoRA ("Model the System Under Development", "Determine Protection Needs", "Analyze Threats", "Analyze Risks"). Section 4 provides more details on the metamodel representing these core concepts.]

4.1 Model the System Under Development

The SUD model serves as the foundation for the analysis. It includes assets, which are required to understand the protection needs and potential damage scenarios. The SUD model also provides an overview of potential interactions with the SUD. This facilitates the elicitation of potential threats against it. Furthermore, by modeling the SUD in cooperation with domain experts, security analysts gain a solid understanding of the SUD. Likewise, as all arguments are rooted in the system model, explaining risks to domain experts is facilitated. Note that every risk assessment requires an abstraction of the analyzed item, i.e., a model, in the analyst's mind. Documenting the model and discussing it with domain experts improves the model's correctness in our experience.
The graph of the SUD consists of four sets, visualized as nodes, and the relations between these nodes, connecting them as edges, as shown in Figure 2. The four sets of nodes in the risk assessment graph represent the functions, data elements, components, and data flows of the SUD. Within each of these sets, the subelement relation ("is subdata of", "is subcomponent of", and "is subfunction of") represents a hierarchy between the elements. A component, for example, can be refined into its subcomponents (e.g., the component "vehicle" has subcomponents "brake ECU" and "airbag ECU", and "brake ECU" could consist of a subcomponent "software platform"). Data flows each have a sender and a receiver, resulting in a matching "has sender" and "has receiver" relation from the data flow to the sender and receiver component. Furthermore, the data flow has a "transmits" relation to one or more data elements. Note that the metamodel allows components that neither receive nor transmit data, if required. Components have a "stores" relation to locally stored data elements. This is mainly used for data that might never be transmitted, such as private keys for cryptographic operations. All relations between components and data are non-exclusive, i.e., components can send, store, and receive the same data element as other components. These relations are depicted in Figure 2. Based on these explicit relations, implicit relations can be derived: for each sender, a "produces" relation to the sent data elements; for each receiver, a "consumes" relation to the received data elements. Therefore, interface definitions of components can be derived from the data flow definitions. These implicit relations are always calculated from the existing data flow definitions and never defined explicitly, to avoid inconsistencies. Functions have a "maps to" relation to data elements, components, and data flows, as shown in Figure 2. These relations imply that functions are implemented by data processing and transmission, which in turn are executed by components. The functions thus depend on their mapped elements.
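The derivation of the implicit relations can be sketched in a few lines. The following Python fragment is a minimal, hypothetical encoding (the class and function names are ours, not part of the metamodel or of any tool): it computes the "produces" and "consumes" relations solely from the explicit data flow definitions, so the derived interfaces can never become inconsistent with the flows.

```python
from dataclasses import dataclass

# Hypothetical, simplified SUD element; names are illustrative, not the tool's API.
@dataclass(frozen=True)
class DataFlow:
    name: str
    sender: str            # component name
    receiver: str          # component name
    transmits: frozenset   # names of transmitted data elements

def derive_interfaces(flows):
    """Derive the implicit 'produces'/'consumes' relations from explicit data flows."""
    produces, consumes = {}, {}
    for flow in flows:
        produces.setdefault(flow.sender, set()).update(flow.transmits)
        consumes.setdefault(flow.receiver, set()).update(flow.transmits)
    return produces, consumes

# Example: the fictitious software update flow from Section 4.5.
flows = [DataFlow("SW Update [Server->Vehicle]", "Server", "Vehicle",
                  frozenset({"SW Update"}))]
produces, consumes = derive_interfaces(flows)
assert produces["Server"] == {"SW Update"}
assert consumes["Vehicle"] == {"SW Update"}
```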
We have chosen this representation for the SUD as it fully supports SRAs based on MoRA ([8]) while it also captures only information typically created during system development. For example, an SUD provided as a set of UML use case diagrams, component diagrams with information flows, and corresponding class diagrams can be used as input for the modeling activity. The functions can be extracted from use case diagrams, components and data flows from component diagrams, and data elements from class diagrams. Thus, well-established modeling languages can be used as input for the first step "Model the Target of Evaluation". Furthermore, the information captured in the SRA model can be "translated" back to UML with low effort, improving the communication between domain and security experts. Consequently, our SUD representation supports the mutual understanding and the collaborative creation of the SRA model by all stakeholders, maintaining an unambiguous reference for the following steps of the risk assessment.

[Fig. 2. Metamodel including functions: functions are implemented using data, components, and data flows.]

4.2 Determine Protection Needs

The protection needs are captured through the risk assessment-specific core concepts security goals and damage scenarios. Security Goals (SG) define security properties for assets, where "security property" denotes an asset's property, such as confidentiality, availability, or integrity. For the sake of simplicity, we focus on these three security attributes, but the method may be extended to any set of properties. Assets are modeled as elements of the SUD, reflected by the "is asset of" relation, as shown in Figure 4. For example, a medical system stores the data element "patient data". "Confidentiality of patient data" represents a security goal with the security property "confidentiality" and the asset "patient data". Note that our definition of "security goal" diverges from the similar term "cybersecurity goal" as defined in ISO/SAE 21434 [17]. As the standard does not provide a compact term for "cybersecurity property of an asset", we stick to "security goal", remaining consistent with our previous publications on the same topic.
Violation of a relevant security goal leads to one or more damage scenarios. This is denoted by the "violation causes" relation. A damage scenario is defined by a non-empty set of impact criteria. In our example, violation of the security goal "Confidentiality of patient data" causes the damage scenario "Unauthorized access to personal data", which entails the impact criterion "Substantial violation of laws" as an attribute. Impact criteria are part of the assessment model. The assessment model assigns an impact rating to each criterion. Impact criteria can be structured in impact categories, such as safety, financial, operational, or privacy. These four impact categories are required by [17] and were previously proposed in, e.g., [29]. The assessment model with the impact categories and corresponding impact criteria is adaptable to organizations and their field of operation.
Security goals might depend on other security goals.
For example, the availability of a function depends on the availability of a component executing the function. If the availability of the component is violated, the availability of the function is also violated. Consequently, the second security goal depends on the first. These dependencies can be independent of each other or require several dependencies to be violated. For example, if two independent sources provide a data item, then the security goal "Availability of data item" is violated only if the security goals "Availability of the first source" AND "Availability of the second source" are both violated. For the graphical representation of this example, see Figure 3.

[Fig. 3. Dependencies of security goals. The left side shows how the "Availability of data item" can be violated only if the availability of source 1 AND source 2 are not given. The right side depicts a case where violating a single dependency is sufficient to violate the security goal at the top.]

We introduce the element "Combined Security Goals" to define these dependencies. A security goal depends on an arbitrary number of mutually independent "Combined Security Goals" nodes. Each "Combined Security Goals" node then relates to one or more security goals. Note that arbitrary logical expressions with AND and OR, as often seen in classical attack trees, can always be transformed into disjunctive normal form (DNF) to fit this metamodel, including specific sequences of attacks.

4.3 Analyze Threats

The threat analysis is captured through the risk assessment-specific core concepts of threats, controls, and assumptions. Security goals are threatened by combinations of threats, as depicted in Figure 4. Similar to the dependency on other security goals, threats can either threaten a security goal independently of each other or require other threats to also execute successfully. For example, the integrity of a function can be threatened by eavesdropping on a message as attack preparation AND by subsequently replaying the eavesdropped message. We introduce the element "Combined Threats" to define these dependencies. A security goal is threatened by an arbitrary number of mutually independent "Combined Threats" nodes. Each "Combined Threats" node then relates to one or more threats. Threats provide the following attributes:
• Attack feasibility factors help to estimate the attack feasibility rating to realize the threat. The attack feasibility factors themselves are defined in the assessment model and, therefore, can be adapted to any standard or organizational needs. For example, the CEM [6] defines five attack feasibility factors for the estimation of the "required attack potential", i.e., Elapsed Time, Expertise, Knowledge of the TOE, Window of Opportunity, and the necessary Equipment, along with a set of predefined values (e.g., "Layman") and corresponding numeric values for each attack feasibility factor.
• Threatened security properties defines the security properties a threat might violate.
For example, the threat "information disclosure" threatens the security property "confidentiality".
Threats act on the physical manifestation of the SUD. The physical aspects of the SUD are modeled as components and data flows (including wireless transmissions) as described in Subsection 4.1, while data and functions are processed by these elements. The "acts on" relation is not required to model a risk assessment, but is useful to help human analysts understand the threats. Additionally, the model of the SUD combined with the threatened security properties can be used to identify and validate potentially violated security goals. For example, it is plausible to assume that a threat "information disclosure" on a data flow threatening the confidentiality of the transmitted data items affects the confidentiality of functions mapped onto the data items. Vulnerabilities are not represented with a metamodel element. In the context of MoRA, a threat causing a relevant risk that is not mitigated poses a vulnerability of the SUD.
(Cybersecurity) controls and assumptions mitigate threats. Combinations of these elements are represented by a "Combined Mitigations" node type. In some cases, controls and assumptions may be similar due to technical circumstances. Technical measures like channel encryption used by the SUD are usually modeled as controls. By contrast, laws of nature, responsibilities or controls of third parties, attacker capabilities, and the limits of the analysis are documented as assumptions. Controls and assumptions, similar to threats, provide attack feasibility factors and protected security properties. The attack feasibility factors facilitate the estimation of the control's or assumption's effect on the attack feasibility of related threats. Section 5 provides details on how to combine the attack feasibility factors in the SRA model.
Additionally, controls as well as assumptions may cause changes in the impact of the violation of security goals, modeled as a damage transformation. In this case, a damage scenario is replaced by another damage scenario or entirely removed by assigning no transformation target. For example, suppose a valve in a factory is controlled by network messages. In that case, an attacker might manipulate these messages to violate the security goal "integrity of valve control" and cause the damage scenario "explosion of a pressure tank". The control "opening the valve on locally measured high pressure" cannot prevent this manipulation, but effectively transforms the (source) damage scenario into the less critical target damage scenario "production outage". Assumptions may be used to bring information into the model that has not been explicitly modeled in the SUD but is important in its effect on the analysis, such as limitations of the assumed attacker model. Unlike controls, assumptions do not depend on security goals.
MoRA also supports catalogs for threat and control classes (cf. [8]). These classes may entail a pre-assessment of attack feasibility factors, estimating the attack feasibility to execute the threat or break the control. The threats and controls in the SRA can use these pre-assessed values but also override them to reflect the more specific context of the SRA.
In addition to providing a common ground for security risk assessments, these catalogs also support the analyst in "not overlooking" known threats and in validating conformity to threat and control catalogs prescribed by regulation.
Controls are implemented by the SUD and may thus depend on its security goals. For example, the control "digital signature" requires a component to create the signature and another component to check its validity. Consequently, instead of breaking the signature, an attacker can try to violate the security goal "confidentiality of the private key" on the signing component or the security goal "integrity of the certificate" on the component executing the signature check. Both attacks can circumvent the control "digital signature". We model this by introducing a dependency of controls on security goals or combinations of security goals, again using the "Combined Security Goals" node type. Consequently, impacts caused by the loss of confidentiality of cryptographic keys do not have to be estimated directly but are reflected by additional attack paths on controls. Therefore, if the impact rating for security goals of the protected functions, data items, or components changes, then this change is consistently reflected in the risks caused by attacks on cryptographic keys.

[Fig. 4. Complete metamodel including controls and assumptions: threats can be mitigated by controls, assumptions, or combinations of these.]

4.4 Analyze Risks

Figure 4 depicts the full metamodel with all node types and their relations to each other. We do not model risks as separate elements. Instead, risk levels can be determined for every security goal, threat, control, or damage scenario as described in Section 5. A risk level is determined by the combination of potential damages (impact rating) and the attack feasibility rating to cause these damages. The impact rating is determined by the impact criteria originating from the damage scenarios related to a risk. The attack feasibility rating is determined by the attack feasibility factors of the threats and the attack feasibility factors of the controls mitigating them.
In our practical experience, this model of the SUD, represented by functions, data, components, and data flows, is well-suited for SRAs and easy to understand for system developers. All nodes in the risk assessment core concepts relate to this model. Security goals are properties of the SUD. Threats and controls act on the SUD, modeling the interaction with the system. As outlined in [3], the relations between the risk assessment elements can be validated by tracing them back to the model of the SUD.
Similarly, [3] provides a method to propose new nodes and relations based on the model of the SUD. Consequently, the creation of risk assessment elements can partially be automated, requiring the analyst to check and modify the proposals and to specify proposed elements further.

4.5 Modeling Example

Figure 5 depicts an instance of the metamodel for a fictitious software update function. Note that the elements in the risk assessment and their relations can be defined without actually providing a graphical representation, e.g., in a tabular representation. This is important, as a full graph for a complete risk assessment possesses high complexity, owing to a large number of nodes with many relations between them. Thus, a complete graphical representation is typically difficult to process for a human analyst. However, plotting selected parts graphically is helpful in our experience. Consequently, we chose a small example to highlight our approach's key features in a manageable fashion. Generally, we do not prescribe a dedicated concrete syntax for instances of the metamodel, as requirements differ between application domains and organizational environments. Figure 6 depicts a screenshot of the itemis YAKINDU Security Analyst. The Security Analyst provides different concrete syntaxes to work on instances of the metamodel. A textual concrete syntax for threats is displayed in the upper half of the screenshot (titled "attack step" in that syntax). The lower half demonstrates a graphical concrete syntax of the SUD.
The example describes a fictitious software update function in which a server pushes an update into a vehicle. Violation of the security goal "Integrity of the update function" can lead to a safety-related damage scenario "Uncontrollable vehicle" as well as a damage scenario "Unauthorized tuning" related to financial losses. This security goal is threatened by two independent attack paths: the first attack path encompasses the combination of the threat "Reverse Engineering" AND the threat "Man-in-the-Middle attack (mobile)". The latter threat acts on the mobile data flow between server and vehicle. The control "AES GCM" protects the confidentiality and the integrity of the transferred data and thus mitigates the Man-in-the-Middle (MitM) attack. However, the control also depends on the confidentiality of the data item "AES key". In our example, all vehicles share the same symmetric key. Therefore, the security goal "Confidentiality of AES key" is threatened by a key extraction attack on a single vehicle.
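To make the graph structure of this excerpt tangible, the following sketch encodes the first attack path and the control's dependency as plain data. It is a hypothetical encoding for illustration only; the dictionary keys and field names are ours and do not correspond to any concrete syntax of the metamodel or the tooling, and the Combined Mitigation nodes are collapsed for brevity.

```python
# Hypothetical, simplified encoding of the example excerpt (not a tool format).
security_goals = {
    "SG1": {"property": "Integrity", "asset": "Distribute SW Update",
            "violation_causes": ["Uncontrollable vehicle", "Unauthorized tuning"]},
    "SG2": {"property": "Confidentiality", "asset": "AES key",
            "violation_causes": []},
}
threats = {
    "Reverse Engineering": {},
    "Man-in-the-Middle attack (mobile)": {"acts_on": "SW Update [Server->Vehicle]"},
    "Key extraction": {"acts_on": "Vehicle"},
}
# CT1 combines two threats that must both succeed (AND); CT3 threatens the AES key.
combined_threats = {
    "CT1": {"encompasses": ["Reverse Engineering", "Man-in-the-Middle attack (mobile)"],
            "threatens": "SG1"},
    "CT3": {"encompasses": ["Key extraction"], "threatens": "SG2"},
}
# The control mitigates the MitM threat but depends on SG2 via a Combined Security
# Goals node, so violating SG2 is an alternative way to defeat it.
controls = {
    "AES GCM": {"mitigates": "Man-in-the-Middle attack (mobile)",
                "depends_on": {"CSG1": ["SG2"]}},
}
```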
[Fig. 5. This example shows an excerpt of an instance of our graph for a risk assessment of a fictitious software update function. It depicts the SUD model (server, vehicle, the "SW Update" data flow and data items, and the stored "AES key") together with the security goals, the combined threats CT1-CT3, the combined mitigations CM1 and CM2, the control "AES GCM", the assumption "Tuning only", and the damage transformation DT1.]

[Fig. 6. A screenshot of concrete syntaxes provided by YAKINDU Security Analyst 21.1.]

The second attack path complements the MitM attack on the data flow between server and vehicle with an attack on a data flow inside the vehicle, but still requires reverse engineering by the attacker. The control "AES GCM" does not protect data flows inside the vehicle, as it only acts on the data flow between server and vehicle. In this example, we also limit the attacker model to tuning-related attacks when physical access is needed. Consequently, the assumption "Tuning only" transforms the damage scenario "Uncontrollable vehicle" into the damage scenario "Unauthorized tuning". Note that a damage transformation can also remove a damage scenario completely.

5 PROPAGATION RULES AND RISK CALCULATION

In the previous sections, we defined the metamodel and provided an example for an instance of the graph. In this section, we provide rules to calculate risk levels for a specific graph. First, we give an intuition of the idea. Then we formalize the actual calculation based on the metamodel elements instantiated in a graph. We conclude this section with an example.

5.1 Intuition

In contrast to other SRA models, we aim to calculate the risk level for security goals, threats, controls, and damage scenarios individually. It is desirable to identify risk levels for all these elements, as identifying the most critical threats, the security goals and assets at highest risk, the weakest links among the controls, or the most critical damage scenarios all represent valuable information in making risk treatment decisions.
Calculation of risks requires two inputs: an estimation of the attack feasibility rating (based on attack feasibility factors) and an impact rating. These inputs are defined as attributes in separate metamodel elements of a graph instance: threats and controls entail attack feasibility factor attributes. Damage scenarios entail impact criteria as attributes. Impact criteria attributes, in turn, are mapped to impact ratings in the assessment model. This results in specific impact ratings being available in damage scenarios. Furthermore, controls and assumptions may cause a damage transformation selecting relevant damage scenarios.
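How these two inputs are finally combined into a risk level is defined by the assessment model rather than by the metamodel. As a purely illustrative sketch, the following assumes a simple ordinal risk matrix; the scales and matrix values are our assumptions and not prescribed by MoRA or any standard.

```python
# Purely illustrative risk matrix; scales and values are assumed, not prescribed by MoRA.
IMPACT_LEVELS = ["negligible", "moderate", "major", "severe"]
FEASIBILITY_LEVELS = ["very low", "low", "medium", "high"]

# Rows: impact (negligible..severe); columns: attack feasibility (very low..high).
RISK_MATRIX = [
    [1, 1, 2, 2],   # negligible
    [1, 2, 3, 3],   # moderate
    [2, 3, 4, 4],   # major
    [2, 3, 4, 5],   # severe
]

def risk_level(impact: str, feasibility: str) -> int:
    """Look up an ordinal risk level (1 = lowest, 5 = highest) in the assumed matrix."""
    return RISK_MATRIX[IMPACT_LEVELS.index(impact)][FEASIBILITY_LEVELS.index(feasibility)]

assert risk_level("severe", "high") == 5
assert risk_level("negligible", "very low") == 1
```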
We use the relations between the risk assessment-specific elements in the graph to combine these attributes and calculate risk levels. The combination of attributes brings together the two required inputs for risk calculation. Note that the metamodel technically allows for the definition of circular dependencies, but for the presented approach, the metamodel instance must be a directed acyclic graph (DAG). In our practical experience with real-life projects, this does not impose relevant limitations on the modeling capabilities.
The basic idea is to let the values of attributes flow, or propagate, through the graph. Figure 7 shows the propagation of attack feasibility factors and damage transformations. Figure 8 shows calculated risk values propagating in the opposite direction along the edges. The sequence of nodes from a control, assumption, or threat to another node creates an attack path towards that node. Note that we call every such path of any length an attack path. We calculate risks for every attack path towards a security goal. Any of the following types of nodes can be on an attack path and receive as well as propagate values: Security Goal, Combined Security Goals, Assumption, Control, Combined Mitigation, Threat, Combined Threats. The metamodel elements not listed (Damage Scenario and Damage Transformation) are used to calculate risk values, but are not part of attack paths themselves. Multiple attack paths can lead to a damage scenario (effectively forming an attack tree as part of the graph). The propagation rules define how to accumulate the values of attributes along the attack paths and how to combine multiple attack paths with each other. Consequently, changing an attribute at a node causes the risk values of all related nodes in the graph to be updated accordingly.
As noted above, the algorithm has two parts. In the first part (attack feasibility factor and damage transformation propagation), we create all attack paths of maximum length. This step starts in those assumptions, controls, and threats without any "depends on" or "is mitigated by" relations (these nodes are referred to as "leaves"). The algorithm first applies the calculation rules as defined below to all of the "leaf" nodes. As a result of these rules, a set of attack paths, all including the current node, is created as output and passed along the edge to the next node as input. Every node applies the propagation rules whenever all input sets (for all incoming edges) are available. Eventually, all nodes in the graph that can be part of an attack path are covered and the first part of the algorithm is finished.
The second part (risk propagation) works on the graph in the opposite direction. The algorithm starts in security goals, calculates risks for all incoming attack paths (propagated in step 1), and then propagates the risks along the incoming attack paths of these security goals. Security goals possess relations to damage scenarios as a basis for the impact rating. However, the incoming attack paths might have transformation effects on these scenarios. After these effects are applied, a risk level is calculated for each attack path. The highest risk determines the risk level for the security goal itself. The risk level for each attack path subsequently travels along the attack path through the graph. This modeling and calculation approach therefore enables risk decisions that include very complex dependencies. The following calculation instructions allow for an automated implementation.
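The two parts of the algorithm can be organized as two passes over the DAG. The following sketch shows only the scheduling skeleton under simplifying assumptions: each node is identified by an ID, its incoming edges are given as predecessor lists, and the node-type-specific rules of Table 1 are abstracted into a combine() callback. All names are ours and merely illustrate the evaluation order; the risk propagation of the second part can reuse the same skeleton with the edge direction reversed and a risk-specific combine().

```python
from graphlib import TopologicalSorter

def propagate(nodes, predecessors, combine, initial):
    """Generic forward pass: evaluate each node once all of its inputs are available.

    nodes:        iterable of node IDs
    predecessors: dict node -> list of predecessor node IDs (edges along attack paths)
    combine:      function (node, list_of_input_sets) -> output set (Table 1 rules go here)
    initial:      function node -> output set for 'leaf' nodes without predecessors
    """
    order = TopologicalSorter({n: predecessors.get(n, []) for n in nodes}).static_order()
    outputs = {}
    for node in order:
        preds = predecessors.get(node, [])
        outputs[node] = initial(node) if not preds else combine(node, [outputs[p] for p in preds])
    return outputs
```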
5.2 Calculation

We split the description of the calculation rules into two parts. First, we start with the propagation of attack feasibility factors and damage transformations, as shown in Figure 7, with the rules presented in Table 1. Then we provide the propagation of risk levels, which flow in the opposite direction along the attack paths, as shown in Figure 8, with the rules presented in Tables 2 and 3. The practical calculation also happens in this order.

[Fig. 7. The propagation of attack feasibility factors and damage transformations through the graph.]

[Fig. 8. The risks are propagated selectively, following the origins of the attack paths.]

5.2.1 Propagation of Attack Feasibility Factors and Damage Transformations. The first type of propagation concerns attack feasibility factors. As described in Section 4, they help to estimate the attack feasibility rating. The effort required by an attacker is at least the effort required to attack the initial node on the path, but usually increases along the path as different steps have to be taken, requiring more effort. Similarly, a node that causes a damage transformation propagates this effect along the attack path. The propagation starts in assumptions, as well as in controls and threats without any "depends on" or "is mitigated by" relations. Nodes with the term "Combined" in their node type combine the incoming attack paths as defined below. Threats affect incoming attack paths by adding their own attack feasibility factors to the attack path.
The second attack path is generated if the mitigation has damage transformation efects. This attack path leaves the mitigation in place and propagates the damage transformation efect. If the mitigation has incoming attack paths, then these paths break the mitigation via its dependencies. Consequently, these attack paths are propagated without changes to the attack feasibility factors or damage transformation efects. Security goals always propagate attack paths without changes to the attack feasibility factors or damage transformation efects. Attack paths usually terminate in security goals, but may also terminate in other nodes (e. g., when a threat does not violate any security goals). Table 1 deines the propagation rules for attack feasibility factors and damage transformation efects. We use the following notation and deinitions: Let � denote a tuple of attack feasibility factor values used to determine the attack feasibility rating. For example, using an approach based on attack potentials (c.f. 6]):[given an assessment model with � attack feasibility factors, let� := (� , . . . , � ) denote an �-tuple of attack feasibility factor values,�wher repr e esents the value of � �,1 �,� �,� attack feasibility factor � for the attack feasibility factor�tuple . �ˆ represents minimum values for each attack feasibility factor. For example, using an approach based on attack potentials (c.f. [6]), this results in the (tuple 0, . . . , 0). Every node in the graph has a unique ID � ,. The ID 0 is reserved for łno node.ž We use the łno nodež concept for a damage transformation that completely removes a damage scenario � repr . esents the set of all node IDs. � ⊂ � represents the set of all Damage Scenario node IDs (including�0),⊂ � represents the set of all Damage Transformation node IDs. A Damage Transformation node � has a relation łhas sourcež to exactly one Damage Scenario node � ∈ � and another relation łhas targetž to another Damage Scenario node� ∈ �. Let src : � → � return a damage transformation’s source node � and tgt : � → � return its target node�. The damage transformation function dt : (�, �) → � uses a Damage Transformation node � ∈ � and a Damage Scenario node� ∈ � as input and provides a Damage Scenario node � ∈ � as output. It is deined as � �≠ src[�] dt[�, �] := tgt[�] � = src[�]. Let � := (�, �, � ) deine anattack path, where � ⊆ � represents a set of damage transformation nodes and � represents the set of nodes with IDs � traversed on the attack path. In other words, � deines the efects on risk accumulated in a single attack path towards a node in the graph and combines an attack efort (attack feasibility factor values in �) with zero or more damage transformation efects�in . Note that, given the acyclic nature of the graph, storing the IDs in a set is suicient to reconstruct a full attack path from a given starting point. Note that several attack paths may contain the same set of nodes, as, e. g., controls with damage transformation efects create two attack paths. Let � denote the set of all attack feasibility factor value �. tuples Then afmax : � → � denotes a function which takes an arbitrary numb�er∈ N of attack feasibility factor value�tuples , . . . , � as input and calculates 1 � the maximum for each attack feasibility factor. For example, using an approach based on attack potentials 6]) (c.f. [ as attack feasibility factors, we obtain afmax[� , . . . , � ] := (max[� , . . . , � ], . . . , max[� , . . . , � ]). 1 � 1,1 �,1 1,� �,� Let � denote the set of all attack paths �. 
Then cpths : P^k → P denotes a function which takes an arbitrary number k ∈ N of attack paths p_1, ..., p_k as input and calculates the maximum value for each attack feasibility factor, while damage transformation effects are unaffected and all nodes on the path are remembered. More precisely,
cpths[p_1, ..., p_k] := (afmax[F_1, ..., F_k], ∪_{i=1..k} T_i, ∪_{i=1..k} V_i).
The output of cpths[] is itself an attack path which accumulates the values and effects of all the inputs. Finally, let S denote a set of propagated attack paths. For each node, the set of all incoming attack paths represents the input set of the calculation step. This input set is then combined with the node's own values to propagate a number of attack paths along the graph in the node's output set. Table 1 provides specifics on the calculation and propagation rules. Note that variables are re-defined for each node and node type (e.g., a node's unique ID is always id_0 in the node's scope). Names of metamodel elements are printed in bold.

Threat
Input — Sources: m connected Combined Mitigations.
Value: S := ∪_{i=1..m} S_i (the union of the incoming attack paths). The m connected Combined Mitigation nodes propagate m attack path sets S_i. These sets contain a total of k = Σ_{i=1..m} |S_i| attack paths p_j = (F_j, T_j, V_j).
Local — F_0 := tuple of own attack feasibility factor values; id_0 := the node's unique ID.
Output — Targets: all connected Combined Threats nodes.
Value for k = 0 (a "leaf" node): {(F_0, ∅, {id_0})}, i.e., one attack path with the threat's own values.
Value for k > 0: ∪_{p ∈ S} {cpths[(F_0, ∅, {id_0}), p]} = ∪_{j=1..k} {cpths[(F_0, ∅, {id_0}), (F_j, T_j, V_j)]} = ∪_{j=1..k} {(afmax[F_0, F_j], ∅ ∪ T_j, {id_0} ∪ V_j)}, i.e., the threat propagates k attack paths, where each of the k attack paths in the input set S is combined with the threat's attack feasibility factor values F_0 and node ID id_0.

Combined Threats
Input — Sources: m connected Threats.
Value: m output sets S_i of the m connected Threats, each with |S_i| attack paths, for a total of k = Σ_{i=1..m} |S_i| attack paths.
Local — F_0 := F^ (no own attack feasibility factor values); id_0 := the node's unique ID.
Output — Targets: all connected Security Goal nodes.
Value: S := {cpths[p_1, ..., p_m, (F_0, ∅, {id_0})] | p_i ∈ S_i for every i ∈ {1, ..., m}}. The output S contains l := Π_{i=1..m} |S_i| attack paths: it encompasses all possible combinations of incoming attack paths for each of the m connected threats. Every threat contributes |S_i| different attack paths. By combining all choices of selecting a single attack path for each threat, l different attack paths (with combined attack feasibility factor values) are created for the output set. The node's unique ID id_0 is added to each attack path.

Security Goal
Input — Sources: m connected Combined Threats nodes and m' connected Combined Security Goals nodes.
Value: S := ∪_{i=1..m} S_i ∪ ∪_{j=1..m'} S'_j (the union of the incoming attack paths). The m connected Combined Threats nodes propagate m attack path sets S_i. Additionally, the m' connected Combined Security Goals nodes propagate m' attack path sets S'_j, resulting in a total of k = Σ_{i=1..m} |S_i| + Σ_{j=1..m'} |S'_j| incoming attack paths.
� � �=1..� �=1..� Local � := �ˆ (no own attack feasibility factor values) � := the node’s unique ID 0 0 Output Targets: All connectedCombined Security Goals nodes Value for� = 0 : ∅ (as the node is not attacked, nothing is propagated) Value for� > 0: {cpths[� , (� , ∅, {� })]}, � 0 0 � ∈� i. e, the node adds its unique�IDto each attack path but does not inluence the attack feasibility factors (as� is set to minimum for each value) or the damage transformation efects. Combined Security Goals Input Sources: � connected Security Goals Value:� attack path sets � of the� connected Security Goals, each with|� | attack paths for a � � total of� = |� | attack paths �=1..� � Local � := �ˆ (no own attack feasibility factor values) � := the node’s unique ID 0 0 Output Targets: All connectedSecurity Goal and Control nodes Value:� := {cpths[� , . . . , � , (� , ∅, {� })] | � ∈ � for every� ∈ {1, . . . ,� }}. 1 � 0 0 � � The output � contains� := |� | attack paths: it encompasses all possible combinations of �=1..� incoming attack paths for each�ofconnected security goals. Every security goal contributes |� | diferent attack paths. By combining all choices of selecting a single attack path for each security goal,� diferent attack paths (with combined attack feasibility factor values) are created for the output set. The node’s unique ID� is added to each attack path. Control Input Sources: � connected Combined Security Goals nodes Value:� := � (the union of the incoming attack paths). �=1..� The � connected Combined Security Goals nodes propagate � attack path sets � . These sets contain a total of � = |� | attack paths � . � � �=1..� Local � := tuple of own attack feasibility factor values � := set of damage transformation efects � := the node’s unique ID 0 0 Output Targets: All connectedCombined Mitigation nodes ACM Trans. Cyber-Phys. Syst. 18 • Angermeier et al. Value for� = 0 (a łleaf nodež): · if � = ∅ : {(� , ∅, {� })}, 0 0 0 · else: {(� , ∅, {� }), (�ˆ, � , {� })}, 0 0 0 0 i. e., two attack paths for a control with at least one damage transformation efect or one attack path for a control without. Note that �ˆ represents the tuple of minimal attack feasibility factor values. Value for� > 0: · if � = ∅ : {cpths[(�ˆ,∅, {� }), � ]} ∪ {(� , ∅, {� })} 0 0 � 0 0 � ∈� · else: {cpths[(�ˆ,∅, {� }), � ]} ∪ {(� , ∅, {� }), (�ˆ, � , {� })} 0 � 0 0 0 0 � ∈� i. e., a control without damage transformation efects propagates � + 1 attack paths. This includes the � attack paths in the input set � combined with the control’s node�ID. As the control is broken via its dependencies, the control’s attack feasibility factor value � is tuple not added to these paths. The (� + 1)th propagated attack path is the same as for a łleafž node, as the control is not broken via its dependencies in this case. A control with damage transformation efects propagates one additional (� + 2)th attack path without the node’s attack feasibility factor values, but its damage transformation efects � and node ID � . This relects an attacker’s option to accept the control’s 0 0 efects on damages instead of breaking the control. Assumption Input Sources: None (assumptions are always łleafž nodes) Value:� := ∅ (no incoming attack paths). 
Assumption
  Input. Sources: none (assumptions are always "leaf" nodes). Value: $P := \emptyset$ (no incoming attack paths).
  Local. $E_0$ := the set of the node's damage transformation effects; $u_0$ := the node's unique ID; $F_0$ := the tuple of the node's own attack feasibility factor values OR $\bot$ (where $\bot$ means that the assumption always causes its damage transformation effect).
  Output. Targets: all connected Combined Mitigation nodes. Value:
    if $F_0 = \bot$ and $E_0 = \emptyset$: $\emptyset$;
    if $F_0 = \bot$ and $E_0 \neq \emptyset$: $\{(\hat{F}, E_0, \{u_0\})\}$;
    if $F_0 \neq \bot$ and $E_0 = \emptyset$: $\{(F_0, \emptyset, \{u_0\})\}$;
    if $F_0 \neq \bot$ and $E_0 \neq \emptyset$: $\{(\hat{F}, E_0, \{u_0\}), (F_0, \emptyset, \{u_0\})\}$;
  i.e., no attack path for an assumption without effects, one attack path for an assumption with only one kind of effect, and two attack paths for an assumption with both attack feasibility factor values and a damage transformation effect.

Combined Mitigations
  Input. Sources: the $m$ connected Control or Assumption nodes. Value: the $m$ output sets $O_1, \ldots, O_m$ of the $m$ connected nodes (Controls / Assumptions), each with $|O_i|$ attack paths, for a total of $\sum_{i=1..m} |O_i|$ attack paths.
  Local. $F_0 := \hat{F}$ (no own attack feasibility factor values); $u_0$ := the node's unique ID.
  Output. Targets: all connected Threat nodes. Value: $O := \{cpths[p_1, \ldots, p_m, (F_0, \emptyset, \{u_0\})] \mid p_i \in O_i \text{ for every } i \in \{1, \ldots, m\}\}$. The output $O$ contains $k := \prod_{i=1..m} |O_i|$ attack paths: it encompasses all possible combinations of incoming attack paths for each of the $m$ connected mitigations (controls / assumptions). Every mitigation contributes $|O_i|$ different attack paths; by combining all choices of selecting a single attack path for each mitigation, $k$ different attack paths (with combined attack feasibility factor values) are created for the output set. The node's unique ID $u_0$ is added to each attack path.

Table 1. Propagation rules for attack feasibility factors and damage transformation effects.
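The per-node rules of Table 1 can be phrased as small functions. The following sketch builds on the definitions given after the notation above; the helper names and the use of Python sets (which silently merge identical paths) are our own simplifications, not the paper's. It covers three characteristic cases: a Threat, a Control with optional damage transformation effects, and the all-combinations rule shared by the Combined nodes.

from itertools import product

def propagate_threat(F0: AFF, u0: int, incoming: set) -> set:
    """Threat rule: combine each incoming path with the threat's own values."""
    own = AttackPath(F=F0, N=frozenset({u0}))
    return {own} if not incoming else {cpths(own, p) for p in incoming}

def propagate_control(F0: AFF, E0: frozenset, u0: int, incoming: set) -> set:
    """Control rule: broken via dependencies, attacked directly, or effects accepted."""
    out = {cpths(AttackPath(F=F_HAT, N=frozenset({u0})), p) for p in incoming}
    out.add(AttackPath(F=F0, N=frozenset({u0})))                # attack the control itself
    if E0:
        out.add(AttackPath(F=F_HAT, E=E0, N=frozenset({u0})))   # accept its damage transformation
    return out

def propagate_combined(u0: int, input_sets: list) -> set:
    """Combined Threats / Security Goals / Mitigations: pick one path per input node."""
    own = AttackPath(F=F_HAT, N=frozenset({u0}))
    return {cpths(*choice, own) for choice in product(*input_sets)}

The remaining node types follow the same pattern, with the minimal tuple as their local values and the leaf-node special cases given in Table 1.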
5.2.2 Risk Propagation. With the attack feasibility factor values and the damage transformation effects completely propagated, the calculation and propagation of risk levels takes place. The initial calculation happens in Security Goal nodes without an incoming "encompasses" relation from a Combined Security Goals node (i.e., a "leaf" node for risk propagation). For each attack path, we calculate the attack feasibility rating based on the attack path's attack feasibility factor values. Next, we apply all damage transformation effects of the selected attack path to the Security Goal node's related Damage Scenario nodes (following the relation "violation causes"). The resulting damage scenarios provide impact criteria, which in turn provide impact ratings. The highest of these impact ratings defines the impact rating for the selected attack path. Combining this impact rating with the attack feasibility rating of the attack path provides a risk level for the attack path. The highest risk value of any of a node's attack paths defines the risk for the node itself. The node then propagates all risk levels along its respective attack paths to propagate the results through the graph.

We use the following additional notation for these rules: Let $a := afr[F]$ calculate the attack feasibility rating $a$ for an attack feasibility factor value tuple $F$; note that this function is part of the assessment model and thus not defined here. Let $dc[s] := C$ return the set $C$ of impact criteria assigned to a damage scenario $s$. Let $i := imr[C]$ calculate the impact rating $i$ for a set of impact criteria $C$, based on the assessment model. For example, $imr[\,]$ might return the maximum over the impact ratings assigned to each impact criterion in the assessment model (where higher numbers imply higher impacts). Let $r := rl[a, i]$ calculate the risk level $r$ for an attack feasibility rating $a$ and an impact rating $i$. Let $R := (r, N)$ represent a risk level result for an attack path with risk level $r$ and a set of node IDs $N$. Finally, let $X$ denote a set of risk level results.

Tables 2 and 3 define the specifics for the risk level calculation inside a Security Goal node and the risk level result propagation. Table 2 also describes the risk level propagation to Damage Scenario nodes (see the second "Output" in the table). Note that instead of propagating risks hop by hop along an attack path, we propagate selected risks directly from Security Goal nodes. The derived risk level result for each attack path is propagated to each node on this attack path. This simplifies the propagation rules while preserving the semantics of Figure 8.

Security Goal
  Input. Sources: all $l$ Security Goals where this node is on an attack path (see the first Output in this table). Value: $X := \bigcup_{j=1..l} X_j$ (the union of the incoming results). The $l$ Security Goals propagate $l$ result sets $X_j$; these sets contain a total of $q = \sum_{j=1..l} |X_j|$ results $R_j = (r_j, N_j)$.
  Input. $P$ := the set of $k$ incoming attack paths $(F_i, E_i, N_i)$ for $i = 1..k$ (see Table 1).
  Local. $S_0$ := the set of connected Damage Scenario nodes; $u_0$ := the node's unique ID.
  Derived. $S_{i,0} := \bigcup_{e \in E_i, s \in S_0} \{dt[e, s]\}$, the set of transformed damage scenarios for attack path $i$ after one iteration; $S_{i,j} := \bigcup_{e \in E_i, s \in S_{i,j-1}} \{dt[e, s]\}$, the set of transformed damage scenarios after $j + 1$ iterations; $S_i := S_{i,j}$ with $S_{i,j} = S_{i,j-1}$, the resulting set of transformed damage scenarios. Note that this allows cycles or ambiguous situations; it is up to the analyst creating the model to prevent or resolve such issues. $S_i = S_0$ if $E_i = \emptyset$. Further, $i_i := imr[\bigcup_{s \in S_i} dc[s]]$, the impact rating for attack path $i$; $a_i := afr[F_i]$, the attack feasibility rating for attack path $i$; $r_i := rl[a_i, i_i]$, the risk level for attack path $i$; $X_0 := \bigcup_{i=1..k} \{(r_i, N_i)\}$, the set of all risk level results for all incoming attack paths in $P$; $r := \max[\{r_i \mid (r_i, N_i) \in X_0\} \cup \{r_j \mid (r_j, N_j) \in X\}]$, the security goal's risk value, i.e., the maximum of the node's own risk levels and the risk levels propagated to the node.
  Output. Targets: all nodes $v$ on the incoming attack paths with $v \in N_i$ and $(r_i, N_i) \in X_0$. Value: $X_v := \{(r_i, N_i \cup \{u_0\}) \mid (r_i, N_i) \in X_0 \text{ and } v \in N_i\}$. Propagate to each target node $v$ the results for all attack paths (local and propagated) that contain $v$, with the security goal's ID $u_0$ added to the path. Note that $X_v$ might be empty (for a "leaf" node with $k = 0$).
  Output. Targets: the connected Damage Scenario nodes $s \in S_i$. Value: $X_s := \{(rl[a_i, imr[dc[s]]], N_i) \mid a_i = afr[F_i] \text{ for all } i \in \{1, \ldots, k\} \text{ and } s \in S_i\}$. Propagate to each connected damage scenario $s \in S_i$ the result set $X_s$, containing a risk value and the nodes on the respective attack path. The risk value is calculated from the attack feasibility rating $a_i$ for that attack path and the damage rating $imr[dc[s]]$ for damage scenario $s$. Note that $S_i$ contains the damage scenarios for an attack path after damage transformation.

Table 2. Propagation rules for risk results of Security Goal nodes.
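The Table 2 calculation can be read as a small pipeline: transform the damage scenarios of each attack path to a fixpoint, rate the result, and take the maximum over local and propagated risk levels. The sketch below again reuses the structures introduced earlier and treats afr, imr, rl, and dc as placeholders for the assessment model, which the method deliberately leaves configurable; risk levels are assumed to be comparable values.

def transform_scenarios(scenario_ids: set, transformations: list) -> set:
    """Apply one attack path's damage transformations until a fixpoint is reached.
    Cycles in the transformations are the analyst's responsibility (see Table 2)."""
    current = set(scenario_ids)
    while transformations:
        step = {dt(t, s) for t in transformations for s in current}
        if step == current:
            break
        current = step
    return current - {0}   # ID 0 ("no node") removes a damage scenario entirely

def security_goal_risk(incoming_paths: set, scenario_ids: set, dt_nodes: dict,
                       dc, afr, imr, rl, propagated: set = frozenset()):
    """Risk value of a Security Goal node: maximum of own and propagated risk levels.
    dt_nodes maps a Damage Transformation node ID to its {"src": ..., "tgt": ...} record."""
    own_levels = []
    for p in incoming_paths:
        scenarios = transform_scenarios(scenario_ids, [dt_nodes[e] for e in p.E])
        criteria = set().union(*(dc(s) for s in scenarios)) if scenarios else set()
        own_levels.append(rl(afr(p.F), imr(criteria)))
    return max(own_levels + [r for r, _ in propagated], default=None)

For every other node type (Table 3), the risk value is simply the maximum over the risk levels propagated to it.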
Combined Security Goals, Combined Threats, Threat, Combined Mitigation, Control or Assumption
  Input. Sources: all $l$ Security Goals where this node is on an attack path (see Table 2). Value: $X := \bigcup_{j=1..l} X_j$ (the union of the incoming results). The $l$ Security Goals propagate $l$ result sets $X_j$; these sets contain a total of $q = \sum_{j=1..l} |X_j|$ results $R_j = (r_j, N_j)$.
  Local. $u_0$ := the node's unique ID.
  Derived. $r := \max[\{r_j \mid (r_j, N_j) \in X\}]$, the node's risk value, i.e., the maximum of the risk levels propagated to the node.
  Output. Targets: none. Value: none.

Table 3. Propagation rules for risk results of Combined Security Goals, Combined Threats, Threat, Combined Mitigation, Control or Assumption nodes.

Fig. 9. Propagation of attack feasibility factors and damage transformation effects shown on the software update example. Thick borders mark "leaf" nodes.

5.3 Example
We use our example presented in Subsection 4.5 to depict the propagation of attack feasibility factors and damage transformation effects within that example in Figure 9. Thick borders mark the three nodes considered as "leaf" nodes for this propagation; these leaves serve as starting points for the propagation. Note that we use unique identifiers for the propagated attack paths in this example and not the indices enumerating elements in a set of attack paths. Rounded boxes with solid borders denote sets of outgoing attack paths, while rounded boxes with dashed borders denote sets of incoming attack paths. Local attributes are depicted as circles.
This results in three attack paths for the security goal "Integrity of the Update function":

[1] One way to attack this security goal is to extract the private key, conduct reverse engineering, and manipulate the encrypted and signed software update as man-in-the-middle on the mobile connection. The threat "Key extraction" has its attack feasibility factor values $F_1$. These are propagated through the security goal "Confidentiality of key" to the control "AES GCM" that depends on this security goal. This results in a broken control and consequently does not add the control's attack feasibility factor values to the attack path. The attack path is then propagated to the threat "Man-in-the-Middle attack (mobile)". This threat has its attack feasibility factor values $F_7$; $afmax[F_7, F_1]$ calculates the maximum value for each attack feasibility factor, while $afr[afmax[F_7, F_1]]$ calculates the attack feasibility rating. To threaten the target security goal "Integrity of the Update function", the threat "Reverse Engineering" with its attack feasibility factors $F_8$ needs to be combined with the MitM threat in node "CT1", and thus a total attack feasibility rating of $afr[afmax[F_1, F_7, F_8]]$ is calculated for this attack path. The traversed node IDs are accumulated along the path, while no damage transformation effects are encountered.

[2] A second attack is to break the control "AES GCM" with its attack feasibility factor values $F_5$ (not by extracting the key, but, e.g., by brute-forcing it because of a short key length), conduct reverse engineering, and manipulate the encrypted and signed software update as man-in-the-middle on the mobile connection. This results in a total attack feasibility rating of $afr[afmax[F_5, F_7, F_8]]$, no damage transformation effects, and a different set of node IDs compared to attack path 1.

[3] A third attack is to conduct reverse engineering and manipulate the software update on the CAN bus inside the vehicle, where no encryption is applied. For this attack, the assumption "Tuning only" causes the damage transformation node "DT1", transforming "Uncontrollable vehicle" into "Unauthorized tuning". This is propagated to the threat "Manipulate data on the CAN bus", which possesses the attack feasibility factor values $F_{11}$. Together with "Reverse Engineering" this combines to a total of $afr[afmax[F_8, F_{11}]]$. The damage transformation set for this attack path is $\{\mathrm{DT1}\}$.

A risk level is calculated for each of the three attack paths, based on each attack path's attack feasibility rating as well as the damage associated with each damage scenario after damage transformation. For our three attack paths, this results in:

• $rl[afr[afmax[F_1, F_7, F_8]], imr[dc[s_{15}], dc[s_{16}]]]$ (attack path 1)
• $rl[afr[afmax[F_5, F_7, F_8]], imr[dc[s_{15}], dc[s_{16}]]]$ (attack path 2)
• $rl[afr[afmax[F_8, F_{11}]], imr[dc[s_{16}]]]$ (attack path 3).

Furthermore, each of these risks is propagated to all nodes on the respective attack path. The highest of these risks determines the risk level for each node.
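For illustration only, the following lines show how attack path 3 could be evaluated with the sketches above under an invented assessment model; none of the factor values, node IDs, rating tables, or thresholds below come from the paper.

# Purely illustrative values; the paper does not publish its assessment model.
F8  = (2, 3, 3, 1, 2)    # assumed factor values for "Reverse Engineering" (node ID 8)
F11 = (1, 2, 2, 0, 1)    # assumed values for "Manipulate data on the CAN bus" (node ID 11)

path3 = cpths(AttackPath(F=F8, N=frozenset({8})),
              AttackPath(F=F11, E=frozenset({13}), N=frozenset({11})))  # 13: assumed ID of "DT1"

def afr(F):                   # assumed: a low summed attack potential means high feasibility
    return "high" if sum(F) <= 12 else "low"

def imr(criteria):            # assumed: maximum over per-criterion impact ratings
    ratings = {"financial: moderate": 2}
    return max((ratings[c] for c in criteria), default=0)

def rl(feasibility, impact):  # assumed risk matrix lookup
    return "medium" if (feasibility, impact) == ("high", 2) else "low"

dc16 = {"financial: moderate"}   # assumed impact criteria of "Unauthorized tuning" (ID 16)
print(path3.F, rl(afr(path3.F), imr(dc16)))   # -> (2, 3, 3, 1, 2) medium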
Finally, a risk level is calculated for each attack path combined with the resulting damage scenarios after damage transformation on that attack path, i.e.:

• $rl[afr[afmax[F_1, F_7, F_8]], imr[dc[s_{15}]]]$ (attack path 1, damage scenario with ID 15)
• $rl[afr[afmax[F_1, F_7, F_8]], imr[dc[s_{16}]]]$ (attack path 1, damage scenario with ID 16)
• $rl[afr[afmax[F_5, F_7, F_8]], imr[dc[s_{15}]]]$ (attack path 2, damage scenario with ID 15)
• $rl[afr[afmax[F_5, F_7, F_8]], imr[dc[s_{16}]]]$ (attack path 2, damage scenario with ID 16)
• $rl[afr[afmax[F_8, F_{11}]], imr[dc[s_{16}]]]$ (attack path 3, damage scenario with ID 16).

6 CONCLUSION
Security risk analyses are becoming a mandatory development step in many domains due to international regulations. This is already the case in the automotive domain due to the new UN Regulation No. 155 [36]. Implementing the necessary processes for systematically evaluating complex systems, such as modern cars, is a demanding task. We address the question of how to implement such a process with a proven, model-based approach. The structure of the presented model, in combination with the methodical approach, forms a common basis upon which system and security engineers can jointly develop and assess an instanced model. To this aim, we mix two artifact types, system and security properties, with limited expressiveness, which proved well applicable in our experience. Our metamodel encompasses the SUD itself, composed of functions, data elements, components, and data flows. A fixed set of relations links these elements. We extend this metamodel to include the elements specific to SRAs (security goals, threats, controls, and assumptions) and additional relations. Thus, we achieve an integrated representation of the SUD as well as its security properties in the context of an SRA. To properly consider the often intricate dependencies and influences, we introduce a set of propagation rules. Consequently, the relations between the security-specific elements can be validated by tracing them back to the elements of the SUD. A consequence of this mixed-artifacts strategy is that security modeling requires repeating or extending several steps of functional modeling, i.e., creating use case diagrams and flowcharts, which is not an easily automatable task.

In contrast to other approaches that separate the elements and relations of the SUD from the security-specific elements, we provide an integrated perspective that allows users to assess the level of risk and the impact of threats, controls, and assumptions in a qualified manner. Local or iterative changes to the model rarely require changes to other elements due to the modular structure. This supports the maintainability of analyses, reduces the follow-up effort on updates or new findings, and improves comprehensibility. To demonstrate our method, we present an application to a small fictitious example. In the absence of suitable evaluation criteria for the quality of analyses, it is not yet possible to measure the quality of the applied approach beyond that. The development of such evaluation criteria is the subject of ongoing research. For this purpose, we intend to re-analyze existing assessments to identify relevant properties. However, we collected evidence of our method's suitability in several hundred real-life security risk assessments in projects with industrial customers.
We conducted security risk assessments for the development of vehicle functions and ECUs, industrial components, IT systems, and IoT devices over the course of ten years and continuously improved the method based on our own experience and the feedback of our customers. Limitations of our approach include an increase in the complexity of the resulting models, which requires security experts to apply the method. In the future, expressive and case-specific SRAs will no longer be sufficient: it will be necessary to infer between different models and evaluate them simultaneously to cope with complex, integrated systems, which will require additional methods. Likewise, expertise is required to tailor the assessment model and catalogs to a company's needs for best results. Additionally, achieving higher precision often comes at the price of increasing model complexity. Creation and maintenance of these models benefit from tool support, such as the Yakindu Security Analyst. Although initially developed for automotive security risk analysis, we successfully applied the proposed structure and representation as graphs in other domains, such as industrial security.

REFERENCES
[1] Daniel Angermeier, Kristian Beilke, Gerhard Hansch, and Jörn Eichler. 2019. Modeling Security Risk Assessments. In 17th Embedded Security in Cars (escar Europe) (Stuttgart, Germany, 2019-12-31). Ruhr-Universität Bochum, Bochum, Germany, 133–146. https://doi.org/10.13154/294-6670
[2] Daniel Angermeier, Alexander Nieding, and Jörn Eichler. 2016. Supporting Risk Assessment with the Systematic Identification, Merging, and Validation of Security Goals. In International Workshop on Risk Assessment and Risk-driven Testing. Springer, Cham, Germany, 82–95.
[3] Daniel Angermeier, Alexander Nieding, and Jörn Eichler. 2016. Systematic Identification of Security Goals and Threats in Risk Assessment. Softwaretechnik-Trends 36, 3 (2016). http://pi.informatik.uni-siegen.de/stt/36_3/./01_Fachgruppenberichte/Ada/02_Angermeier.pdf
[4] Daniel Angermeier, Alexander Nieding, and Jörn Eichler. 2017. Supporting Risk Assessment with the Systematic Identification, Merging, and Validation of Security Goals. In Risk Assessment and Risk-Driven Quality Assurance, Jürgen Großmann, Michael Felderer, and Fredrik Seehusen (Eds.). Springer International Publishing, Cham, 82–95.
[5] George E. P. Box. 1979. Robustness in the Strategy of Scientific Model Building. In Robustness in Statistics, Robert L. Launer and Graham N. Wilkinson (Eds.). Elsevier, Madison, WI, USA, 201–236. https://doi.org/10.1016/B978-0-12-438150-6.50018-2
[6] Common Criteria Editorial Board. 2017. Common Methodology for Information Technology Security Evaluation: Evaluation Methodology (3.1r5 ed.). Standard. Common Criteria.
[7] J. Eichler. 2015. Model-based Security Engineering for Electronic Business Processes. Ph.D. Dissertation. Technische Universität München. http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:91-diss-20150721-1238308-1-8
[8] Jörn Eichler and Daniel Angermeier. 2015. Modular Risk Assessment for the Development of Secure Automotive Systems. In 31. VDI/VW-Gemeinschaftstagung Automotive Security (VDI-Berichte, Vol. 2263). VDI, Düsseldorf, 81–90.
[9] Benjamin Fabian, Seda Gürses, Maritta Heisel, Thomas Santen, and Holger Schmidt. 2010. A comparison of security requirements engineering methods. Requirements Engineering 15, 1 (2010), 7–40. https://doi.org/10.1007/s00766-009-0092-x
[10] Shamal Faily, John Lyle, Cornelius Namiluko, Andrea Atzeni, and Cesare Cameroni. 2012. Model-driven Architectural Risk Analysis Using Architectural and Contextualised Attack Patterns. In Proceedings of the Workshop on Model-Driven Security (Innsbruck, Austria) (MDsec '12). ACM, New York, NY, USA, Article 3, 6 pages. https://doi.org/10.1145/2422498.2422501
[11] Jack Freund and Jack Jones. 2015. Measuring and Managing Information Risk: A FAIR Approach. Butterworth-Heinemann, Oxford, UK.
[12] Dimitris Gritzalis, Giulia Iseppi, Alexios Mylonas, and Vasilis Stavrou. 2018. Exiting the Risk Assessment Maze: A Meta-Survey. ACM Comput. Surv. 51, 1, Article 11 (Jan. 2018), 30 pages. https://doi.org/10.1145/3145905
[13] Mohammad Hamad and Vassilis Prevelakis. 2020. SAVTA: A hybrid vehicular threat model: Overview and case study. Information 11, 5 (May 2020), 273.
[14] Gerhard Hansch, Peter Schneider, and Gerd S. Brost. 2019. Deriving Impact-driven Security Requirements and Monitoring Measures for Industrial IoT. In 5th ACM Cyber-Physical System Security Workshop (Auckland, New Zealand) (CPSS '19). ACM, New York, NY, USA, 37–45. https://doi.org/10.1145/3327961.3329528
[15] Hannes Holm, Khurram Shahzad, Markus Buschle, and Mathias Ekstedt. 2015. P2CySeMoL: Predictive, Probabilistic Cyber Security Modeling Language. IEEE Transactions on Dependable and Secure Computing 12, 6 (Nov. 2015), 626–639. https://doi.org/10.1109/TDSC.2014.2382574
[16] IEC. 2020. IEC 62443-3-2:2020 Security for industrial automation and control systems – Part 3-2: Security risk assessment for system design. Standard. International Electrotechnical Commission and others, Geneva, CH.
[17] ISO/SAE. 2021. ISO/SAE 21434:2021 Road Vehicles – Cybersecurity engineering. Standard. International Organization for Standardization, Geneva, CH.
[18] Loren Kohnfelder and Praerit Garg. 1999. The threats to our products. Technical Report. Microsoft Interface. https://adam.shostack.org/microsoft/The-Threats-To-Our-Products.docx
[19] Barbara Kordy, Ludovic Piètre-Cambacédès, and Patrick Schweitzer. 2014. DAG-based attack and defense modeling: Don't miss the forest for the attack trees. Computer Science Review 13 (Nov. 2014), 1–38. https://doi.org/10.1016/j.cosrev.2014.07.001
[20] Michael Krisper, Jürgen Dobaj, Georg Macher, and Christoph Schmittner. 2019. RISKEE: A Risk-Tree Based Method for Assessing Risk in Cyber Security. In Systems, Software and Services Process Improvement – 26th European Conference, EuroSPI 2019, Edinburgh, UK, September 18-20, 2019, Proceedings. Springer, Cham, Germany, 45–56. https://doi.org/10.1007/978-3-030-28005-5_4
[21] Katsiaryna Labunets, Fabio Massacci, and Alessandra Tedeschi. 2017. Graphical vs. Tabular Notations for Risk Models: On the Role of Textual Labels and Complexity. In Proceedings of the 11th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (Markham, Ontario, Canada) (ESEM '17). IEEE Press, Piscataway, NJ, USA, 267–276. https://doi.org/10.1109/ESEM.2017.40
[22] Mass Soldal Lund, Bjørnar Solhaug, and Ketil Stølen. 2010. Model-Driven Risk Analysis: The CORAS Approach. Springer Science & Business Media, Berlin Heidelberg, Germany. https://doi.org/10.1007/978-3-642-12323-8
[23] Feng Luo, Shuo Hou, Xuan Zhang, Zhenyu Yang, and Wenwen Pan. 2020. Security Risk Analysis Approach for Safety-Critical Systems of Connected Vehicles. Electronics 9, 8 (Aug. 2020), 1242.
[24] Georg Macher, Harald Sporer, Reinhard Berlach, Eric Armengaud, and Christian Kreiner. 2015. SAHARA: A Security-Aware Hazard and Risk Analysis Method. In Proceedings of the 2015 Design, Automation and Test in Europe Conference (Grenoble, France) (DATE '15). EDA Consortium, San Jose, CA, USA, 621–624.
[25] Charlie Miller and Chris Valasek. 2013. Adventures in automotive networks and control units. Def Con 21 (2013), 260–264.
[26] Jean-Philippe Monteuuis, Aymen Boudguiga, Jun Zhang, Houda Labiod, Alain Servel, and Pascal Urien. 2018. SARA: Security automotive risk analysis method. In Proceedings of the 4th ACM Workshop on Cyber-Physical System Security. Association for Computing Machinery, New York, NY, USA, 3–14.
[27] Martin J. O'Connor and Amar K. Das. 2009. SQWRL: A Query Language for OWL. In Proceedings of the 6th International Conference on OWL: Experiences and Directions (Chantilly, VA, USA) (OWLED'09, Vol. 529). CEUR-WS.org, Aachen, Germany, 208–215. http://dl.acm.org/citation.cfm?id=2890046.2890072
[28] N. Poolsappasit, R. Dewri, and I. Ray. 2012. Dynamic Security Risk Management Using Bayesian Attack Graphs. IEEE Transactions on Dependable and Secure Computing 9, 1 (2012), 61–74. https://doi.org/10.1109/TDSC.2011.34
[29] Alastair Ruddle, Benjamin Weyl, Sajid Idrees, Y. Roudier, Michael Friedewald, Timo Leimbach, A. Fuchs, S. Gürgens, O. Henninger, Roland Rieke, M. Ritscher, H. Broberg, L. Apvrille, R. Pacalet, and Gabriel Pedroza. 2009. Security requirements for automotive on-board networks based on dark-side scenarios. Deliverable D2.3: EVITA. E-safety vehicle intrusion protected applications. Fraunhofer ISI (01 2009).
[30] Christoph Schmittner, Thomas Gruber, Peter Puschner, and Erwin Schoitsch. 2014. Security Application of Failure Mode and Effect Analysis (FMEA). In Computer Safety, Reliability, and Security, Andrea Bondavalli and Felicita Di Giandomenico (Eds.). Springer International Publishing, Cham, 310–325.
[31] Adam Shostack. 2014. Threat Modeling: Designing for Security. John Wiley and Sons, Indianapolis, IN, USA.
[32] Teodor Sommestad, Mathias Ekstedt, and Hannes Holm. 2013. The Cyber Security Modeling Language: A Tool for Assessing the Vulnerability of Enterprise System Architectures. IEEE Systems Journal 7, 3 (Dec. 2013), 363–373. https://doi.org/10.1109/JSYST.2012.
[33] Amina Souag, Camille Salinesi, Raúl Mazo, and Isabelle Comyn-Wattiau. 2015. A Security Ontology for Security Requirements Elicitation. In Engineering Secure Software and Systems (ESSoS 2015) (Milan, Italy), Frank Piessens, Juan Caballero, and Nataliia Bielova (Eds.). Springer International Publishing, Cham, Germany, 157–177. https://doi.org/10.1007/978-3-319-15618-7_13
[34] UNECE WP.29 TF CS and OTA. 2020. UN Regulation on uniform provisions concerning the approval of vehicles with regards to cyber security and cyber security management system. Proposal. UN World Forum for the Harmonization of Vehicle Regulations (WP.29).
[35] Jan Wolf, Felix Wieczorek, Frank Schiller, Gerhard Hansch, Norbert Wiedermann, and Martin Hutle. 2016. Adaptive Modelling for Security Analysis of Networked Control Systems. In 4th International Symposium for ICS & SCADA Cyber Security Research (Belfast, UK) (ICS-CSR '16). BCS Learning & Development, Swindon, UK, 64–73. https://doi.org/10.14236/ewic/ICS2016.8
[36] UNECE GRVA WP29. 2021. UN Regulation No. 155 – Cyber security and cyber security management system. Technical Report. United Nations.
[37] Peng Xie, Jason H. Li, Xinming Ou, Peng Liu, and Renato Levy. 2010. Using Bayesian networks for cyber security analysis. In 2010 IEEE/IFIP International Conference on Dependable Systems & Networks (DSN). IEEE Computer Society, Los Alamitos, CA, USA, 211–220. https://doi.org/10.1109/DSN.2010.5544924

Security Risk Assessments: Modeling and Risk Level Propagation

Loading next page...
 
/lp/association-for-computing-machinery/security-risk-assessments-modeling-and-risk-level-propagation-0u5LSiXaG0

References

References for this paper are not available at this time. We will be adding them shortly, thank you for your patience.

Publisher
Association for Computing Machinery
Copyright
Copyright © 2023 Copyright held by the owner/author(s).
ISSN
2378-962X
eISSN
2378-9638
DOI
10.1145/3569458
Publisher site
See Article on Publisher Site

Abstract

DANIEL ANGERMEIER, Fraunhofer-Institute AISEC, Germany HANNAH WESTER, Fraunhofer-Institute AISEC, Germany KRISTIAN BEILKE, Fraunhofer-Institute AISEC, Germany GERHARD HANSCH, Fraunhofer-Institute AISEC, Germany JÖRN EICHLER, Freie Universität Berlin, Institute of Computer Science, Germany Security risk assessment is an important task in systems engineering. It is used to derive security requirements for a secure system design and to evaluate design alternatives as well as vulnerabilities. Security risk assessment is also a complex and interdisciplinary task, where experts from the application domain and the security domain have to collaborate and understand each other. Automated and tool-supported approaches are desired to help manage the complexity. However, the models used for system engineering usually focus on functional behavior and lack security-related aspects. Therefore, we present our modeling approach that alleviates communication between the involved experts and features steps of computer-aided modeling to achieve consistency and avoid omission errors. We demonstrate our approach with an example. We also describe how to model impact rating and attack feasibility estimation in a modular fashion, along with the propagation and aggregation of these estimations through the model. As a result, experts can make local decisions or changes in the model, which in turn provides the impact of these decisions or changes on the overall risk proile. Finally, we discuss the advantages of our model-based method. CCS Concepts: · Software and its engineering → Risk management; · Security and privacy → Security requirements; Software security engineering; Usability in security and privacy . Additional Key Words and Phrases: Security Risk Assessment, Risk Analysis, Security Engineering, Model-based, Secure Design, Threat Modeling 1 INTRODUCTION Security risk assessment (SRA) is a crucial part of requirements engineering and enables the systematic deduction of security requirements. Additionally, SRA supports the prioritization and execution of further security-related tasks in the engineering life cycle. Therefore, SRAs represent mandatory steps in many regulations and interna- tional standards (e.g., [34] as well as [17] for the automotive domain). However, risk assessment presents a challenging task of high complexity, as all possible interactions (not only all speciiedinteractions) with the system under development (SUD) may result in a violation of protection needs. Consequently, it is advisable to support the analyst with a computer-aided, model-based approach to avoid omissions or other łhuman errorsž and master complexity. Generally, data for a high grade of computer aid for SRA is often missing: Deining functional and structural properties of a system is an integral part of the system’s development, while modeling potential misuses is not. Authors’ addresses: Daniel Angermeier, daniel.angermeier@aisec.fraunhofer.de, Fraunhofer-Institute AISEC, Lichtenbergstraße 11, Garch- ing, Bavaria, Germany, 85748; Hannah Wester, daniel.angermeier@aisec.fraunhofer.de, Fraunhofer-Institute AISEC, Lichtenbergstraße 11, Garching, Bavaria, Germany, 85748; Kristian Beilke, Fraunhofer-Institute AISEC, Lichtenbergstraße 11, Berlin, Berlin, Germany; Gerhard Hansch, Fraunhofer-Institute AISEC, Lichtenbergstraße 11, Garching, Bavaria, Germany, 85748, gerhard.hansch@gmail.com; Jörn Eichler, Freie Universität Berlin, Institute of Computer Science, Takustr. 9, Berlin, Berlin, Germany, joern.eichler@fu-berlin.de. 
Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for proit or commercial advantage and that copies bear this notice and the full citation on the irst page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). © 2022 Copyright held by the owner/author(s). 2378-962X/2022/11-ART https://doi.org/10.1145/3569458 ACM Trans. Cyber-Phys. Syst. 2 • Angermeier et al. System models are designed with a given purpose in mind. As łall models are wrong, but some are usefulž 5]), ([ existing models from system development only partially it the purpose of security assessment. Some aspects may be over-represented, while others may be covered insuiciently. Therefore, we follow the hypothesis that a dedicated model for the security risk assessment creates a better match for the task. Nevertheless, a dedicated SRA model still represents a simpliication of the SUD. This simpliication is useful as it reduces complexity. Attackers, however, interact with existing systems. Thus, attackers are not afected by the model’s limitations. Therefore, human expertise by a security analyst remains a necessity for proper SRAs to explore the attacker’s options with human creativity. Our graph-based modeling approach aims to achieve the best of both worlds. While the full graph provides a rich set of relations between the elements of the SRA for computer-aided functionality, selected parts of the graph can also be visualized to supplement human experts. They further support the identiication of critical paths and the derivation of efect chains from the model while keeping the human analyst in the loop. For example, the connections between components can be derived from the data lows and visualized in a diagram. Likewise, attack paths in the graph can be visualized similar to attack trees. Graphical representations also support the communication between developers, security experts, and other stakeholders (e.g., management), making the analysis comprehensible and veriiable for all involved stakeholders. This paper is based on previous work and practical experiences in the automotive2ield , 4, 8])(cf. and[extends the work presented in [1]. We provide the following contributions: • Reinement of an extended metamodel for security risk assessments to represent complex dependencies based on the SUD between security goals, threats, controls, and assumptions • Deinition of basic formal properties to evaluate these dependencies • Ruleset for risk value calculation for all relevant elements Beneits from our contributions include globally calculated estimations concerning the risk value for all relevant elements. Thus, critical security goals, threats, assumptions, and controls can be identiied and the consequences of design alternatives can be evaluated. Complementary to the efects of controls and assumptions on attack feasibility, our contributions provide means to represent and evaluate efects on security goals’ protection needs. The remainder of this paper is organized as follows. After discussing related work in Section 2, we provide background information on the used risk assessment method in Section 3. We then present and demonstrate our graph-based modeling approach in Section 4, and describe the information low and risk calculation in Section 5, before we conclude in Section 6. 
2 RELATED WORK To cope with the increased exposure to cyber-attacks, as demonstrated by25[], harmonized regulations and international standards establish SRAs as a mandatory activity of future development processes 17, 34(]). cf. [ The mandatory implementation of security risk assessment and management processes into the development, production, and post-production phases requires suitable methods. Generally, model-based risk assessment during development includes creating a model of the SUD, an impact assessment, and a threat assessment. An overview of eighteen diferent security requirements engineering approaches and techniques, including CORAS, Misuse Cases, Secure Tropos, and UMLSec, is provided by Fabian et9].al. Gritzalis [ et al. 12][compare validity, compliance, costs, and usefulness of popular risk assessment methods including EBIOS, MEHARI, OCTAVE, IT-Grundschutz, MAGERIT, CRAMM, HTRA, NIST SP800, RiskSafeAssessment, and CORAS. Most of these approaches do not provide a formal method in combination with a dedicated metamodel. One exception here is CORAS22 [ ], a model-driven risk assessment method using a graphical notation applying a domain-speciic language. The models are analyzed manually by human analysts, as CORAS opts for a dedicated graphical concrete syntax and does not deine alternative representation, often preferred for larger models (cf. [21]). ACM Trans. Cyber-Phys. Syst. Security Risk Assessments: Modeling and Risk Level Propagation • 3 A dedicated survey on attack and defense modeling approaches utilizing directed acyclic graphs is provided by Kordy et al. [19]. The surveyed approaches feature a focused perspective on security-speciic properties and allow for calculations, e.g., critical attack paths. However, they do not entail an integrated perspective of the SUD and its security properties aligned with typical artifacts from the development phase. According to the classiication19 of],[we present a defense-oriented approach that combines static parts for the system model with sequential parts for the risk analysis. Furthermore, we use a DAG for general security modeling and quantitative assessment, including conceptual, and quantitative extensions in a semi-formal way. Parts and diferent iterations are implemented by commercial tools and applied by independent users to perform realistic assessments. A state-of-the-art threat assessment method and basis for many cybersecurity risk assessment methodologies is STRIDE, standing for the six considered threat classes Spooing, Tampering, Repudiation, Information disclosure, Denial of service, and Elevation of privilege that can be used to threaten security 18 obje ]. Each ctives thr[eat class antagonizes at least one security property. Spooing a false identity violates the authenticity property of entities, tampering threatens the integrity of data and processes, repudiating a responsibility interferes with non-repudiation, e.g., of process interaction, information disclosure the conidentiality of data and processes, denial of service the availability of components, data, and processes, and elevation of privileges enables the unauthorized execution of actions. A prominent reinement of STRIDE is STRIDE-per-Element, which considers that certain threats are more prevalent with certain elements of a model, which facilitates threat identiication in general by focusing on the most relevant threats 31].[ Combining STRIDE with attack trees is used by several recent security risk analysis frameworks ,e.g., [13, 26]. 
Bayes attack graphs are one method to assess security risks in IT networks and assess vulnerabilities, enabling Bayesian inference procedures for general attack graphs 19 (cf. , 28 [ , 37]). In the concept and development phase, whose support is in the focus of the method presented here, there are usually no known vulnerabilities and only rarely weaknesses. Thus, a percentage value required for Bayesian networks cannot be directly determined and, in practice, usually leads to intense disagreement among the respective responsible parties. To evaluate the technical diiculty for an attacker to execute an attack, we instead use a qualitative scheme such as the Common Methodology for Information Technology Security Evaluation 6] to rate [ the required capabilities for each attack step. A way to avoid the common problem of inconsistencies within SRAs is using an appropriate ontology. A good overview of risk assessment ontologies is provided by Souag33et ]. The al. [authors compare many existing ontologies and create a new one, including additional high-level security concepts, axioms, and attributes. They generally specify their models in the Web Ontology Language (OWL) and apply a certain level of automation with queries using the Semantic Query-enhanced Web Rule Language (SQWRL, 27]). cf.Their [ target audience is requirements engineers and thus is not focused on risk assessment. In contrast to this high-level approach, we provide way more details concerning the structure, relations, and propagation of ratings within the SRA. A similar approach is applie35 d ]byfor [ the automated search for known vulnerabilities in incomplete or inconsistent described systems. An approach that deducts recommendations for high-level security measures from assessed security risks can be found14 in ]. Unlike [ our proposal, the presented risk analysis method and metamodel provide only a limited security risk evaluation. Such approaches might signiicantly beneit from the here presented metamodel and methods. CAIRIS (Computer Aided Integration of Requirements and Information Security) is a framework to man- age security, usability, and design artifacts. 10] achie [ ves a certain level of automation and visualization. The framework’s aim is much broader in scope as it also encompasses usability and requirement engineering ac- tivities. Regarding security risk analysis, the authors propose a broad ontology of concepts. CAIRIS expresses similar information compared to our approach. However, the implementation includes many concepts that https://cairis.org/ ACM Trans. Cyber-Phys. Syst. 4 • Angermeier et al. keep additional information but adds efort for documentation, maintenance, and consistency (i.e., environments, vulnerabilities, obstacles, use cases). Details and conciseness of core concepts for SRAs (goals, threats, controls, and their interactions) seem to be impacted by the breadth of the approach. Similarly, we consider separating the security and the (functional) development domains to beneit tailored application. A combination of UML-based information system modeling with Bayesian attack graphs for assessing attack probabilities are CySeMol 32] and [ its extension CySeMoL P [15]. The relational model and the thereupon built inference engine allow for evaluating ’what-if’ scenarios. Networks consisting of well-known components can be evaluated eiciently due to the predeined granularity of the components. 
While this approach enables modiications of the model during analysis, it does not support iterative dissection or damage transformations and hardly copes with new components. A further approach to formally describe security properties in a security risk analysis framework, based on model-checking and a Markov decision process do to determine risk probabilities, is presented by [23]. A proprietary framework for information risk analysis is the FAIR appr 11oach ]. It includes by [ a taxonomy, a method for measuring the driving risk factors, and a computational engine to simulate relationships between these factors. Key factors in determining risks ar Loss e the Event Frequency, based on the Threat Event Frequency and the Vulnerability, and Loss the Magnitude, relecting the impact. Due to the dependency on measurable and historical factors, initial risk assessments and non-metric environments pose severe problems for users. In contrast to the presented approach that focuses on the overall impact, FAIR is limited to risks for information assets. A combination of the automotive Hazard Analysis And Risk Assessment (HARA) with STRIDE, intended to support the functional concept phase by a straightforward quantiication of the impacts of threats and hazards is the Security-Aware Hazard And Risk Analysis (SAHARA) metho 24d].[Similarly, the conventional Failure Mode Efects Analysis (FMEA) is extended by vulnerabilities by the FMVEA metho 30] and d [focused on the technical concept phase of the development. While both these automotive-oriented methods rely on a model of the SUD, they use a top-down assessment approach, about by what a speciic threat, respectively, safety hazard might be caused. Furthermore, in their assessment, they do not consider interactions, respectively sequences, the efects of security measures and of their propagation along the model. A combination of FAIR, SAHARA, and FMVEA is the probabilistic RISKEE (Risk-Tree Based Method for Assessing Risk in Cyber Security) approach 20]. A [ s the long form of the name indicates, it combines risk calculation with attack trees. Based on FAIR, the considered risk factors are frequency, vulnerability, and magnitude of vulnerabilities. A specialty of the RISKEE approach is the relation and visualization of the calculated and the acceptable risk as a loss-exceeding curve. Popular commercial tools for threat modeling are the Microsoft Threat Modeling andTthe oolfortisee SecuriCAD, which also includes probabilistic attack simulation. They both provide a graphical interface for modeling current and abstract IT environments and assessing potential security issues. While the Threat Modeling Tool utilizes a STRIDE-based risk assessment method, SecuriCAD also supports evaluating possible attack vectors by Monte Carlo simulation. Both tools regard coarse-grained attack paths focusing on cloud and enterprise IT but lack attack feasibility factor propagation or damage transformation. Commercial tools are currently also developed to support SRAs in the automotive domain, including the Yakindu Securityand Analyst Ansys Medini Analyze. https://docs.microsoft.com/en-us/azure/security/develop/threat-modeling-tool https://www.foreseeti.com/securicad/ https://www.itemis.com/de/yakindu/security-analyst/ https://www.ansys.com/products/systems/ansys-medini-analyze-for-cybersecurity ACM Trans. Cyber-Phys. Syst. 
Security Risk Assessments: Modeling and Risk Level Propagation • 5 3 BACKGROUND The risk assessment approach used in this paper is based on the Modular Risk Assessment (MoRA) metho7,d (cf. [ 8]). Figure 1 depicts four core activities of the method framework: łModel the Target of Evaluationž, łDetermine Protection Needsž, Analyze ł Threatsž, and Analyze ł Risksž. The irst step decomposes the SUD into relevant functions, data elements, components, and data lows. The next step identiies security goalsas combinations of assets detailed in the SUD and their required security properties. The third step identiies threats to the assets analyzing systematically elements of the SUD. Additionally, actual or proposed controls can be added to mitigate identiied threats. Impact and attack feasibility ratings for security goals as well as threats and controls are estimated in the last step. Risk levels are derived from those estimations. For more details on the application of MoRA, we refer to aforementioned publications3,and 4]. While [ this work is based on MoRA, we align the terminology in this paper with ISO/SAE 21434. MoRA relies on an assessment modeland catalogsto homogenize assessments within a common application domain. Thus, the assessment model and the catalogs represent a common ground for all stakeholders regarding core aspects of risk assessments like evaluation criteria or threat classes. Note that standards and regulations can suggest or deine parts of the assessment model, such as the threat model given in Annex 5 of R-155 the UN Regulation on uniform provisions concerning the approval of vehicles with regard to cybersecurity and of their cybersecurity management systems[36]. Our graph-based modeling approach as speciied in extension to our previous work in Section 4 augments MoRA’s representation. It facilitates the method implementation and is based on experience from several years of practical application in industrial development projects. The model and calculation rules are the basis for tooling, such as the Yakindu Security Analyst. While we present our generic metamodel without a speciic syntax in this paper, we successfully adapted the metamodel and tooling to accommodate the speciic requirements for risk assessments in standards and regulation, such as ISO/SAE 21434 (cf. [17]) or the IEC 62443 (cf. [16]). 4 METAMODEL The following section presents the metamodel of our SRA model previously intro1duce ]. Itdencompasses in [ a focused representation of the SUD itself as well as risk assessment-speciic core concepts like security goals, damage scenarios, threats, controls, and assumptions. Providing all these elements in one model allows for derivation and validation of relations between and properties of elements of the SRA aligned with the SUD (cf. [4]). This facilitates comprehension and traceability. The core concepts are presented along MoRA’s main activities followed by a modeling example. 4.1 Model the System Under Development The SUD model serves as the foundation for the analysis. It includes assets, which are required to understand the protection needs and potential damage scenarios. The SUD model also provides an overview of potential interactions with the SUD. This facilitates the elicitation of potential threats against it. Furthermore, by modeling the SUD in cooperation with domain experts, security analysts gain a solid understanding of the SUD. Likewise, as all arguments are rooted in the system model, explaining risks to domain experts is facilitated. 
Note that every risk assessment requires an abstraction of the analyzed item, i.e., a model, in the analyst’s mind. Documenting the model and discussing it with domain experts improves the model’s correctness in our experience. The graph of the SUD consists of four sets, visualized as nodes, and the relations between these nodes, connecting them as edges, as shown in Figure 2. The four sets of nodes in the risk assessment graph represent the functions, data elements, components, and data lows of the SUD. Within each of these sets, the subelement relation (łis subdata of,ž łis subcomponent ofž and łis subfunction ofž) represents a hierarchy between the elements. A component, for example, can be reined into its sub-components (e.g., the component łvehiclež has ACM Trans. Cyber-Phys. Syst. 6 • Angermeier et al. Model the System Determine Protection Needs Analyze Risks Under Development Security Assets + Properties Functions Impact Rating Security Goals Data Risk level Analyze Threats Components Controls / Threats Assumptions Data Flows Attack Feasibility Rating Fig. 1. Main activities and core concepts in security risk assessments according to MoRA. Section 4 provides more details on the metamodel representing these core concepts. subcomponents łbrake ECUž and łairbag ECUž and łbrake ECUž could consist of a subcomponent łsoftware platformž). Data lows each have a sender and a receiver, resulting in a matching łhas senderž and łhas receiverž relation from the data low to the sender and receiver component. Furthermore, the data low has a łtransmitsž relation to one or more data elements. Note that the metamodel allows components that neither receive nor transmit data, if required. Components have a łstoresž relation to locally stored data elements. This is mainly used for data that might never be transmitted, such as private keys for cryptographic operations. All relations between components and data are non-exclusive, i.e., components can send, store, and receive the same data element as other components. These relations are depicted in Figure 2. Based on these explicit relations, implicit relations can be derived: For each sender, a łproducesž relation to the sent data elements, for each receiver a consumesž ł relation to received data elements. Therefore, interface deinitions of components can be derived from the data low deinitions. These implicit relations are always calculated from the existing data low deinitions and never deined explicitly to avoid inconsistencies. Functions have a łmaps tož relation to data elements, components, and data lows, as shown in Figure 2. These relations imply that functions are implemented by data processing and transmission, which in turn are executed by components. The functions thus depend on their mapped elements. We have chosen this representation for the SUD as it fully supports SRAs based on MoRA 8]) while ([ it also captures only information typically created during system development. For example, an SUD provided as a set of UML use case diagrams, component diagrams with information lows, and corresponding class diagrams can be used as input for the modeling activity. The functions can be extracted from use case diagrams, components and data lows from component diagrams, and data elements from class diagrams. Thus, well-established modeling languages can be used as input for the irst step łModel the Target of Evaluationž. 
Furthermore, the information captured in the SRA model can be łtranslatedž back to UML with low efort, improving the communication between domain and security experts. Consequently, our SUD representation supports the mutual understanding and the collaborative creation of the SRA model by all stakeholders, maintaining an unambiguous reference for ACM Trans. Cyber-Phys. Syst. Security Risk Assessments: Modeling and Risk Level Propagation • 7 Function maps to 1..n 1..n Data 0..n 0..n transmits is subfunction of 1..n 1..n 0..n 0..n 0..n 0..n 0..n has sender stores 0..n Component Data Flow is subdata of 1..n 0..n has receiver 1..n 0..n 0..n 0..n 0..n is subcomponent of Fig. 2. Metamodel including functions: Functions are implemented using data, components, and data flows. the following steps of the risk assessment. 4.2 Determine Protections Needs The protection needs are captured through the risk assessment-speciic core concepts security goals and damage scenarios. Security Goals (SG) deine security properties for assets, where łsecurity propertyž denotes an asset’s property, such as conidentiality, availability, or integrity. For the sake of simplicity, we focus on these three security attributes, but the method may be extended to any set of properties. Assets are modeled as elements of the SUD, relected by the łis asset ofž relation, as shown in Figure 4. For example, a medical system stores the data element łpatient dataž. łConidentiality of patient dataž represents a security goal with the security property conidentialityž ł and the asset łpatient dataž. Note that our deinition of łsecurity goalž diverges from the similar term cyb ł ersecurity goalž as deined in ISO / SAE 21434 [17]. As the standard does not provide a compact term for cyb ł ersecurity property of assetž, we stick to łsecurity goalž, remaining consistent with our previous publications on the same topic. Violation of a relevant security goal leads to one ordamage more scenarios. This is denoted by the łviolation causesž relation. A damage scenario is deined by a non-empty set of impact criteria. In our example, violation of the security goal łConidentiality of patient dataž causes the damage scenario łUnauthorized access to personal dataž, which entails the impact criterion łSubstantial violation of lawsž as an attribute. Impact criteria are part of the assessment model. The assessment model assigns an impact rating to each criterion. Impact criteria can be structured in impact categories, such as safety, inancial, operational, or privacy. These four impact categories are required by17 [ ] and were previously proposed in, e. g., 29].[ The assessment model with the impact categories and corresponding impact criteria is adaptable to organizations and their ield of operation. Security goals might depend on other security goals. For example, the availability of a function depends on the availability of a component executing the function. If the availability of the component is violated, the availability of the function is also violated. Consequently, the second security goal depends on the irst. These dependencies can be independent of each other or require several dependencies to be violated. For example, if two independent sources provide a data item, then the security goal A ł vailability of data itemž is violated only if the security goals A ł vailability of the irst sourcež AND A ł vailability of the second sourcež are both violated. For the graphical representation of this example, see Figure 3. ACM Trans. Cyber-Phys. 
Fig. 3. Dependencies of security goals. The left side shows how the "Availability of data item" can be violated only if the availability of source 1 AND source 2 are not given. The right side depicts a case where violating a single dependency is sufficient to violate the security goal at the top.

We introduce the element "Combined Security Goals" to define these dependencies. A security goal depends on an arbitrary number of mutually independent "Combined Security Goals" nodes. Each "Combined Security Goals" node then relates to one or more security goals. Note that arbitrary logical expressions with AND and OR, as often seen in classical attack trees, can always be transformed into disjunctive normal form (DNF) to fit this metamodel, including specific sequences of attacks.

4.3 Analyze Threats

The threat analysis is captured through the risk assessment-specific core concepts of threats, controls, and assumptions. Security goals are threatened by combinations of threats, as depicted in Figure 4. Similar to the dependency on other security goals, threats can either threaten a security goal independently of each other or require other threats to also execute successfully. For example, the integrity of a function can be threatened by eavesdropping on a message as attack preparation AND by subsequently replaying the eavesdropped message. We introduce the element "Combined Threats" to define these dependencies. A security goal is threatened by an arbitrary number of mutually independent "Combined Threats" nodes. Each "Combined Threats" node then relates to one or more threats. Threats provide the following attributes:

• Attack feasibility factors help to estimate the attack feasibility rating to realize the threat. The attack feasibility factors themselves are defined in the assessment model and, therefore, can be adapted to any standard or organizational needs. For example, the CEM [6] defines five attack feasibility factors for the estimation of the "required attack potential", i.e., Elapsed Time, Expertise, Knowledge of the TOE, Window of Opportunity, and the necessary Equipment, along with a set of predefined values (e.g., "Layman") and corresponding numeric values for each attack feasibility factor.
• Threatened security properties defines the security properties a threat might violate. For example, the threat "information disclosure" threatens the security property "confidentiality".

Threats act on the physical manifestation of the SUD. The physical aspects of the SUD are modeled as components and data flows (including wireless transmissions) as described in Subsection 4.1, while data and functions are processed by these elements. The "acts on" relation is not required to model a risk assessment, but useful to help human analysts understand the threats. Additionally, the model of the SUD combined with the threatened security properties can be used to identify and validate potentially violated security goals. For example, it is plausible to assume that a threat "information disclosure" on a data flow threatening the confidentiality of the transmitted data items affects the confidentiality of functions mapped onto the data items.
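The "Combined Threats" (and, analogously, "Combined Security Goals") structure corresponds to a disjunctive normal form: a security goal is successfully attacked if every member of at least one combined node is realized. A minimal sketch of this evaluation, with purely illustrative names:

```python
# Sketch: the "Combined Threats" relation read as a disjunctive normal form (DNF).
# Each "Combined Threats" node is an AND-group; the security goal is threatened successfully
# if every threat in at least one group is realized. Names are illustrative.

def goal_violated(combined_threats: list[set[str]], realized: set[str]) -> bool:
    return any(group <= realized for group in combined_threats)

# Integrity of a function, threatened by eavesdropping AND subsequently replaying a message:
ct = [{"Eavesdrop message", "Replay message"}]
print(goal_violated(ct, {"Eavesdrop message"}))                     # False
print(goal_violated(ct, {"Eavesdrop message", "Replay message"}))   # True
```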
Vulnerabilities are not represented with a metamodel element. In the context of MoRA, a threat causing a relevant risk that is not mitigated poses a vulnerability of the SUD.

(Cybersecurity) controls and assumptions mitigate threats. Combinations of these elements are represented by a "Combined Mitigations" node type. In some cases, controls and assumptions may be similar due to technical circumstances. Technical measures like channel encryption used by the SUD are usually modeled as controls. By contrast, laws of nature, responsibilities or controls of third parties, attacker capabilities, and the analysis limits are documented as assumptions. Controls and assumptions, similar to threats, provide attack feasibility factors and protected security properties. The attack feasibility factors facilitate the estimation of the control's or assumption's effect on the attack feasibility of related threats. Section 5 provides details on how to combine the attack feasibility factors in the SRA model. Additionally, controls as well as assumptions may cause changes in the impact of the violation of security goals, modeled as damage transformation. In this case, a damage scenario is replaced by another damage scenario or entirely removed by assigning no transformation target. For example, suppose a valve in a factory is controlled by network messages. In that case, an attacker might manipulate these messages to violate the security goal "integrity of valve control" and cause the damage scenario "explosion of a pressure tank". The control "opening the valve on locally measured high pressure" cannot prevent this manipulation, but effectively transforms the (source) damage scenario into the less critical target damage scenario "production outage." Assumptions may be used to bring information into the model that has not been explicitly modeled in the SUD but is important in its effect on the analysis, such as limitations of the assumed attacker model. Unlike controls, assumptions do not depend on security goals.

MoRA also supports catalogs for threat and control classes (cf. [8]). These classes may entail a pre-assessment of attack feasibility factors, estimating the attack feasibility to execute the threat or break the control. The threats and controls in the SRA can use these pre-assessed values but also override them to reflect the more specific context of the SRA. In addition to providing a common ground for security risk assessments, these catalogs also support the analyst in "not overlooking" known threats and validating conformity to regulatory prescribed threat and control catalogs.

Controls are implemented by the SUD and may thus depend on its security goals. For example, the control "digital signature" requires a component to create the signature and another component to check its validity. Consequently, instead of breaking the signature, an attacker can try to violate the security goal "confidentiality of the private key" on the signing component or the security goal "integrity of the certificate" on the component executing the signature check. Both attacks can circumvent the control "digital signature." We model this by introducing a dependency of controls on security goals or combinations of security goals, again using the "Combined Security Goals" node type. Consequently, impacts caused by the loss of confidentiality of cryptographic keys do not have to be estimated directly but are reflected by additional attack paths on controls. Therefore, if the impact rating for security goals of the protected functions, data items, or components changes, then this change is consistently reflected for the risks caused by attacks on cryptographic keys.
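The damage transformation effect described above can be pictured as a partial replacement on the set of damage scenarios; the formal definition (the function dt) follows in Section 5. The sketch below is illustrative only; a target value of None stands for a transformation with no transformation target, i.e., complete removal of the scenario.

```python
# Sketch of a damage transformation: the source damage scenario is replaced by the target
# scenario, or removed completely when no target is assigned. Names are illustrative.
from typing import Optional

def apply_transformation(scenarios: set[str], source: str, target: Optional[str]) -> set[str]:
    if source not in scenarios:
        return set(scenarios)              # transformation does not apply
    transformed = scenarios - {source}
    if target is not None:                 # None models "no transformation target"
        transformed.add(target)
    return transformed

# Valve example: "explosion of a pressure tank" becomes the less critical "production outage".
print(apply_transformation({"explosion of a pressure tank"},
                           "explosion of a pressure tank", "production outage"))
```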
Fig. 4. Complete metamodel including controls and assumptions: Threats can be mitigated by controls, assumptions, or combinations of these.

4.4 Analyze Risks

Figure 4 depicts the full metamodel with all node types and their relations to each other. We do not model risks as separate elements. Instead, risk levels can be determined for every security goal, threat, control, or damage scenario as described in Section 5. A risk level is determined by the combination of potential damages (impact rating) and the attack feasibility rating to cause these damages. The impact rating is determined by the impact criteria originating from the damage scenarios related to a risk. The attack feasibility rating is determined by the attack feasibility factors of the threats and the attack feasibility factors of the controls mitigating them.

In our practical experience, this model of the SUD, represented by functions, data, components, and data flows, is well-suited for SRAs and easy to understand for system developers. All nodes in the risk assessment core concept relate to this model. Security goals are properties of the SUD. Threats and controls act on the SUD, modeling the interaction with the system. As outlined in [3], the relations between the risk assessment elements can be validated by tracing them back to the model of the SUD. Similarly, [3] provides a method to propose new nodes and relations based on the model of the SUD. Consequently, the creation of risk assessment elements can partially be automated, requiring the analyst to check and modify the proposals and to specify proposed elements further.

4.5 Modeling Example

Figure 5 depicts an instance of a metamodel for a fictitious software update function. Note that the elements in the risk assessment and their relations can be defined without actually providing a graphical representation, e.g., in a tabular representation. This is important, as a full graph for a complete risk assessment possesses high complexity, owing to a large number of nodes with many relations between them. Thus, a complete graphical representation is typically difficult to process for a human analyst.
However, plotting selected parts graphically is helpful in our experience. Consequently, we chose a small example to highlight our approach's key features in a manageable fashion. Generally, we do not prescribe a dedicated concrete syntax for instances of the metamodel, as requirements differ between different application domains and organizational environments. Figure 6 depicts a screenshot of the itemis YAKINDU Security Analyst. The Security Analyst provides different concrete syntaxes to work on instances of the metamodel. A textual concrete syntax for threats is displayed in the upper half of the screenshot (titled "attack step" in the syntax). The second half demonstrates a graphical concrete syntax of the SUD.

The example describes a fictitious software update function in which a server pushes an update into a vehicle. Violation of the security goal "Integrity of the update function" can lead to a safety-related damage scenario "Uncontrollable vehicle" as well as a damage scenario "Unauthorized tuning" related to financial losses. This security goal is threatened by two independent attack paths: the first attack path encompasses the combination of the threat "Reverse Engineering" AND the threat "Man-in-the-Middle attack (mobile)". The latter threat acts on the mobile data flow between server and vehicle. The control "AES GCM" protects the confidentiality and the integrity of the transferred data and thus mitigates the Man-in-the-Middle (MitM) attack. However, the control also depends on the confidentiality of the data item "AES key". In our example, all vehicles share the same symmetric key. Therefore, the security goal "Confidentiality of AES key" is threatened by a key extraction attack on a single vehicle.

Fig. 5. This example shows an excerpt of an instance of our graph for a risk assessment of a fictitious software update function.

Fig. 6. A screenshot of concrete syntaxes provided by YAKINDU Security Analyst 21.1.

The second attack path complements the MitM attack on the data flow between server and vehicle with an attack on a data flow inside the vehicle, but still requires reverse engineering by the attacker. The control "AES GCM" does not protect data flows inside the vehicle, as it only acts on the data flow between server and vehicle. In this example, we also limit the attacker model to tuning-related attacks when physical access is needed. Consequently, the assumption "Tuning only" transforms the damage scenario "Uncontrollable vehicle" into the damage scenario "Unauthorized tuning". Note that damage transformation can also remove a damage scenario completely.
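As noted above, instances can also be captured in a tabular, non-graphical form. The following sketch encodes the excerpt of Figure 5 as plain records; the keys, identifiers such as "CT1" or "CM1", and the overall structure are ours for illustration and do not reproduce the concrete syntax of any tool.

```python
# Sketch: a tabular encoding of the Figure 5 excerpt (structure and keys are illustrative).
example = {
    "security_goals": {
        "SG_update":  {"property": "integrity", "asset": "Distribute SW Update",
                       "violation_causes": ["Uncontrollable vehicle", "Unauthorized tuning"],
                       "is_threatened_by": ["CT1", "CT2"]},
        "SG_aes_key": {"property": "confidentiality", "asset": "AES key",
                       "is_threatened_by": ["CT3"]},
    },
    "combined_threats": {
        "CT1": ["Reverse Engineering", "Man-in-the-Middle attack (mobile)"],
        "CT2": ["Reverse Engineering", "Manipulate data on the CAN bus"],
        "CT3": ["Key extraction"],
    },
    "combined_mitigations": {
        "CM1": {"encompasses": ["AES GCM"],
                "mitigates": "Man-in-the-Middle attack (mobile)",
                "depends_on": ["SG_aes_key"]},     # the control can be broken via the key
        "CM2": {"encompasses": ["Tuning only"],
                "mitigates": "Manipulate data on the CAN bus",
                "damage_transformation": ("Uncontrollable vehicle", "Unauthorized tuning")},
    },
}
print(len(example["combined_threats"]))   # 3
```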
5 PROPAGATION RULES AND RISK CALCULATION

In the previous sections, we defined the metamodel and provided an example for an instance of the graph. In this section, we provide rules to calculate risk levels for a specific graph. First, we give an intuition of the idea. Then we formalize the actual calculation based on the metamodel elements instantiated in a graph. We conclude this section with an example.

5.1 Intuition

In contrast to other SRA models, we aim to calculate the risk level for security goals, threats, controls, and damage scenarios individually. It is desirable to identify risk levels for all these elements, as identifying the most critical threats, the security goals and assets at highest risk, the weakest links among the controls, or the most critical damage scenarios all represent valuable information in making risk treatment decisions.

Calculation of risks requires two inputs: an estimation of the attack feasibility rating (based on attack feasibility factors) and an impact rating. These inputs are defined as attributes in separate metamodel elements of a graph instance: threats and controls entail attack feasibility factor attributes, and damage scenarios entail impact criteria as attributes. Impact criteria attributes, in turn, are mapped to impact ratings in the assessment model. This results in specific impact ratings being available in damage scenarios. Furthermore, controls and assumptions may cause a damage transformation selecting relevant damage scenarios. We use the relations between the risk assessment-specific elements in the graph to combine these attributes and calculate risk levels. The combination of attributes brings together the two required inputs for risk calculation. Note that the metamodel technically allows for the definition of circular dependencies, but for the presented approach, the metamodel instance must be a directed acyclic graph (DAG). In our practical experience on real-life projects, this does not impose relevant limitations on the modeling capabilities.

The basic idea is to let the values of attributes flow or propagate through the graph. Figure 7 shows the propagation of attack feasibility factors and damage transformations. Figure 8 shows calculated risk values propagating in the opposite direction along the edges. The sequence of nodes from a control, assumption, or threat to another node creates an attack path towards that node. Note that we call every such path of any length an attack path. We calculate risks for every attack path towards a security goal. Any of the following types of nodes can be on an attack path and receive as well as propagate values: Security Goal, Combined Security Goals, Assumption, Control, Combined Mitigation, Threat, Combined Threats. The metamodel elements not listed (Damage Scenario and Damage Transformation) are used to calculate risk values, but are not part of attack paths themselves. Multiple attack paths can lead to a damage scenario (effectively forming an attack tree as part of the graph). The propagation rules define how to accumulate the values of attributes along the attack paths and how to combine multiple attack paths with each other. Consequently, changing an attribute at a node causes the risk values of all related nodes in the graph to be updated accordingly.
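Since the propagation described next processes a node once all of its inputs are available, a tool can verify the DAG requirement and derive an evaluation order in the same step. A minimal sketch using Kahn's algorithm follows; the adjacency-list input format and all node names are assumptions of this illustration, not part of the metamodel.

```python
# Sketch: check the DAG requirement and derive an evaluation order (Kahn's algorithm).
from collections import deque

def topological_order(edges: dict[str, list[str]]) -> list[str]:
    """edges maps a node ID to the IDs it propagates attack paths to; raises on cycles."""
    nodes = set(edges) | {t for targets in edges.values() for t in targets}
    indegree = {n: 0 for n in nodes}
    for targets in edges.values():
        for t in targets:
            indegree[t] += 1
    ready = deque(n for n, d in indegree.items() if d == 0)    # "leaf" nodes come first
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for t in edges.get(n, []):
            indegree[t] -= 1
            if indegree[t] == 0:
                ready.append(t)
    if len(order) != len(nodes):
        raise ValueError("metamodel instance contains a cycle; it must be a DAG")
    return order

# One chain of the example: Key extraction -> CT3 -> Confidentiality of key -> CSG1
# -> AES GCM -> CM1 -> MitM (mobile) -> CT1 -> Integrity of the Update function.
print(topological_order({"Key extraction": ["CT3"], "CT3": ["Conf. of key"],
                         "Conf. of key": ["CSG1"], "CSG1": ["AES GCM"], "AES GCM": ["CM1"],
                         "CM1": ["MitM (mobile)"], "MitM (mobile)": ["CT1"],
                         "CT1": ["Integrity of Update"]}))
```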
As noted above, the algorithm has two parts. In the first part (attack feasibility factor and damage transformation propagation), we create all attack paths of maximum length. This step starts in those assumptions, controls, and threats without any "depends on" or "is mitigated by" relations (these nodes are referred to as "leafs"). The algorithm first applies the calculation rules as defined below to all of the "leaf" nodes. As a result of these rules, a set of attack paths, all including the current node, is created as output and passed along the edge to the next node as input. Every node applies the propagation rules whenever all input sets (for all incoming edges) are available. Eventually, all nodes in the graph that can be part of an attack path are covered and the first part of the algorithm is finished. The second part (risk propagation) works on the graph in the opposite direction. The algorithm starts in security goals, calculates risks for all incoming attack paths (propagated in step 1), and then propagates risks along the incoming attack paths of these security goals. Security goals possess relations to damage scenarios as a basis for the impact rating. However, the incoming attack paths might have transformation effects on these scenarios. After the effects are applied, a risk level is calculated for each attack path. The highest risk determines the risk level for the security goal itself. The risk level for each attack path subsequently travels along the attack path through the graph. This modeling and calculation approach therefore enables risk decisions that include very complex dependencies. The following calculation instructions allow for an automated implementation.
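The two parts can be read as a forward pass and a backward pass over the graph. The skeleton below only illustrates that control flow; the per-node-type rules of Tables 1 to 3 are represented by placeholder callbacks, the input sets are flattened for brevity (the "Combined" nodes actually combine the per-source sets), and all names are ours rather than taken from our implementation.

```python
# Sketch of the two-part algorithm as a driver skeleton (illustrative only).

def forward_propagation(order, incoming, propagate_rule):
    """Part 1: visit nodes in an order where a node fires once all of its inputs are known."""
    outputs = {}
    for node in order:                                        # "leaf" nodes come first
        input_paths = [p for src in incoming.get(node, []) for p in outputs[src]]
        outputs[node] = propagate_rule(node, input_paths)     # Table 1 rule for this node type
    return outputs

def risk_propagation(security_goals, attack_paths, risk_rule):
    """Part 2: start in security goals and push risk results back along each attack path."""
    results = {}                                              # node ID -> list of risk levels
    for goal in security_goals:
        for path in attack_paths[goal]:
            risk, nodes_on_path = risk_rule(goal, path)       # Table 2 rule
            for node in nodes_on_path | {goal}:
                results.setdefault(node, []).append(risk)
    return {node: max(risks) for node, risks in results.items()}   # Table 3: keep the maximum
```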
5.2 Calculation

We split the description of calculation rules into two parts. First, we start with the propagation of attack feasibility factors and damage transformations, as shown in Figure 7, with the rules presented in Table 1. Then we provide the propagation of risk levels, which flow in the opposite direction along the attack paths, as shown in Figure 8, with the rules presented in Tables 2 and 3. The practical calculation also happens in this order.

Fig. 7. The propagation of attack feasibility factors and damage transformation through the graph.

Fig. 8. The risks are propagated selectively, following the origins of the attack paths.

5.2.1 Propagation of Attack Feasibility Factors and Damage Transformations. The first type of propagation concerns attack feasibility factors. As described in Section 4, they help to estimate the attack feasibility rating. The effort required by an attacker is at least the effort required to attack the initial node on the path, but usually increases along the path as different steps have to be taken, requiring more effort. Similarly, a node that causes damage transformation propagates this effect along the attack path. The propagation starts in assumptions, as well as in controls and threats without any "depends on" or "is mitigated by" relations. Nodes with the term "Combined" in their node type combine the incoming attack paths as defined below. Threats affect incoming attack paths by adding their own attack feasibility factors to the attack path. Mitigations (controls or assumptions) always generate one or more new attack paths: the first attack path breaks the mitigation via its attack feasibility factors. In this case, the mitigation's attack feasibility factors are propagated in a new attack path. The second attack path is generated if the mitigation has damage transformation effects. This attack path leaves the mitigation in place and propagates the damage transformation effect. If the mitigation has incoming attack paths, then these paths break the mitigation via its dependencies. Consequently, these attack paths are propagated without changes to the attack feasibility factors or damage transformation effects. Security goals always propagate attack paths without changes to the attack feasibility factors or damage transformation effects. Attack paths usually terminate in security goals, but may also terminate in other nodes (e.g., when a threat does not violate any security goals). Table 1 defines the propagation rules for attack feasibility factors and damage transformation effects.

We use the following notation and definitions: Let $r$ denote a tuple of attack feasibility factor values used to determine the attack feasibility rating. For example, using an approach based on attack potentials (cf. [6]): given an assessment model with $k$ attack feasibility factors, let $r_i := (r_{i,1}, \ldots, r_{i,k})$ denote a $k$-tuple of attack feasibility factor values, where $r_{i,j}$ represents the value of attack feasibility factor $j$ for the attack feasibility factor tuple $r_i$. $\hat{r}$ represents minimum values for each attack feasibility factor. For example, using an approach based on attack potentials (cf. [6]), this results in the tuple $(0, \ldots, 0)$. Every node in the graph has a unique ID $n$.
The ID 0 is reserved for "no node." We use the "no node" concept for a damage transformation that completely removes a damage scenario. $N$ represents the set of all node IDs. $S \subset N$ represents the set of all Damage Scenario node IDs (including 0), and $D \subset N$ represents the set of all Damage Transformation node IDs. A Damage Transformation node $d$ has a relation "has source" to exactly one Damage Scenario node $s \in S$ and another relation "has target" to another Damage Scenario node $s' \in S$. Let $src: D \to S$ return a damage transformation's source node $s$ and $tgt: D \to S$ return its target node $s'$. The damage transformation function $dt: (D, S) \to S$ uses a Damage Transformation node $d \in D$ and a Damage Scenario node $s \in S$ as input and provides a Damage Scenario node as output. It is defined as $dt[d, s] := s$ if $s \neq src[d]$, and $dt[d, s] := tgt[d]$ if $s = src[d]$.

Let $P := (r, T, M)$ define an attack path, where $T \subseteq D$ represents a set of damage transformation nodes and $M$ represents the set of IDs of the nodes traversed on the attack path. In other words, $P$ defines the effects on risk accumulated in a single attack path towards a node in the graph and combines an attack effort (attack feasibility factor values in $r$) with zero or more damage transformation effects in $T$. Note that, given the acyclic nature of the graph, storing the IDs in a set is sufficient to reconstruct a full attack path from a given starting point. Note that several attack paths may contain the same set of nodes, as, e.g., controls with damage transformation effects create two attack paths.

Let $R$ denote the set of all attack feasibility factor value tuples $r$. Then $afmax: R^m \to R$ denotes a function which takes an arbitrary number $m \in \mathbb{N}$ of attack feasibility factor value tuples $r_1, \ldots, r_m$ as input and calculates the maximum for each attack feasibility factor. For example, using an approach based on attack potentials (cf. [6]) as attack feasibility factors, we obtain

$afmax[r_1, \ldots, r_m] := (\max[r_{1,1}, \ldots, r_{m,1}], \ldots, \max[r_{1,k}, \ldots, r_{m,k}])$.

Let $\mathcal{P}$ denote the set of all attack paths $P$. Then $cpths: \mathcal{P}^m \to \mathcal{P}$ denotes a function which takes an arbitrary number $m \in \mathbb{N}$ of attack paths $P_1, \ldots, P_m$ as input and calculates the maximum value for each attack feasibility factor, while damage transformation effects are unaffected and all nodes on the path are remembered. More precisely,

$cpths[P_1, \ldots, P_m] := (afmax[r_1, \ldots, r_m],\ \bigcup_{i=1..m} T_i,\ \bigcup_{i=1..m} M_i)$.

The output of $cpths[]$ is itself an attack path which accumulates the values and effects of all the inputs. Finally, let $A$ denote a set of propagated attack paths. For each node, the set of all incoming attack paths represents the input set of the calculation step. This input set is then combined with the node's own values to propagate a number of attack paths along the graph in the node's output set. Table 1 provides specifics on the calculation and propagation rules. Note that variables are re-defined for each node and node type (e.g., a node's unique ID is always $n_0$ in the node's scope). Names of metamodel elements are printed in bold.

Threat
Input: sources are the $m$ connected Combined Mitigation nodes. Value: $A := \bigcup_{i=1..m} A_i$ (the union of the incoming attack paths). The $m$ connected Combined Mitigation nodes propagate $m$ attack path sets $A_i$. These sets contain a total of $p = \sum_{i=1..m} |A_i|$ attack paths $P_j = (r_j, T_j, M_j)$, $j = 1..p$.
Local: $r_0$ := the tuple of the node's own attack feasibility factor values; $n_0$ := the node's unique ID.
Output: targets are all connected Combined Threats nodes. Value for $p = 0$ (a "leaf" node): $\{(r_0, \emptyset, \{n_0\})\}$, i.e., one attack path with the threat's own values. Value for $p > 0$: $\bigcup_{P \in A} \{cpths[(r_0, \emptyset, \{n_0\}), P]\} = \bigcup_{j=1..p} \{cpths[(r_0, \emptyset, \{n_0\}), (r_j, T_j, M_j)]\} = \bigcup_{j=1..p} \{(afmax[r_0, r_j], \emptyset \cup T_j, \{n_0\} \cup M_j)\}$, i.e., the threat propagates $p$ attack paths, where each of the $p$ attack paths in the input set $A$ is combined with the threat's attack feasibility factor values $r_0$ and node ID $n_0$.

Combined Threats
Input: sources are the $m$ connected Threat nodes. Value: the $m$ output sets $A_i$ of the $m$ connected Threats, each with $|A_i|$ attack paths, for a total of $p = \sum_{i=1..m} |A_i|$ attack paths.
Local: $r_0 := \hat{r}$ (no own attack feasibility factor values); $n_0$ := the node's unique ID.
Output: targets are all connected Security Goal nodes. Value: $A := \{cpths[P_1, \ldots, P_m, (r_0, \emptyset, \{n_0\})] \mid P_i \in A_i \text{ for every } i \in \{1, \ldots, m\}\}$. The output $A$ contains $q := \prod_{i=1..m} |A_i|$ attack paths: it encompasses all possible combinations of incoming attack paths for each of the $m$ connected threats. Every threat contributes $|A_i|$ different attack paths. By combining all choices of selecting a single attack path for each threat, $q$ different attack paths (with combined attack feasibility factor values) are created for the output set. The node's unique ID $n_0$ is added to each attack path.

Security Goal
Input: sources are the $m$ connected Combined Threats nodes and the $l$ connected Combined Security Goals nodes. Value: $A := \bigcup_{i=1..m} A_i \cup \bigcup_{j=1..l} B_j$ (the union of the incoming attack paths). The $m$ connected Combined Threats nodes propagate $m$ attack path sets $A_i$. Additionally, the $l$ connected Combined Security Goals nodes propagate $l$ attack path sets $B_j$, resulting in a total of $p = \sum_{i=1..m} |A_i| + \sum_{j=1..l} |B_j|$ incoming attack paths.
Local: $r_0 := \hat{r}$ (no own attack feasibility factor values); $n_0$ := the node's unique ID.
Output: targets are all connected Combined Security Goals nodes. Value for $p = 0$: $\emptyset$ (as the node is not attacked, nothing is propagated). Value for $p > 0$: $\bigcup_{P \in A} \{cpths[P, (r_0, \emptyset, \{n_0\})]\}$, i.e., the node adds its unique ID $n_0$ to each attack path but does not influence the attack feasibility factors (as $r_0$ is set to the minimum for each value) or the damage transformation effects.

Combined Security Goals
Input: sources are the $m$ connected Security Goal nodes. Value: the $m$ attack path sets $A_i$ of the $m$ connected Security Goals, each with $|A_i|$ attack paths, for a total of $p = \sum_{i=1..m} |A_i|$ attack paths.
Local: $r_0 := \hat{r}$ (no own attack feasibility factor values); $n_0$ := the node's unique ID.
Output: targets are all connected Security Goal and Control nodes. Value: $A := \{cpths[P_1, \ldots, P_m, (r_0, \emptyset, \{n_0\})] \mid P_i \in A_i \text{ for every } i \in \{1, \ldots, m\}\}$. The output $A$ contains $q := \prod_{i=1..m} |A_i|$ attack paths: it encompasses all possible combinations of incoming attack paths for each of the $m$ connected security goals. Every security goal contributes $|A_i|$ different attack paths. By combining all choices of selecting a single attack path for each security goal, $q$ different attack paths (with combined attack feasibility factor values) are created for the output set. The node's unique ID $n_0$ is added to each attack path.
Control
Input: sources are the $m$ connected Combined Security Goals nodes. Value: $A := \bigcup_{i=1..m} A_i$ (the union of the incoming attack paths). The $m$ connected Combined Security Goals nodes propagate $m$ attack path sets $A_i$. These sets contain a total of $p = \sum_{i=1..m} |A_i|$ attack paths $P_j$.
Local: $r_0$ := the tuple of the node's own attack feasibility factor values; $T_0$ := the set of the node's damage transformation effects; $n_0$ := the node's unique ID.
Output: targets are all connected Combined Mitigation nodes. Value for $p = 0$ (a "leaf" node): if $T_0 = \emptyset$: $\{(r_0, \emptyset, \{n_0\})\}$; else: $\{(r_0, \emptyset, \{n_0\}), (\hat{r}, T_0, \{n_0\})\}$, i.e., two attack paths for a control with at least one damage transformation effect, or one attack path for a control without. Note that $\hat{r}$ represents the tuple of minimal attack feasibility factor values. Value for $p > 0$: if $T_0 = \emptyset$: $\bigcup_{P \in A} \{cpths[(\hat{r}, \emptyset, \{n_0\}), P]\} \cup \{(r_0, \emptyset, \{n_0\})\}$; else: $\bigcup_{P \in A} \{cpths[(\hat{r}, \emptyset, \{n_0\}), P]\} \cup \{(r_0, \emptyset, \{n_0\}), (\hat{r}, T_0, \{n_0\})\}$, i.e., a control without damage transformation effects propagates $p + 1$ attack paths. This includes the $p$ attack paths in the input set $A$ combined with the control's node ID $n_0$. As the control is broken via its dependencies, the control's attack feasibility factor value tuple $r_0$ is not added to these paths. The $(p+1)$th propagated attack path is the same as for a "leaf" node, as the control is not broken via its dependencies in this case. A control with damage transformation effects propagates one additional $(p+2)$th attack path without the node's attack feasibility factor values, but with its damage transformation effects $T_0$ and node ID $n_0$. This reflects an attacker's option to accept the control's effects on damages instead of breaking the control.

Assumption
Input: none (assumptions are always "leaf" nodes). Value: $A := \emptyset$ (no incoming attack paths).
Local: $T_0$ := the set of the node's damage transformation effects; $n_0$ := the node's unique ID; $r_0$ := the tuple of the node's own attack feasibility factor values OR $\bot$ (where $\bot$ means that the assumption always causes a damage transformation effect).
Output: targets are all connected Combined Mitigation nodes. Value: if $r_0 = \bot$ and $T_0 = \emptyset$: $\emptyset$; if $r_0 = \bot$ and $T_0 \neq \emptyset$: $\{(\hat{r}, T_0, \{n_0\})\}$; if $r_0 \neq \bot$ and $T_0 = \emptyset$: $\{(r_0, \emptyset, \{n_0\})\}$; if $r_0 \neq \bot$ and $T_0 \neq \emptyset$: $\{(\hat{r}, T_0, \{n_0\}), (r_0, \emptyset, \{n_0\})\}$, i.e., no attack paths for an assumption without effects, one attack path for an assumption with only one effect, and two attack paths for an assumption with attack feasibility factor values and a damage transformation effect.

Combined Mitigations
Input: sources are the $m$ connected Control or Assumption nodes. Value: the $m$ output sets $A_i$ of the $m$ connected nodes (Controls / Assumptions), each with $|A_i|$ attack paths, for a total of $p = \sum_{i=1..m} |A_i|$ attack paths.
Local: $r_0 := \hat{r}$ (no own attack feasibility factor values); $n_0$ := the node's unique ID.
Output: targets are all connected Threat nodes. Value: $A := \{cpths[P_1, \ldots, P_m, (r_0, \emptyset, \{n_0\})] \mid P_i \in A_i \text{ for every } i \in \{1, \ldots, m\}\}$. The output $A$ contains $q := \prod_{i=1..m} |A_i|$ attack paths: it encompasses all possible combinations of incoming attack paths for each of the $m$ connected mitigations (controls / assumptions). Every mitigation contributes $|A_i|$ different attack paths. By combining all choices of selecting a single attack path for each mitigation, $q$ different attack paths (with combined attack feasibility factor values) are created for the output set. The node's unique ID $n_0$ is added to each attack path.

Table 1. Propagation rules for attack feasibility factors and damage transformation effects.
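The combination functions afmax and cpths and two representative Table 1 rules (a Threat node and a "Combined" node) translate directly into small set operations. The sketch below is illustrative: attack paths are modeled as (factors, transformations, node IDs) triples, the example feasibility values are made up, and the two combined threats are connected directly for brevity rather than via the full chain of the example graph.

```python
# Sketch of afmax, cpths, and two propagation rules from Table 1 (illustrative names/values).
from itertools import product

def afmax(*tuples):
    """Component-wise maximum over attack feasibility factor value tuples."""
    return tuple(max(values) for values in zip(*tuples))

def cpths(*paths):
    """Combine attack paths: maximum factors, union of transformations and node IDs."""
    factors, transforms, nodes = zip(*paths)
    return (afmax(*factors), set().union(*transforms), set().union(*nodes))

def propagate_threat(r0, n0, incoming):
    """Threat rule: add the own factors and ID to every incoming path, or start a new one."""
    own = (r0, set(), {n0})
    return [own] if not incoming else [cpths(own, p) for p in incoming]

def propagate_combined(n0, r_min, input_sets):
    """'Combined' rule: one output path per combination of one path from each input set."""
    own = (r_min, set(), {n0})
    return [cpths(*choice, own) for choice in product(*input_sets)]

# A leaf threat with hypothetical factors (elapsed time, expertise, equipment) = (1, 3, 0),
# combined with a second threat whose own values are (4, 0, 2); the result carries the
# maximum per factor and the union of the node IDs.
leaf = propagate_threat((1, 3, 0), "Key extraction", [])
print(propagate_threat((4, 0, 2), "MitM (mobile)", leaf))
# e.g. [((4, 3, 2), set(), {'Key extraction', 'MitM (mobile)'})]
```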
5.2.2 Risk Propagation. With the attack feasibility factor values and the damage transformation effects completely propagated, the calculation and propagation of risk levels takes place. The initial calculation happens in Security Goal nodes without an incoming "encompasses" relation from a Combined Security Goals node (i.e., a "leaf" node for risk propagation). For each attack path, we calculate the attack feasibility rating based on the attack path's attack feasibility factor values. Next, we apply all damage transformation effects of the selected attack path to the Security Goal node's related Damage Scenario nodes (following the relation "violation causes"). The resulting damage scenarios provide impact criteria, which in turn provide impact ratings. The highest of these impact ratings defines the impact rating for the selected attack path. Combining this impact rating with the attack feasibility rating of the attack path provides a risk level for the attack path. The highest risk value of any of a node's attack paths defines the risk for the node itself. The node then propagates all risk levels along its respective attack paths to propagate the results through the graph.

We use the following additional notation for these rules: Let $f := afr[r]$ calculate the attack feasibility rating for an attack feasibility factor value tuple $r$. Note that this function is part of the assessment model and thus not defined here. Let $dc[s] := C$ return the set $C$ of impact criteria assigned to a damage scenario $s$. Let $i := imr[C]$ calculate the impact rating for a set of impact criteria $C$, based on the assessment model. For example, $imr[]$ might return the maximum over a set of impact ratings assigned to each impact criterion in the assessment model (where higher numbers imply higher impacts). Let $v := rl[f, i]$ calculate the risk level $v$ for an attack feasibility rating $f$ and an impact rating $i$. Let $e := (v, M)$ represent a risk level result for an attack path with risk level $v$ and a set of node IDs $M$. Finally, let $E$ denote a set of risk level results. Tables 2 and 3 define the specifics for the risk level calculation inside a Security Goal node and the risk level result propagation. Table 2 also describes the risk level propagation to Damage Scenario nodes (see the second "Output" in the table). Note that instead of propagating risks hop by hop on an attack path, we propagate selected risks directly from Security Goal nodes. The derived risk level result for each attack path is propagated to each node on this attack path. This simplifies the propagation rules, while preserving the semantics of Figure 8.

Security Goal
Input: sources are all $m$ Security Goals where this node is on an attack path (see the first Output in this table). Value: $E := \bigcup_{i=1..m} E_i$ (the union of the incoming results). The $m$ Security Goals propagate $m$ result sets $E_i$; these sets contain a total of $q = \sum_{i=1..m} |E_i|$ results $e_j = (v_j, M_j)$.
Input: $A$ := the set of $p$ incoming attack paths $(r_j, T_j, M_j)$ for $j = 1..p$ (see Table 1).
Local: $S_0$ := the set of connected Damage Scenario nodes; $n_0$ := the node's unique ID.
Derived: $S_{j,0} := \bigcup_{d \in T_j, s \in S_0} \{dt[d, s]\}$, the set of transformed damage scenarios for attack path $j$ after one iteration; $S_{j,x} := \bigcup_{d \in T_j, s \in S_{j,x-1}} \{dt[d, s]\}$, the set of transformed damage scenarios after $x + 1$ iterations; $S_j := S_{j,x}$ with $S_{j,x} = S_{j,x-1}$, the resulting set of transformed damage scenarios.
Note that this allows cycles or ambiguous situations. It is up to the analyst creating the model to prevent or resolve such issues. $S_j = S_0$ if $T_j = \emptyset$. Furthermore: $i_j := imr[\bigcup_{s \in S_j} dc[s]]$, the impact rating for attack path $j$; $f_j := afr[r_j]$, the attack feasibility rating for attack path $j$; $v_j := rl[f_j, i_j]$, the risk level for attack path $j$; $E_0 := \bigcup_{j=1..|A|} \{(v_j, M_j)\}$, the set of all risk level results for all incoming attack paths in $A$; $v := \max[\{v_j \mid (v_j, M_j) \in E_0\} \cup \{v_j \mid (v_j, M_j) \in E\}]$, the security goal's risk value, i.e., the maximum of the node's own risk levels and the risk levels propagated to the node.
Output: targets are all nodes $n_t$ on the incoming attack paths with $n_t \in M_j$ and $(v_j, M_j) \in E_0 \cup E$. Value: $E_t := \{(v_j, M_j \cup \{n_0\}) \mid (v_j, M_j) \in E_0 \cup E \text{ and } n_t \in M_j\}$. Propagate to the target node $n_t$ the results $E_t$ for all attack paths (local and propagated) that contain $n_t$; the security goal's ID $n_0$ is added to each path. Note that $E_t$ might be empty (for a "leaf" node with $p = 0$).
Output: targets are the $|S_j|$ connected Damage Scenario nodes $s \in S_j$. Value: $E_s := \{(rl[f_j, imr[dc[s]]], M_j) \mid f_j = afr[r_j] \text{ for all } j \in \{1, \ldots, p\} \text{ and } s \in S_j\}$. Propagate to each connected damage scenario $s \in S_j$ the result set $E_s$, containing a risk value and the nodes $M_j$ on the attack path. The risk value is calculated from the attack feasibility rating $f_j$ for that attack path and the damage rating $imr[dc[s]]$ for damage scenario $s$. Note that $S_j$ contains the damage scenarios for an attack path after damage transformation.

Table 2. Propagation rules for risk results of Security Goal nodes.

Combined Security Goals, Combined Threats, Threat, Combined Mitigation, Control or Assumption
Input: sources are all $m$ Security Goals where this node is on an attack path (see Table 2). Value: $E := \bigcup_{i=1..m} E_i$ (the union of the incoming results). The $m$ Security Goals propagate $m$ result sets $E_i$; these sets contain a total of $q = \sum_{i=1..m} |E_i|$ results $e_j = (v_j, M_j)$.
Local: $n_0$ := the node's unique ID.
Derived: $v := \max[\{v_j \mid (v_j, M_j) \in E\}]$, the node's risk value, i.e., the maximum of the risk levels propagated to the node.
Output: targets: none. Value: none.

Table 3. Propagation rules for risk results of Combined Security Goals, Combined Threats, Threat, Combined Mitigation, Control or Assumption nodes.
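Read operationally, the Table 2 rule for a single security goal amounts to: transform the damage scenarios per attack path, rate impact and attack feasibility, combine them into a risk level, and keep the maximum. The following sketch illustrates this under strong simplifications: the rating functions afr, imr, and rl belong to the assessment model and are replaced here by arbitrary placeholders, and all names and scales are invented for the illustration.

```python
# Sketch of the Table 2 calculation for one security goal: apply each attack path's damage
# transformations, rate impact and attack feasibility, and keep the maximum risk level.

def security_goal_risks(attack_paths, damage_scenarios, transformations, afr, imr, rl):
    """attack_paths: list of (factors, transformation_ids, node_ids);
    transformations: id -> (source_scenario, target_scenario or None for removal)."""
    results = []
    for factors, t_ids, nodes in attack_paths:
        scenarios = set(damage_scenarios)
        for _ in range(len(t_ids) + 1):          # bounded iteration; the model must avoid cycles
            before = set(scenarios)
            for t in t_ids:
                src, tgt = transformations[t]
                if src in scenarios:
                    scenarios.discard(src)
                    if tgt is not None:
                        scenarios.add(tgt)
            if scenarios == before:
                break
        risk = rl(afr(factors), imr(scenarios))  # risk level for this attack path
        results.append((risk, nodes))
    node_risk = max(r for r, _ in results)       # the security goal's own risk value
    return node_risk, results                    # the per-path results are then propagated

# Toy assessment model (placeholders, not the paper's): higher numbers mean higher ratings.
impact_of = {"Uncontrollable vehicle": 4, "Unauthorized tuning": 2}
risk, per_path = security_goal_risks(
    attack_paths=[((4, 3, 2), set(), {"CT1"}), ((2, 1, 0), {"DT1"}, {"CT2"})],
    damage_scenarios={"Uncontrollable vehicle"},
    transformations={"DT1": ("Uncontrollable vehicle", "Unauthorized tuning")},
    afr=lambda f: max(f),
    imr=lambda s: max(impact_of[x] for x in s),
    rl=lambda feasibility, impact: min(feasibility, impact))
print(risk, per_path)   # 4 [(4, {'CT1'}), (2, {'CT2'})]
```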
Fig. 9. Propagation of attack factors and damage transformation shown on the software update example. Thick borders mark "leaf" nodes.

5.3 Example

We use our example presented in Subsection 4.5 and depict the propagation of attack feasibility factors and damage transformation effects within that example in Figure 9. Thick borders mark the three nodes considered as "leaf" nodes for this propagation. These leafs serve as starting points for the propagation. Note that we use unique identifiers for the propagated attack paths in this example and not the indices enumerating elements in a set of attack paths. Rounded boxes with solid borders denote sets of outgoing attack paths, while rounded boxes with dashed borders denote sets of incoming attack paths. Local attributes are depicted as circles. This results in three attack paths for the security goal "Integrity of the Update function":

[1] One way to attack this security goal is to extract the private key, conduct reverse engineering, and manipulate the encrypted and signed software update as man-in-the-middle on the mobile connection. The threat "Key extraction" has its attack feasibility factor values $r_1$. These are propagated through the security goal "Confidentiality of key" to the control "AES GCM" that depends on this security goal. This results in a broken control and consequently does not add the control's attack feasibility factor values to the attack path. The attack path is then propagated to the threat "Man-in-the-Middle attack (mobile)". This threat has its attack feasibility factor values $r_7$.
$afmax[r_1, r_7]$ calculates the maximum value for each attack feasibility factor, while $afr[afmax[r_1, r_7]]$ calculates the attack feasibility rating. To threaten the target security goal "Integrity of the Update function", the threat "Reverse Engineering" with its attack feasibility factors $r_8$ needs to be combined with the MitM threat in node "CT1", and thus a total attack feasibility rating of $afr[afmax[r_1, r_7, r_8]]$ is calculated for this attack path. The traversed node IDs are accumulated along the path, while no damage transformation effects are encountered.

[2] A second attack is to break the control "AES GCM" with its attack feasibility factor values $r_5$ (not by extracting the key, but, e.g., by brute-forcing it because of a short key length), conduct reverse engineering, and manipulate the encrypted and signed software update as man-in-the-middle on the mobile connection. This results in a total attack feasibility rating of $afr[afmax[r_5, r_7, r_8]]$, no damage transformation effects, and a different set of node IDs compared to attack path 1.

[3] A third attack is to conduct reverse engineering and manipulate the software update on the CAN bus inside the vehicle, where no encryption is applied. For this attack, the assumption "Tuning only" causes the damage transformation node "DT1", transforming "Uncontrollable vehicle" to "Unauthorized tuning". This is propagated to the threat "Manipulate data on the CAN bus", which possesses the attack feasibility factor values $r_{11}$. Together with "Reverse Engineering" this combines to the total of $afr[afmax[r_8, r_{11}]]$. The damage transformation set for this attack path is {DT1}.

A risk level is calculated for each of the three attack paths, based on each attack path's attack feasibility rating as well as the damage associated with each damage scenario after damage transformation. For our three attack paths, this results in:
• $rl[afr[afmax[r_1, r_7, r_8]], imr[dc[s_{15}], dc[s_{16}]]]$ (attack path 1)
• $rl[afr[afmax[r_5, r_7, r_8]], imr[dc[s_{15}], dc[s_{16}]]]$ (attack path 2)
• $rl[afr[afmax[r_8, r_{11}]], imr[dc[s_{16}]]]$ (attack path 3).
Furthermore, each of these risks is propagated to all nodes on the respective attack path. The highest of these risks determines the risk level for each node. Finally, a risk level is calculated for each attack path combined with the resulting damage scenarios after damage transformation on that attack path, i.e.:
• $rl[afr[afmax[r_1, r_7, r_8]], imr[dc[s_{15}]]]$ (attack path 1, damage scenario with ID 15)
• $rl[afr[afmax[r_1, r_7, r_8]], imr[dc[s_{16}]]]$ (attack path 1, damage scenario with ID 16)
• $rl[afr[afmax[r_5, r_7, r_8]], imr[dc[s_{15}]]]$ (attack path 2, damage scenario with ID 15)
• $rl[afr[afmax[r_5, r_7, r_8]], imr[dc[s_{16}]]]$ (attack path 2, damage scenario with ID 16)
• $rl[afr[afmax[r_8, r_{11}]], imr[dc[s_{16}]]]$ (attack path 3, damage scenario with ID 16).

6 CONCLUSION

Security risk analyses are becoming a mandatory development step in many domains due to international regulations. This is already the case in the automotive domain due to the new UN Regulation No. 155 [36]. Implementing the necessary processes for systematically evaluating complex systems, such as modern cars, is a demanding task. We address the question of how to implement such a process with a proven, model-based approach.
The structure of the presented model, in combination with the methodical approach, forms a common basis upon which system and security engineers can jointly develop and assess an instanced model. To this aim, we mix two artifact types, system and security properties, with limited expressiveness, which proved well applicable in our experience. Our metamodel encompasses the SUD itself, composed of functions, data elements, components, and data flows. A fixed set of relations links these elements. We extend this metamodel to include the elements specific for SRAs (security goals, threats, controls, and assumptions) and additional relations. Thus, we achieve an integrated representation of the SUD as well as its security properties in the context of an SRA. To properly consider the often-intricate dependencies and influences, we introduce a set of propagation rules. Consequently, the relations between the security-specific elements can be validated by tracing them back to the elements of the SUD. A consequence of this mixed-artifacts strategy is that security modeling requires repeating or extending several steps of functional modeling, i.e., creating use case diagrams and flowcharts, which is not an easily automatable task.

In contrast to other approaches that separate elements and relations of the SUD and the security-specific elements, we provide an integrated perspective that allows users to assess the level of risk and the impact of threats, controls, and assumptions in a qualified manner. Local or iterative changes to the model rarely require changes to other elements due to the modular structure. This supports the maintainability of analyses, reduces the follow-up efforts on updates or new findings, and improves comprehensibility. To demonstrate our method, we present an application on a small fictitious example. In the absence of suitable evaluation criteria for the quality of analyses, it is not yet possible to measure the quality of the applied approach beyond that. The development of such evaluation criteria is the subject of ongoing research. For this purpose, we intend to re-analyze existing assessments to identify relevant properties. However, we collected evidence of our method's suitability in several hundred real-life security risk assessments in projects with industrial customers. We conducted security risk assessments for the development of vehicle functions and ECUs, industrial components, IT systems, and IoT devices over the course of ten years and continuously improved the method based on our own experience and the feedback of our customers.

Limitations of our approach include an increase in the complexity of the resulting models, requiring security experts to apply the method. In the future, expressive and case-specific SRAs will no longer be sufficient. It will be required to infer between different models and evaluate them simultaneously to cope with complex, integrated systems. This will require additional methods. Likewise, expertise is required to tailor the assessment model and catalogs to a company's needs for best results. Additionally, achieving higher precision often comes at the price of increasing model complexity. Creation and maintenance of these models gain from tool support, such as the YAKINDU Security Analyst. Although initially developed for automotive security risk analysis, we successfully applied the proposed structure and representation as graphs in other domains, such as industrial security.
REFERENCES
[1] Daniel Angermeier, Kristian Beilke, Gerhard Hansch, and Jörn Eichler. 2019. Modeling Security Risk Assessments. In 17th Embedded Security in Cars (escar Europe) (Stuttgart, Germany). Ruhr-Universität Bochum, Bochum, Germany, 133–146. https://doi.org/10.13154/294-6670
[2] Daniel Angermeier, Alexander Nieding, and Jörn Eichler. 2016. Supporting Risk Assessment with the Systematic Identification, Merging, and Validation of Security Goals. In International Workshop on Risk Assessment and Risk-driven Testing. Springer, Cham, Germany, 82–95.
[3] Daniel Angermeier, Alexander Nieding, and Jörn Eichler. 2016. Systematic Identification of Security Goals and Threats in Risk Assessment. Softwaretechnik-Trends 36, 3 (2016). http://pi.informatik.uni-siegen.de/stt/36_3/./01_Fachgruppenberichte/Ada/02_Angermeier.pdf
[4] Daniel Angermeier, Alexander Nieding, and Jörn Eichler. 2017. Supporting Risk Assessment with the Systematic Identification, Merging, and Validation of Security Goals. In Risk Assessment and Risk-Driven Quality Assurance, Jürgen Großmann, Michael Felderer, and Fredrik Seehusen (Eds.). Springer International Publishing, Cham, 82–95.
[5] George E. P. Box. 1979. Robustness in the Strategy of Scientific Model Building. In Robustness in Statistics, Robert L. Launer and Graham N. Wilkinson (Eds.). Elsevier, Madison, WI, USA, 201–236. https://doi.org/10.1016/B978-0-12-438150-6.50018-2
[6] Common Criteria Editorial Board. 2017. Common Methodology for Information Technology Security Evaluation: Evaluation Methodology (3.1r5 ed.). Standard. Common Criteria.
[7] J. Eichler. 2015. Model-based Security Engineering for Electronic Business Processes. Ph.D. Dissertation. Technische Universität München. http://nbn-resolving.de/urn/resolver.pl?urn:nbn:de:bvb:91-diss-20150721-1238308-1-8
[8] Jörn Eichler and Daniel Angermeier. 2015. Modular Risk Assessment for the Development of Secure Automotive Systems. In 31. VDI/VW-Gemeinschaftstagung Automotive Security (VDI-Berichte, Vol. 2263). VDI, Düsseldorf, 81–90.
[9] Benjamin Fabian, Seda Gürses, Maritta Heisel, Thomas Santen, and Holger Schmidt. 2010. A comparison of security requirements engineering methods. Requirements Engineering 15, 1 (2010), 7–40. https://doi.org/10.1007/s00766-009-0092-x
[10] Shamal Faily, John Lyle, Cornelius Namiluko, Andrea Atzeni, and Cesare Cameroni. 2012. Model-driven Architectural Risk Analysis Using Architectural and Contextualised Attack Patterns. In Proceedings of the Workshop on Model-Driven Security (Innsbruck, Austria) (MDsec '12). ACM, New York, NY, USA, Article 3, 6 pages. https://doi.org/10.1145/2422498.2422501
[11] Jack Freund and Jack Jones. 2015. Measuring and Managing Information Risk: A FAIR approach. Butterworth-Heinemann, Oxford, UK.
[12] Dimitris Gritzalis, Giulia Iseppi, Alexios Mylonas, and Vasilis Stavrou. 2018. Exiting the Risk Assessment Maze: A Meta-Survey. ACM Comput. Surv. 51, 1, Article 11 (Jan. 2018), 30 pages. https://doi.org/10.1145/3145905
[13] Mohammad Hamad and Vassilis Prevelakis. 2020. SAVTA: A hybrid vehicular threat model: Overview and case study. Information 11, 5 (May 2020), 273.
[14] Gerhard Hansch, Peter Schneider, and Gerd S. Brost. 2019. Deriving Impact-driven Security Requirements and Monitoring Measures for Industrial IoT. In 5th ACM Cyber-Physical System Security Workshop (Auckland, New Zealand) (CPSS '19). ACM, New York, NY, USA, 37–45. https://doi.org/10.1145/3327961.3329528
[15] Hannes Holm, Khurram Shahzad, Markus Buschle, and Mathias Ekstedt. 2015. P2CySeMoL: Predictive, Probabilistic Cyber Security Modeling Language. IEEE Transactions on Dependable and Secure Computing 12, 6 (Nov. 2015), 626–639. https://doi.org/10.1109/TDSC.2014.2382574
[16] IEC. 2020. IEC 62443-3-2:2020 Security for industrial automation and control systems – Part 3-2: Security risk assessment for system design. Standard. International Electrotechnical Commission and others, Geneva, CH.
[17] ISO/SAE. 2021. ISO/SAE 21434:2021 Road Vehicles – Cybersecurity engineering. Standard. International Organization for Standardization, Geneva, CH.
[18] Loren Kohnfelder and Praerit Garg. 1999. The threats to our products. Technical Report. Microsoft Interface. https://adam.shostack.org/microsoft/The-Threats-To-Our-Products.docx
[19] Barbara Kordy, Ludovic Piètre-Cambacédès, and Patrick Schweitzer. 2014. DAG-based attack and defense modeling: Don't miss the forest for the attack trees. Computer Science Review 13 (Nov. 2014), 1–38. https://doi.org/10.1016/j.cosrev.2014.07.001
[20] Michael Krisper, Jürgen Dobaj, Georg Macher, and Christoph Schmittner. 2019. RISKEE: A Risk-Tree Based Method for Assessing Risk in Cyber Security. In Systems, Software and Services Process Improvement – 26th European Conference, EuroSPI 2019, Edinburgh, UK, September 18-20, 2019, Proceedings. Springer, Cham, Germany, 45–56. https://doi.org/10.1007/978-3-030-28005-5_4
[21] Katsiaryna Labunets, Fabio Massacci, and Alessandra Tedeschi. 2017. Graphical vs. Tabular Notations for Risk Models: On the Role of Textual Labels and Complexity. In Proceedings of the 11th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (Markham, Ontario, Canada) (ESEM '17). IEEE Press, Piscataway, NJ, USA, 267–276. https://doi.org/10.1109/ESEM.2017.40
[22] Mass Soldal Lund, Bjørnar Solhaug, and Ketil Stølen. 2010. Model-Driven Risk Analysis: the CORAS approach. Springer Science & Business Media, Berlin Heidelberg, Germany. https://doi.org/10.1007/978-3-642-12323-8
[23] Feng Luo, Shuo Hou, Xuan Zhang, Zhenyu Yang, and Wenwen Pan. 2020. Security Risk Analysis Approach for Safety-Critical Systems of Connected Vehicles. Electronics 9, 8 (Aug. 2020), 1242.
[24] Georg Macher, Harald Sporer, Reinhard Berlach, Eric Armengaud, and Christian Kreiner. 2015. SAHARA: A Security-Aware Hazard and Risk Analysis Method. In Proceedings of the 2015 Design, Automation and Test in Europe Conference (Grenoble, France) (DATE '15). EDA Consortium, San Jose, CA, USA, 621–624.
[25] Charlie Miller and Chris Valasek. 2013. Adventures in automotive networks and control units. Def Con 21 (2013), 260–264.
[26] Jean-Philippe Monteuuis, Aymen Boudguiga, Jun Zhang, Houda Labiod, Alain Servel, and Pascal Urien. 2018. SARA: Security automotive risk analysis method. In Proceedings of the 4th ACM Workshop on Cyber-Physical System Security. Association for Computing Machinery, New York, NY, USA, 3–14.
[27] Martin J. O'Connor and Amar K. Das. 2009. SQWRL: A Query Language for OWL. In Proceedings of the 6th International Conference on OWL: Experiences and Directions (Chantilly, VA, USA) (OWLED'09, Vol. 529). CEUR-WS.org, Aachen, Germany, 208–215. http://dl.acm.org/citation.cfm?id=2890046.2890072
[28] N. Poolsappasit, R. Dewri, and I. Ray. 2012. Dynamic Security Risk Management Using Bayesian Attack Graphs. IEEE Transactions on Dependable and Secure Computing 9, 1 (2012), 61–74. https://doi.org/10.1109/TDSC.2011.34
[29] Alastair Ruddle, Benjamin Weyl, Sajid Idrees, Y. Roudier, Michael Friedewald, Timo Leimbach, A. Fuchs, S. Gürgens, O. Henninger, Roland Rieke, M. Ritscher, H. Broberg, L. Apvrille, R. Pacalet, and Gabriel Pedroza. 2009. Security requirements for automotive on-board networks based on dark-side scenarios. Deliverable D2.3: EVITA. E-safety vehicle intrusion protected applications. Fraunhofer ISI (Jan. 2009).
[30] Christoph Schmittner, Thomas Gruber, Peter Puschner, and Erwin Schoitsch. 2014. Security Application of Failure Mode and Effect Analysis (FMEA). In Computer Safety, Reliability, and Security, Andrea Bondavalli and Felicita Di Giandomenico (Eds.). Springer International Publishing, Cham, 310–325.
[31] Adam Shostack. 2014. Threat Modeling: Designing for Security. John Wiley and Sons, Indianapolis, IN, USA.
[32] Teodor Sommestad, Mathias Ekstedt, and Hannes Holm. 2013. The Cyber Security Modeling Language: A Tool for Assessing the Vulnerability of Enterprise System Architectures. IEEE Systems Journal 7, 3 (Dec. 2013), 363–373. https://doi.org/10.1109/JSYST.2012.
[33] Amina Souag, Camille Salinesi, Raúl Mazo, and Isabelle Comyn-Wattiau. 2015. A Security Ontology for Security Requirements Elicitation. In Engineering Secure Software and Systems (ESSoS 2015) (Milan, Italy), Frank Piessens, Juan Caballero, and Nataliia Bielova (Eds.). Springer International Publishing, Cham, Germany, 157–177. https://doi.org/10.1007/978-3-319-15618-7_13
[34] UNECE WP.29 TF CS and OTA. 2020. UN Regulation on uniform provisions concerning the approval of vehicles with regards to cyber security and cyber security management system. Proposal. UN World Forum for the Harmonization of Vehicle Regulations (WP.29).
[35] Jan Wolf, Felix Wieczorek, Frank Schiller, Gerhard Hansch, Norbert Wiedermann, and Martin Hutle. 2016. Adaptive Modelling for Security Analysis of Networked Control Systems. In 4th International Symposium for ICS & SCADA Cyber Security Research (Belfast, UK) (ICS-CSR '16). BCS Learning & Development, Swindon, UK, 64–73. https://doi.org/10.14236/ewic/ICS2016.8
[36] UNECE GRVA WP29. 2021. UN Regulation No. 155 – Cyber security and cyber security management system. Technical Report. United Nations.
[37] Peng Xie, Jason H. Li, Xinming Ou, Peng Liu, and Renato Levy. 2010. Using Bayesian networks for cyber security analysis. In 2010 IEEE/IFIP International Conference on Dependable Systems & Networks (DSN). IEEE Computer Society, Los Alamitos, CA, USA, 211–220. https://doi.org/10.1109/DSN.2010.5544924
