Table 1.

A set of guiding questions to be considered before employing a simulation model in economic cybersecurity decision-making. The table summarizes the preceding sections, which provide further detail on each point.

Upholding data integrity and avoiding bias
• Is the data representative and collected using robust methods?
• Are there biases in the data collection process?
• How does the model handle incomplete or uncertain data?
• What mechanisms are in place to update data sources and ensure ongoing relevance?

Precision and scope of data
• Does the data align with the scope of the decision problem?
• Is the data granular enough for the specific aspects being simulated?
• Are there processes to periodically reassess the relevance and accuracy of the data?
• How does the model accommodate data from different and potentially conflicting sources?

Assumptions about statistical distributions and independence
• Are the assumptions about distributions and event independence justified?
• How do these assumptions impact the simulation outcomes?
• Is there a process for regularly reviewing and updating these assumptions?
• How are outliers and anomalies in the data handled in the model?
• Does the model support causal or evidential decision-making? Is the model transparent enough to answer this question?

Modeling asymmetries and actor behavior
• Are there unjustified asymmetries in how different actors or market dynamics are modeled?
• Is the model's representation of actor behavior (e.g. risk-neutral or risk-averse) appropriate?
• Does the model account for potential changes in actor behavior or preferences over time?
• Are there considerations for unexpected or sudden market changes?

Equilibrium and dynamic modeling
• Does the model appropriately account for nonequilibrium conditions and transitions?
• Are steady-state assumptions justified?

Information assumptions
• Are the assumptions about the availability and accuracy of information within the model realistic?
• How do these assumptions affect the simulation results?

Transparency and reproducibility
• Are the model and its underlying mechanics documented in a way that allows for reproducibility?
• Are the sources of data and the methodologies used clearly stated?
• Are there guidelines for interpreting the results of the model?

Contextual appropriateness
• Is the level of abstraction appropriate for the cybersecurity context being simulated?
• Does the model account for the specificities of the cyber ecosystem under study?
• How does the model integrate interdisciplinary knowledge (e.g. technological, sociological, and psychological)?

Adaptability and updating
• Can the model be easily updated to reflect new data or changes in the cyber threat landscape?
• How flexible is the model in adapting to new scenarios or information?
• Is there a feedback mechanism for users to suggest improvements or report issues?

Validation and testing
• Has the model been validated against real-world data or scenarios?
• Are there mechanisms in place for continuous testing and improvement of the model?

Deployment
• Have the model's readiness and relevance for deployment and operational use been assessed?
• Are the model's limitations and their impacts documented and communicated to the users of the model?
• What training and knowledge transfer are required for the model's users to effectively interpret and apply the model's results? How will this training be delivered?
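The questions on distributional assumptions can be made concrete with a small Monte Carlo sketch. The example below is purely illustrative and uses hypothetical numbers: it compares two annual-loss models calibrated to the same mean loss (100 units) but with different tail assumptions, showing how the choice of distribution alone can change a simulated tail-risk estimate even when the expected loss is identical.

```python
import math
import random
import statistics

rng = random.Random(42)  # fixed seed for reproducibility
N = 100_000              # number of simulated years

# Light-tailed assumption: exponential losses with mean 100.
light = [rng.expovariate(1 / 100) for _ in range(N)]

# Heavy-tailed assumption: lognormal losses, parameterized so the mean
# is also 100. For a lognormal, mean = exp(mu + sigma^2 / 2), so with
# sigma = 2 we set mu = ln(100) - sigma^2 / 2.
sigma = 2.0
mu = math.log(100) - sigma**2 / 2
heavy = [rng.lognormvariate(mu, sigma) for _ in range(N)]

def p99(xs):
    """Empirical 99th percentile of a sample."""
    return sorted(xs)[int(0.99 * len(xs))]

# Both samples have (approximately) the same mean, but the heavy-tailed
# model produces a far larger 99th-percentile loss.
print(f"light tail: mean={statistics.mean(light):8.1f}  99th pct={p99(light):10.1f}")
print(f"heavy tail: mean={statistics.mean(heavy):8.1f}  99th pct={p99(heavy):10.1f}")
```

A decision based only on expected loss would treat the two models as interchangeable; a decision based on tail risk (e.g. for insurance or capital-reserve purposes) would not, which is precisely why the distributional assumptions in the checklist warrant explicit justification and review.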