Background

Healthcare discrete-event simulation

In healthcare, discrete-event simulation (DES) is the most widely used computational modelling method [Philip et al., 2022, Roy et al., 2021, Salleh et al., 2017, Salmon et al., 2018]. DES has proven useful in health research because it can model patient care pathways, optimise health service delivery, investigate healthcare queuing systems, and support health technology assessment. It has been applied to a wide variety of important clinical and health problems, such as stroke care [Lahr et al., 2020], emergency departments [Mohiuddin et al., 2017], chronic obstructive pulmonary disease [Hoogendoorn et al., 2021], sexual health [Mohiuddin et al., 2020], reducing delayed discharges [Onen-Dumlu et al., 2022], mental health waiting times, critical care [Penn et al., 2020], managing health services during the Covid-19 pandemic [Yakutcan et al., 2022], and end-of-life care [Chalk et al., 2021]. Healthcare DES models are often complex research artifacts: they are time-consuming to build, depend on specialist software, and their logic may be difficult to describe accurately using words and diagrams alone [Monks et al., 2019].

Published computer models: study motivation

To enhance transparency of model logic, and to offer others the potential to understand, learn from, or reuse a model, one option available to authors of healthcare DES studies is to openly publish the computer model. We define a computer model as either a model written in a specialist simulation software package or a model written in a general-purpose programming language. The computer model is an executable artifact that implements the study's conceptual model [Robinson, 2014] and is used for experimentation.

The current extent and practice of sharing DES computer models in the healthcare literature is unknown. To understand if and how authors of DES studies are sharing their models, draw lessons, and evaluate whether this can be improved to benefit the wider community, we conduct a review of the contemporary DES literature between 2019 and 2022 inclusive. Reviews in other computational fields report that the sharing of model code and files has historically been low [Brailsford et al., 2019, Collberg and Proebsting, 2016, Janssen et al., 2020, Rahmandad and Sterman, 2012, Stodden et al., 2018]. The closest of these fields to healthcare DES is agent (or individual) based simulation [Janssen et al., 2020]. This study examined 7500 articles reporting agent-based models and found that only 11% of articles shared model code, although there was an upward trend: by 2018, 18% of ABS publications shared their model in some form.

Sharing models is a subset of reproducibility

Our focus in this study is on the practice of sharing healthcare DES computer models: to what extent do health researchers openly share their DES computer models, how do they do it, and what actions could the DES community take to improve what is shared? We consider the open publication of models to be a subset of, and complementary to, the broader topic of the reproducibility of computational analyses and modelling. There has been a long-standing effort to provide incentives for authors to make their computational work reproducible [Ayllón et al., 2021, Grimm et al., 2010, Heroux, 2015, Marco A. Janssen, 2008, Monks et al., 2019, Reinhardt et al., 2018, Ruscheinski et al., 2020, The Turing Way Community, 2022]. One of the best known within the modelling and simulation community is the Association for Computing Machinery's (ACM) Replicated Computational Results (RCR) initiative (https://www.acm.org/publications/policies/artifact-review-and-badging-current). The RCR is an optional extra peer-review process for authors who publish in ACM journals. Computational artifacts, i.e. models or algorithms, are peer reviewed by specialists, and publications are awarded badges based on the results. ACM RCR badges include: Artifacts Evaluated (as functional or reusable), Artifacts Available (deposited in a FORCE11-compliant [Smith et al., 2016] archive such as the Open Science Framework), and Results Validated (either using the author-provided artifacts or, at a higher level, using independent methods).

Initiatives such as the RCR are limited to specific journals, but health researchers publish, and may share DES models, in a wide variety of outlets: for example, mathematical, medical and clinical, health economics, health policy, and operational research journals, as well as specialist conferences that publish full peer-reviewed papers (such as the Winter Simulation Conference). In journals without RCR-style support it is unlikely that model artifacts are peer reviewed. Authors who share models may instead be guided by discipline norms, journal rules, open research guides such as the Turing Way [The Turing Way Community, 2022], or one of several DES reporting guidelines [Eddy et al., 2012, Monks et al., 2019, Zhang et al., 2020].

The DES reporting guidelines take different approaches to the publication of DES computer models. The International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making (ISPOR-SMDM; [Eddy et al., 2012]) encourage authors to make non-confidential versions of their computer models available to enhance transparency, but state that open models should not be a formal requirement of publication. The task force notes a number of reasons why code might not be shareable, including intellectual property and cost. The Strengthening the Reporting of Empirical Simulation Studies guidelines (STRESS-DES; [Monks et al., 2019]) take the position that model code is an enhancement to transparency, not a requirement. The STRESS checklist asks for detailed information on the software environment and computing hardware used to execute the model. Section 6 goes further and requires a statement on how the computer model can be accessed. This is intended to prompt authors to think about enhanced transparency, even in journals that do not ask for “code and data availability statements”. The reporting checklist developed by [Zhang et al., 2020] focuses only on logic and validation reporting, and does not prompt users for information on model code.

One exception to the position that model publication is an enhancement rather than a requirement is perhaps models tackling Covid-19. At the start of the coronavirus pandemic, the lack of transparency of, and access to, epidemiological model code used to inform economic and public health policy contributed to public confusion and polarisation. This has led some to call for open publication of all model code related to any aspect of Covid-19 [Sills et al., 2020].

State-of-the-art practices for sharing computer model artifacts

The sharing of code, simulated experiments, and computer model artifacts, and the reproducibility of published results, are live topics in other computational fields such as neuroscience, the life sciences, and ecology [Cadwallader and Hrynaszkiewicz, 2022, Eglen et al., 2017, Halchenko and Hanke, 2015, Heil et al., 2021, Krafczyk et al., 2021]. Outside of the academic literature there are recent community-driven guides, standards, and digital repositories. These include the Turing Way developed by the Alan Turing Institute [The Turing Way Community, 2022], the Open Modeling Foundation standards (https://www.openmodelingfoundation.org/), and the ability to deposit models with the Network for Computational Modeling in Social and Ecological Sciences (CoMSES Net; https://www.comses.net/). The state of the art for sharing model artifacts is emerging and evolving, although in biostatistics it has been discussed as far back as 2011 [Peng, 2011]. Although this recent literature is disparate, when brought together it agrees on a number of practices that benefit the ability of others to find, access, reuse, and freely adapt shared model artifacts.

Contemporary sharing of computer model artifacts is best done through a digital open science repository that has FORCE11-compliant citation [Smith et al., 2016] and guarantees on the persistence of digital artifacts [Lin et al., 2020]. Examples include Zenodo (https://zenodo.org/), Figshare (https://figshare.com/), the Open Science Framework (https://osf.io/), and CoMSES Net. Deposited models are provided with a permanent Digital Object Identifier (DOI) that can be used to cite the artifact. Researchers should already be familiar with DOIs, as they are minted and allocated to published journal articles. An example is 10.1016/j.envsoft.2020.104873, which identifies an article by [Janssen et al., 2020]. The advantage of this approach is that the exact code cited in the journal article is preserved (while authors remain free to work on new versions of the code). A related concept is the Open Researcher and Contributor ID (ORCID) [Taylor et al., 2017]: a unique identifier for an individual researcher. A trusted archive will accommodate ORCIDs within the metadata of a deposited artifact, providing an unambiguous permanent link back to the authors of the artifact. This offers an improvement over e-mail addresses listed with a journal article, which may become outdated shortly after publication.
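
As an illustrative sketch of how this metadata travels with a deposited model, a repository upload can include a CITATION.cff file linking the artifact to its DOI and the authors' ORCIDs. Every name, the ORCID, and the DOI below are placeholders, not taken from any study in this review:

```yaml
# CITATION.cff -- illustrative citation metadata for a deposited model.
# Title, author, ORCID, and DOI are placeholders.
cff-version: 1.2.0
message: "If you use this model, please cite it using this metadata."
title: "Example healthcare DES model"
authors:
  - family-names: Smith
    given-names: Jane
    orcid: "https://orcid.org/0000-0000-0000-0000"
version: "1.0.0"
doi: 10.5281/zenodo.0000000
```

Archives such as Zenodo and GitHub can read this format, so the citation others see always matches the deposited version of the artifact.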

Published models should also be accompanied by an open license [Eglen et al., 2017, Halchenko and Hanke, 2015, Heil et al., 2021]. A license details how others may use or adapt the artifact(s), re-share any adaptations, and credit the authors. At a minimum, a license specifies the terms of use for a model and disclaims the authors' liability if the artifact is reused. There are many standard licenses to choose from. For example, licensors of models might choose between a permissive license (e.g. MIT or BSD 2-Clause) and a copyleft license (e.g. GNU General Public License v3). An alternative that is often used with open data and open access publication, but is also relevant for models, is a Creative Commons license such as CC-BY 4.0 [Taylor et al., 2017].

Permissive and copyleft licenses are also used by DES packages developed as Free and Open Source Software (FOSS). Note that FOSS is more than open source code: it grants users the freedom to reuse, adapt, and distribute copies however they choose. Examples include the R package simmer (GPL-2), SimPy (MIT), and JaamSim (Apache 2.0). For an overview of FOSS packages for DES see [Dagkakis and Heavey, 2016].

To maximise the chances that another user can execute a computer artifact, a model's dependencies and software environment must be specified [Heil et al., 2021, Krafczyk et al., 2021]. This can be challenging: many computational artifacts rely on other software that may be operating-system specific. Formal methods exist to manage dependencies. These range in complexity from package managers, such as conda or renv, to containerisation (where a model, its parameters, an operating system, and dependencies are deployed via a container using software such as Docker), to open science gateways that allow remote execution [Taylor et al., 2017]. Such methods may be best suited to computational artifacts written in code, for example a simulation model in Python or R. Models developed in commercial Visual Interactive Modelling packages such as Arena or Simul8 rely on software with strict proprietary licensing stipulations (i.e. paid licenses), but the software and operating system versions can be reported within the metadata of the deposited artifact. Several commercial simulation packages now provide cloud versions of their software where users may upload a computer model and allow others to execute it without installation. However, such tools do not adhere to the guarantees provided by a trusted digital repository such as Zenodo.
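
As a minimal sketch of dependency pinning for a coded model (the environment name and package versions here are illustrative, not drawn from any reviewed study), a conda environment file might look like:

```yaml
# environment.yml -- illustrative pinned environment for a Python DES model.
name: des-model
channels:
  - conda-forge
dependencies:
  - python=3.10
  - simpy=4.0.1
  - pandas=1.5.3
```

Another user can then recreate the environment with `conda env create -f environment.yml` before executing the model, rather than guessing which package versions the authors used.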

Execution of a computer model artifact should be guided by a clear set of instructions: for example, the inclusion of a README file that gives an overview of what the model does, how to execute it, and how to vary its parameters [Cadwallader and Hrynaszkiewicz, 2022, Eglen et al., 2017]. Documentation of code-only models could be further enhanced with notebooks that combine code and explanation [Ayllón et al., 2021, The Turing Way Community, 2022].
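
A minimal README outline covering these points (the section names below are only a suggestion, not a prescribed standard) might be:

```markdown
# Example DES model

## Overview
One or two paragraphs describing what the model simulates.

## Installation
Steps to set up the software environment and dependencies.

## How to run
The command or steps that execute the model, and where results appear.

## Varying parameters
Which parameters can be changed, where they are set, and their defaults.
```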

Lastly, if coded models are to be trusted, reused, or adapted, then some form of testing and verification should be included with the shared model [The Turing Way Community, 2022]. Test-driven development is one option for the simulation community [Onggo and Karatas, 2016].
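
As a minimal sketch of what shipping tests with a model can look like (the queue logic and function names here are hypothetical, not drawn from any reviewed study), a deterministic unit test can verify a known property of the model logic, such as first-in-first-out service at a single server:

```python
def departure_times(arrival_times, service_time):
    """Hypothetical single-server FIFO queue with a fixed service time.

    Returns the departure time of each job: a job starts service at its
    arrival time or when the server becomes free, whichever is later.
    """
    departures = []
    server_free_at = 0.0
    for arrival in arrival_times:
        start = max(arrival, server_free_at)  # wait if the server is busy
        server_free_at = start + service_time
        departures.append(server_free_at)
    return departures


def test_second_job_waits():
    # Job 2 arrives at t=1 but cannot start until job 1 departs at t=2,
    # so it departs at t=4.
    assert departure_times([0.0, 1.0], service_time=2.0) == [2.0, 4.0]
```

Tests like this can be run with a framework such as pytest, giving a future user a fast check that the shared model still behaves as documented in their own environment.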

Time, effort, and alternatives

The state-of-the-art methods and benefits outlined above come at a cost in time and effort. Publishing a computer model artifact alongside a journal article may also prompt authors to clean up code and models ready for sharing. There is some evidence that the time authors are willing to spend on this varies with experience, with more established authors willing to spend more time than those with fewer publications [Cadwallader and Hrynaszkiewicz, 2022]. Authors may choose to adopt one or a combination of the practices recommended by the literature; more complex methods require more effort. For example, in a small trial the journal [] report that it took a median of nine days for authors to set up an online executable version of their computational artifacts using the containerisation and compute services provided by Code Ocean [Editorial, 2019]. In contrast, depositing code or a model in a trusted digital archive such as Zenodo requires only the time to upload the files and the effort to add metadata such as ORCIDs.

A simpler alternative to direct publication of computer models is a Data Availability Statement (DAS). A DAS describes how others might access the computational artifacts used within the research; for example, “the materials used within this study are available from the corresponding author on reasonable request”. A substantial downside is that such offers to share are frequently not honoured, even in journals mandating reproducibility standards [Collberg and Proebsting, 2016, Gabelica et al., 2022, Stodden et al., 2018]. In the largest simulation review to date, the researchers contacted all authors of ABS papers that included a DAS. Fewer than 1% of authors responded by providing their code, and most of those who did respond indicated that their model was no longer available, or failed to provide a runnable version [Janssen et al., 2020]. Outside of simulation modelling, other disciplines have reported varying results when contacting authors of papers with a DAS, with positive response rates of 7% [Gabelica et al., 2022], 33% [Collberg and Proebsting, 2016], and 44% [Stodden et al., 2018].

References

ARG+21

Daniel Ayllón, Steven F. Railsback, Cara Gallagher, Jacqueline Augusiak, Hans Baveco, Uta Berger, Sandrine Charles, Romina Martin, Andreas Focks, Nika Galic, Chun Liu, E. Emiel van Loon, Jacob Nabe-Nielsen, Cyril Piou, J. Gareth Polhill, Thomas G. Preuss, Viktoriia Radchuk, Amelie Schmolke, Julita Stadnicka-Michalak, Pernille Thorbek, and Volker Grimm. Keeping modelling notebooks with TRACE: Good for you and good for environmental research and management support. Environmental Modelling & Software, 136:104932, February 2021. URL: https://www.sciencedirect.com/science/article/pii/S1364815220309890 (visited on 2023-04-25), doi:10.1016/j.envsoft.2020.104932.

BEK+19

Sally C. Brailsford, Tillal Eldabi, Martin Kunc, Navonil Mustafee, and Andres F. Osorio. Hybrid simulation modelling in operational research: A state-of-the-art review. European Journal of Operational Research, 278(3):721–737, November 2019. URL: https://www.sciencedirect.com/science/article/pii/S0377221718308786 (visited on 2023-04-03), doi:10.1016/j.ejor.2018.10.025.

CH22

Lauren Cadwallader and Iain Hrynaszkiewicz. A survey of researchers’ code sharing and code reuse practices, and assessment of interactive notebook prototypes. PeerJ, 10:e13933, August 2022. URL: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9406794/ (visited on 2023-04-25), doi:10.7717/peerj.13933.

CRK+21

Daniel Chalk, Sara Robbins, Rohan Kandasamy, Kate Rush, Ajay Aggarwal, Richard Sullivan, and Charlotte Chamberlain. Modelling palliative and end-of-life resource requirements during covid-19: implications for quality care. BMJ Open, 2021. URL: https://bmjopen.bmj.com/content/11/5/e043795, arXiv:https://bmjopen.bmj.com/content/11/5/e043795.full.pdf, doi:10.1136/bmjopen-2020-043795.

CP16

Christian Collberg and Todd A. Proebsting. Repeatability in computer systems research. Commun. ACM, 59(3):62–69, feb 2016. URL: https://doi.org/10.1145/2812803, doi:10.1145/2812803.

DH16

Georgios Dagkakis and Cathal Heavey. A review of open source discrete event simulation software for operations research. Journal of Simulation, 10(3):193–206, 2016.

EHC+12

David M. Eddy, William Hollingworth, J. Jaime Caro, Joel Tsevat, Kathryn M. McDonald, and John B. Wong. Model transparency and validation: a report of the ispor-smdm modeling good research practices task force–7. Medical Decision Making, 32(5):733–743, 2012. PMID: 22990088. URL: https://doi.org/10.1177/0272989X12454579, arXiv:https://doi.org/10.1177/0272989X12454579, doi:10.1177/0272989X12454579.

Edi19

Editorial. Changing coding culture. Nature Biotechnology, 37(5):485–485, May 2019. Number: 5 Publisher: Nature Publishing Group. URL: https://www.nature.com/articles/s41587-019-0136-9 (visited on 2023-04-04), doi:10.1038/s41587-019-0136-9.

EMH+17

Stephen J. Eglen, Ben Marwick, Yaroslav O. Halchenko, Michael Hanke, Shoaib Sufi, Padraig Gleeson, R. Angus Silver, Andrew P. Davison, Linda Lanyon, Mathew Abrams, Thomas Wachtler, David J. Willshaw, Christophe Pouzat, and Jean-Baptiste Poline. Toward standard practices for sharing computer code and programs in neuroscience. Nature Neuroscience, 20(6):770–773, June 2017. Number: 6 Publisher: Nature Publishing Group. URL: https://www.nature.com/articles/nn.4550 (visited on 2023-04-25), doi:10.1038/nn.4550.

GBP22

Mirko Gabelica, Ružica Bojčić, and Livia Puljak. Many researchers were not compliant with their published data sharing statement: a mixed-methods study. Journal of Clinical Epidemiology, 150:33–41, October 2022. URL: https://www.sciencedirect.com/science/article/pii/S089543562200141X (visited on 2023-04-25), doi:10.1016/j.jclinepi.2022.05.019.

GBD+10

Volker Grimm, Uta Berger, Donald L. DeAngelis, J. Gary Polhill, Jarl Giske, and Steven F. Railsback. The ODD protocol: A review and first update. Ecological Modelling, 221(23):2760–2768, November 2010. URL: https://www.sciencedirect.com/science/article/pii/S030438001000414X (visited on 2023-06-02), doi:10.1016/j.ecolmodel.2010.08.019.

HH15

Yaroslav O. Halchenko and Michael Hanke. Four aspects to make science open "by design" and not as an after-thought. GigaScience, 4:31, 2015. doi:10.1186/s13742-015-0072-7.

HHM+21

Benjamin J. Heil, Michael M. Hoffman, Florian Markowetz, Su-In Lee, Casey S. Greene, and Stephanie C. Hicks. Reproducibility standards for machine learning in the life sciences. Nature Methods, 18(10):1132–1135, October 2021. Number: 10 Publisher: Nature Publishing Group. URL: https://www.nature.com/articles/s41592-021-01256-7 (visited on 2023-04-25), doi:10.1038/s41592-021-01256-7.

Her15

Michael A. Heroux. Editorial: ACM TOMS Replicated Computational Results Initiative. ACM Transactions on Mathematical Software, 41(3):13:1–13:5, June 2015. URL: https://dl.acm.org/doi/10.1145/2743015 (visited on 2023-06-02), doi:10.1145/2743015.

HRS+21

Martine Hoogendoorn, Isaac Corro Ramos, Stéphane Soulard, Jennifer Cook, Erkki Soini, Emma Paulsson, and Maureen Rutten-van Mölken. Cost-effectiveness of the fixed-dose combination tiotropium/olodaterol versus tiotropium monotherapy or a fixed-dose combination of long-acting β2-agonist/inhaled corticosteroid for copd in finland, sweden and the netherlands: a model-based study. BMJ Open, 2021. URL: https://bmjopen.bmj.com/content/11/8/e049675, arXiv:https://bmjopen.bmj.com/content/11/8/e049675.full.pdf, doi:10.1136/bmjopen-2021-049675.

JPL20

Marco A Janssen, Calvin Pritchard, and Allen Lee. On code sharing and model documentation of published individual and agent-based models. Environmental Modelling & Software, 134:104873, 2020.

KSB+21

M. S. Krafczyk, A. Shi, A. Bhaskar, D. Marinov, and V. Stodden. Learning from reproducing computational results: introducing three principles and the Reproduction Package. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 379(2197):20200069, March 2021. Publisher: Royal Society. URL: https://royalsocietypublishing.org/doi/10.1098/rsta.2020.0069 (visited on 2023-04-25), doi:10.1098/rsta.2020.0069.

LvdZLB20

Maarten M H Lahr, Durk-Jouke van der Zee, Gert-Jan Luijckx, and Erik Buskens. Optimising acute stroke care organisation: a simulation study to assess the potential to increase intravenous thrombolysis rates and patient gains. BMJ Open, 2020. URL: https://bmjopen.bmj.com/content/10/1/e032780, arXiv:https://bmjopen.bmj.com/content/10/1/e032780.full.pdf, doi:10.1136/bmjopen-2019-032780.

LCD+20

Dawei Lin, Jonathan Crabtree, Ingrid Dillo, Robert R. Downs, Rorie Edmunds, David Giaretta, Marisa De Giusti, Hervé L’Hours, Wim Hugo, Reyna Jenkyns, Varsha Khodiyar, Maryann E. Martone, Mustapha Mokrane, Vivek Navale, Jonathan Petters, Barbara Sierman, Dina V. Sokolova, Martina Stockhause, and John Westbrook. The TRUST Principles for digital repositories. Scientific Data, 7(1):144, May 2020. Number: 1 Publisher: Nature Publishing Group. URL: https://www.nature.com/articles/s41597-020-0486-7 (visited on 2023-04-25), doi:10.1038/s41597-020-0486-7.

MAJ08

Marco A. Janssen and Lilian Na'ia Alessa. Towards a Community Framework for Agent-Based Modelling. March 2008. Publisher: JASSS. URL: https://jasss.soc.surrey.ac.uk/11/2/6.html (visited on 2023-06-02).

MBSavovic+17

Syed Mohiuddin, John Busby, Jelena Savović, Alison Richards, Kate Northstone, William Hollingworth, Jenny L Donovan, and Christos Vasilakis. Patient flow within uk emergency departments: a systematic review of the use of computer simulation modelling methods. BMJ Open, 2017. URL: https://bmjopen.bmj.com/content/7/5/e015007, arXiv:https://bmjopen.bmj.com/content/7/5/e015007.full.pdf, doi:10.1136/bmjopen-2016-015007.

MGC+20

Syed Mohiuddin, Rebecca Gardiner, Megan Crofts, Peter Muir, Jonathan Steer, Jonathan Turner, Helen Wheeler, William Hollingworth, and Paddy J Horner. Modelling patient flows and resource use within a sexual health clinic through discrete event simulation to inform service redesign. BMJ Open, 2020. URL: https://bmjopen.bmj.com/content/10/7/e037084, arXiv:https://bmjopen.bmj.com/content/10/7/e037084.full.pdf, doi:10.1136/bmjopen-2020-037084.

MCO+19

Thomas Monks, Christine SM Currie, Bhakti Stephan Onggo, Stewart Robinson, Martin Kunc, and Simon JE Taylor. Strengthening the reporting of empirical simulation studies: introducing the stress guidelines. Journal of Simulation, 13(1):55–67, 2019.

ODHF+22

Zehra Onen-Dumlu, Alison L. Harper, Paul G. Forte, Anna L. Powell, Martin Pitt, Christos Vasilakis, and Richard M. Wood. Optimising the balance of acute and intermediate care capacity for the complex discharge pathway: computer modelling study during covid-19 recovery in england. PLOS ONE, 17(6):1–16, 06 2022. URL: https://doi.org/10.1371/journal.pone.0268837, doi:10.1371/journal.pone.0268837.

OK16

Bhakti Stephan Onggo and Mumtaz Karatas. Test-driven simulation modelling: a case study using agent-based maritime search-operation simulation. European Journal of Operational Research, 254(2):517–531, 2016. URL: https://www.sciencedirect.com/science/article/pii/S0377221716301965, doi:https://doi.org/10.1016/j.ejor.2016.03.050.

Pen11

R.D Peng. Reproducible Research in Computational Science. Science, 2011. URL: https://www.science.org/doi/10.1126/science.1213847 (visited on 2023-04-25), doi:10.1126/science.1213847.

PMKA20

M.L. Penn, T. Monks, A.A. Kazmierska, and M.R.A.R. Alkoheji. Towards generic modelling of hospital wards: reuse and redevelopment of simple models. Journal of Simulation, 14(2):107–118, 2020. URL: https://doi.org/10.1080/17477778.2019.1664264, arXiv:https://doi.org/10.1080/17477778.2019.1664264, doi:10.1080/17477778.2019.1664264.

PPM22

Aby M Philip, Shanmugam Prasannavenkatesan, and Navonil Mustafee. Simulation modelling of hospital outpatient department: a review of the literature and bibliometric analysis. SIMULATION, 0(0):00375497221139282, 2022. URL: https://doi.org/10.1177/00375497221139282, arXiv:https://doi.org/10.1177/00375497221139282, doi:10.1177/00375497221139282.

RS12

Hazhir Rahmandad and John D. Sterman. Reporting guidelines for simulation-based research in social sciences. System Dynamics Review, 28(4):396–411, 2012. URL: https://onlinelibrary.wiley.com/doi/abs/10.1002/sdr.1481, arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1002/sdr.1481, doi:https://doi.org/10.1002/sdr.1481.

RRU18

Oliver Reinhardt, Andreas Rucheinski, and Adelinde M. Uhrmacher. ODD+P: Complementing the ODD Protocol with Provenance Information. In 2018 Winter Simulation Conference (WSC), 727–738. December 2018. ISSN: 1558-4305. doi:10.1109/WSC.2018.8632481.

Rob14

Stewart Robinson. Simulation: The Practice of Model Development and Use. Palgrave Macmillan, 2014.

RVG21

Sumanta Roy, Shanmugam Prasanna Venkatesan, and Mark Goh. Healthcare services: a systematic review of patient-centric logistics issues using simulation. Journal of the Operational Research Society, 72(10):2342–2364, 2021. URL: https://doi.org/10.1080/01605682.2020.1790306, arXiv:https://doi.org/10.1080/01605682.2020.1790306, doi:10.1080/01605682.2020.1790306.

RWU20

Andreas Ruscheinski, Tom Warnke, and Adelinde M. Uhrmacher. Artifact-Based Workflows for Supporting Simulation Studies. IEEE Transactions on Knowledge and Data Engineering, 32(6):1064–1078, June 2020. Conference Name: IEEE Transactions on Knowledge and Data Engineering. doi:10.1109/TKDE.2019.2899840.

STB+17

Syed Salleh, Praveen Thokala, Alan Brennan, Ruby Hughes, and Andrew Booth. Simulation modelling in healthcare: an umbrella review of systematic literature reviews. PharmacoEconomics, 35(9):937–949, 2017.

SRBP18

Andrew Salmon, Sebastian Rachuba, Simon Briscoe, and Martin Pitt. A structured literature review of simulation modelling applied to emergency departments: current patterns and emerging trends. Operations Research for Health Care, 19:1–13, 2018. URL: https://www.sciencedirect.com/science/article/pii/S2211692317301042, doi:https://doi.org/10.1016/j.orhc.2018.01.001.

SBA+20

Jennifer Sills, C. Michael Barton, Marina Alberti, Daniel Ames, Jo-An Atkinson, Jerad Bales, Edmund Burke, Min Chen, Saikou Y Diallo, David J. D. Earn, Brian Fath, Zhilan Feng, Christopher Gibbons, Ross Hammond, Jane Heffernan, Heather Houser, Peter S. Hovmand, Birgit Kopainsky, Patricia L. Mabry, Christina Mair, Petra Meier, Rebecca Niles, Brian Nosek, Nathaniel Osgood, Suzanne Pierce, J. Gareth Polhill, Lisa Prosser, Erin Robinson, Cynthia Rosenzweig, Shankar Sankaran, Kurt Stange, and Gregory Tucker. Call for transparency of covid-19 models. Science, 368(6490):482–483, 2020. URL: https://www.science.org/doi/abs/10.1126/science.abb8637, arXiv:https://www.science.org/doi/pdf/10.1126/science.abb8637, doi:10.1126/science.abb8637.

SKNFORCE11SCWGroup16

Arfon M. Smith, Daniel S. Katz, Kyle E. Niemeyer, and FORCE11 Software Citation Working Group. Software citation principles. PeerJ Computer Science, 2:e86, September 2016. URL: https://peerj.com/articles/cs-86 (visited on 2023-04-25), doi:10.7717/peerj-cs.86.

SSM18

Victoria Stodden, Jennifer Seiler, and Zhaokun Ma. An empirical analysis of journal policy effectiveness for computational reproducibility. Proceedings of the National Academy of Sciences, 115(11):2584–2589, 2018.

TAF+17

Simon JE Taylor, Anastasia Anagnostou, Adedeji Fabiyi, Christine Currie, Thomas Monks, Roberto Barbera, and Bruce Becker. Open science: approaches and benefits for modeling & simulation. In 2017 Winter Simulation Conference (WSC), 535–549. IEEE, 2017.

YHLD22

Usame Yakutcan, John R Hurst, Reda Lebcir, and Eren Demir. Assessing the impact of covid-19 measures on copd management and patients: a simulation-based decision support tool for copd services in the uk. BMJ Open, 2022. URL: https://bmjopen.bmj.com/content/12/10/e062305, arXiv:https://bmjopen.bmj.com/content/12/10/e062305.full.pdf, doi:10.1136/bmjopen-2022-062305.

ZLR20

Xiange Zhang, Stefan K. Lhachimi, and Wolf H. Rogowski. Reporting quality of discrete event simulations in healthcare—results from a generic reporting checklist. Value in Health, 23(4):506–514, 2020. URL: https://www.sciencedirect.com/science/article/pii/S1098301520300401, doi:https://doi.org/10.1016/j.jval.2020.01.005.

TheTWCommunity22

The Turing Way Community. The Turing Way: A handbook for reproducible, ethical and collaborative research. July 2022. URL: https://doi.org/10.5281/zenodo.7470333, doi:10.5281/zenodo.7470333.