Published October 3, 2018 | Version v1
Conference paper | Open Access

Three laws good: Technology is a dangerous master

  • BAE SYSTEMS (Maritime), Barrow-in-Furness, Cumbria, LA14 1AF

Description

A philosophy of technology use has developed in many safety-critical industries that is based upon the view that human operators are feckless and unreliable, and so wherever possible should not be trusted to execute safety-critical tasks. The implicit view of automation is that it invariably improves system performance and increases reliability. Yet after many decades, or even centuries, of machine and automation development, human error remains one of the dominant features in failures of modern systems. The drive towards introducing automation has claimed a larger performance envelope, lower operating costs with fewer people, less risk of hazard realisation, and a more economical development path. One of the aims of introducing automation is to achieve higher reliability, in the belief that this implicitly brings with it an increase in safety. As Leveson (2011) points out, high reliability can be misleading, because interactions between elements that are each working as expected may still trigger system failure through transverse consequences. The propagation of the view that human operators are the weakest operational link, and the pervasive myths about the reliability of automated solutions, which afford automation the easier scenarios of task execution, need to be revisited (Cook, Thody and Garrett, 2017). This should ensure that the best capability and the optimal safety case are developed for future systems based upon operator and system working in synergy. This may be especially true if the claims made for automation are treated more aggressively in terms of liability.

Files

ISCSS 2018 Paper 056 Cook FINAL.pdf (950.8 kB, md5:e18b01572341dfbd7a474361dc4e9e03)
Additional details

References

  • Beugin, J., Renaux, D., & Cauffriez, L. (2007). A SIL quantification approach based on an operating situation model for safety evaluation in complex guided transportation systems. Reliability Engineering & System Safety, 92(12), 1686-1700.
  • Cabrall, C. D., Sheridan, T. B., Prevot, T., de Winter, J. C., & Happee, R. (2018, January). The 4D LINT Model of Function Allocation: Spatial-Temporal Arrangement and Levels of Automation. In International Conference on Intelligent Human Systems Integration (pp. 29-34). Springer, Cham.
  • Charette, R. N. (2005). Why software fails [software failure]. IEEE Spectrum, 42(9), 42-49.
  • Cook, M., Thody, M., & Garrett, D. (2017). I didn't see that coming: The perils of underwater automation. Warship 2017: Naval Submarines & UUVs, 14-15 June 2017, Bath, UK.
  • Dekker, S. W., & Woods, D. D. (2002). MABA-MABA or abracadabra? Progress on human–automation coordination. Cognition, Technology & Work, 4(4), 240-244.
  • de Winter, J. C., & Dodou, D. (2014). Why the Fitts list has persisted throughout the history of function allocation. Cognition, Technology & Work, 16(1), 1-11.
  • Dowson, M. (1997). The Ariane 5 software failure. ACM SIGSOFT Software Engineering Notes, 22(2), 84.
  • Feigh, K. M., & Pritchett, A. R. (2014). Requirements for effective function allocation: A critical review. Journal of Cognitive Engineering and Decision Making, 8(1), 23-32.
  • Firesmith, D. (2004). Engineering safety requirements, safety constraints, and safety-critical requirements. Journal of Object Technology, 3(3), 27-42.
  • Fuld, R. B. (1993). The fiction of function allocation. Ergonomics in Design, 1(1), 20-24.
  • Flowers, S. (1996). Software failure: Management failure. J. Wiley and Sons.
  • Hancock, P. A., & Scallen, S. F. (1996). The future of function allocation. Ergonomics in Design, 4(4), 24-29.
  • Harris, D., Stanton, N. A., & Starr, A. (2015). Spot the difference: Operational event sequence diagrams as a formal method for work allocation in the development of single-pilot operations for commercial aircraft. Ergonomics, 58(11), 1773-1791.
  • Hughes, D. L., & Dwivedi, Y. K. (2015). Success and failure of IS/IT projects: A state of the art analysis and future directions. Springer.
  • Jacobson, I., & Ng, P. W. (2004). Aspect-oriented software development with use cases (Addison-Wesley object technology series). Addison-Wesley Professional.
  • Jahanian, H. (2017). Optimization, a rational approach to SIL determination. Process Safety and Environmental Protection, 109, 452-464.
  • Joe, J. C., O'Hara, J., Medema, H. D., & Oxstrand, J. H. (2014). Identifying requirements for effective human-automation teamwork (No. INL/CON-14-31340). Idaho National Laboratory (INL).
  • Kemp, R. (2016). Quantitative risk management and its limits. In A. Burgess, A. Alemanno & J. O. Zinn (Eds.), Routledge Handbook of Risk Studies, pp. 164-178.
  • Klein, G., Woods, D. D., Bradshaw, J. M., Hoffman, R. R., & Feltovich, P. J. (2004). Ten challenges for making automation a "team player" in joint human-agent activity. IEEE Intelligent Systems, 19(6), 91-95.
  • Kuhn, D. R., Wallace, D. R., & Gallo, A. M. (2004). Software fault interactions and implications for software testing. IEEE Transactions on Software Engineering, 30(6), 418-421.
  • Leveson, N. G. (2004). A new accident model for engineering safer systems. Safety Science, 42(4), 237-270.
  • Leveson, N. G. (2009). Software challenges in achieving space safety. Journal of the British Interplanetary Society, 62, July/August 2009.
  • Leveson, N. G. (2011). Applying systems thinking to analyze and learn from events. Safety Science, 49(1), 55-64.
  • Leveson, N. G. (2013). Software and the challenge of flight control. In R. Launius, J. Craig & J. Krige (Eds.), Space shuttle legacy: How we did it/What we learned. American Institute of Aeronautics & Astronautics.
  • Leveson, N. G. (2017). Rasmussen's legacy: A paradigm change in engineering for safety. Applied Ergonomics, 59, 581-591.
  • Lions, J. L. (1996). Ariane 5 Flight 501 failure: Report by the Inquiry Board. http://www.di.unito.it/~damiani/ariane5rep.html Accessed 3/6/18.
  • Marais, K., Dulac, N., & Leveson, N. (2004, March). Beyond normal accidents and high reliability organizations: The need for an alternative approach to safety in complex systems. In Engineering Systems Division Symposium (pp. 1-16). MIT Cambridge, MA.
  • MacLeod, I. S. (2008). Scenario-based requirements capture for human factors integration. Cognition, Technology & Work, 10(3), 191-198.
  • Nuseibeh, B. (1997). Ariane 5: Who dunnit? IEEE Software, 14(3), 15.
  • Pham, H. (2000). Software reliability. Singapore: Springer-Verlag.
  • Pritchett, A. R., Kim, S. Y., & Feigh, K. M. (2014a). Measuring human-automation function allocation. Journal of Cognitive Engineering and Decision Making, 8(1), 52-77.
  • Pritchett, A. R., Kim, S. Y., & Feigh, K. M. (2014b). Modeling human–automation function allocation. Journal of Cognitive Engineering and Decision Making, 8(1), 33-51.
  • Reeves, B., & Nass, C. I. (1996). The media equation: How people treat computers, television, and new media like real people and places. Cambridge university press.
  • Sheridan, T. B. (2000). Function allocation: Algorithm, alchemy or apostasy? International Journal of Human-Computer Studies, 52(2), 203-216.
  • Smith, D. J., & Simpson, K. G. (2004). Functional Safety: A straightforward guide to applying IEC 61508 and related standards. Routledge.
  • Smith, D. J., & Simpson, K. G. (2016). The Safety Critical Systems Handbook: A Straightforward Guide to Functional Safety: IEC 61508 (2010 Edition), IEC 61511 (2015 Edition) and Related Guidance. Butterworth-Heinemann.