The Impact of Artificial Intelligence on Security: A Dual Perspective
Description
This paper analyses the impact of Artificial Intelligence (AI) on security processes. Through the analysis of risk maps (a risk analysis tool), we highlight two opposing views: Beneficial AI and Malicious AI. Beneficial AI focuses on improving security, covering capabilities such as security design and testing assistance, system security monitoring, and decision making upon cyber-attacks. Malicious AI focuses on lowering security, covering capabilities such as assistance for attack undetectability or for attack decision making. While we recall means of attack ranging from enhanced cyber-attacks to social engineering, we also describe ways of integrating AI into companies' and products' life cycles, and reflections on ethics in AI. We then analyze how IoT systems may be impacted, considering the relationships between connected objects, AI models, and their use cases. Finally, we conclude with two recommendations: revisiting risk frameworks to integrate AI, and providing recommendations for an ethical approach to AI research.
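The "attack undetectability" capability mentioned above is the subject of the adversarial machine learning references listed further down. As an illustration only (not taken from the paper), here is a minimal FGSM-style sketch in Python/NumPy on a toy linear classifier; the weights, bias, input, and step size are all made-up values:

```python
import numpy as np

# Toy linear classifier: score = w.x + b; predicted class is 1 if score > 0.
# (Hypothetical values chosen purely for illustration.)
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.5, 0.1, 0.4])  # clean input; score = 0.6, so class 1

# FGSM-style perturbation: for a linear score the gradient w.r.t. the
# input is just w, so stepping against sign(w) pushes the score down
# toward the opposite class while changing each feature only slightly.
eps = 0.3
x_adv = x - eps * np.sign(w)   # score drops to -0.45, so class 0

assert predict(x) == 1 and predict(x_adv) == 0
# Largest per-feature change is only eps, which is what makes such
# adversarial inputs hard to detect by inspection.
```

The same idea, applied to deep models via their input gradients, underlies the momentum-boosted attacks of Dong et al. cited in the references.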
Files

CESAR_2018_J1-03_A-SZYCHTER_Dual_perspective _AI_in_Cybersecurity.pdf (731.1 kB)
md5:6d8c37cc7d88fedf091f110b2dec44ff
Additional details
References
- "Agile Co-Creation of Robots for Ageing," [Online]. Available: https://cordis.europa.eu/project/rcn/207079_en.html. [Accessed 29 June 2018].
- A. Kung, "AI as a Disruptive Opportunity and Challenge for Security," ETSI Security Week 2018, Future-Proof IoT Security and Privacy, 12 June 2018. [Online]. Available: https://docbox.etsi.org/Workshop/2018/201806_ETSISECURITYWEEK/IoTSecurity/S03_TRANSFORMATION/TRIALOG_KUNG.pdf.
- National Institute of Standards and Technology, "An Introduction to Privacy Engineering and Risk Management," [Online]. Available: https://nvlpubs.nist.gov/nistpubs/ir/2017/NIST.IR.8062.pdf. [Accessed 27 June 2018].
- "ETSI TS 102 165-1 V4.2.3, Technical Specification," [Online]. Available: http://www.etsi.org/deliver/etsi_ts/102100_102199/10216501/04.02.03_60/ts_10216501v040203p.pdf. [Accessed 27 June 2018].
- Commission Nationale de l'Informatique et des Libertés, "Methodology for Privacy Risk Management: How to Implement the Data Protection Act," [Online]. Available: https://www.cnil.fr/sites/default/files/typo/document/CNIL-ManagingPrivacyRisks-Methodology.pdf. [Accessed 27 June 2018].
- "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation," February 2018. [Online]. Available: https://maliciousaireport.com/. [Accessed 28 June 2018].
- M. R. Brown, "Better Business Bureau's work on Cybersecurity (CYBER$3CUR1TY)," 28 June 2017. [Online]. Available: http://michaelonsecurity.blogspot.com/2017/06/better-business-bureaus-work-on.html. [Accessed 29 June 2018].
- ISO/IEC/IEEE 15288:2015, Systems and software engineering -- System life cycle processes.
- "Asilomar AI Principles," [Online]. Available: https://futureoflife.org/aiprinciples/. [Accessed 29 June 2018].
- K. Baxter, "How to Build Ethics into AI — Part I," [Online]. Available: https://medium.com/salesforce-ux/how-to-build-ethics-into-ai-part-i-bf35494cce9. [Accessed 29 June 2018].
- K. Baxter, "How to Build Ethics into AI — Part II," [Online]. Available: https://medium.com/salesforce-ux/how-to-build-ethics-into-ai-part-iia563f3372447. [Accessed 29 June 2018].
- "Social Engineering (security)," Wikipedia. [Online]. Available: https://en.wikipedia.org/wiki/Social_engineering_(security). [Accessed 28 June 2018].
- "Adversarial Machine Learning," Wikipedia. [Online]. Available: https://en.wikipedia.org/wiki/Adversarial_machine_learning#Poisoning_attacks. [Accessed 28 June 2018].
- S. Ticu, "Intelligence Artificielle : quelles différences entre le Machine Learning et l'approche déterministe ?" (Artificial Intelligence: what are the differences between Machine Learning and the deterministic approach?), 16 June 2016. [Online]. Available: https://yseop.com/fr/blog/intelligence-artificielle-differences-entre-machine-learning-lapproche-deterministe/. [Accessed 27 June 2018].
- Y. Dong et al., "Boosting Adversarial Attacks with Momentum," arXiv:1710.06081 [cs, stat], Oct. 2017.
- F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart, "Stealing Machine Learning Models via Prediction APIs," arXiv:1609.02943 [cs, stat], Sep. 2016.
- K. Grosse, P. Manoharan, N. Papernot, M. Backes, and P. McDaniel, "On the (Statistical) Detection of Adversarial Examples," arXiv:1702.06280 [cs, stat], Feb. 2017.