Regulatory Intelligence Engineering for Global Enterprises: Governance Architectures for AI-Enabled Compliance Operations
Description
Global enterprises operate within an increasingly complex regulatory landscape shaped by cross-jurisdictional legislation, evolving compliance standards, and heightened expectations for transparency and accountability in digital operations. Traditional compliance management approaches that rely primarily on manual monitoring, static policy documentation, and fragmented governance processes are becoming insufficient for managing regulatory obligations at enterprise scale. The rapid integration of artificial intelligence into enterprise information systems has created new opportunities to transform compliance operations through intelligent monitoring, automated risk detection, and adaptive governance architectures. This study introduces the concept of regulatory intelligence engineering as a systematic approach for designing enterprise governance architectures that enable AI-enabled compliance operations across global organizations. The research examines how regulatory intelligence capabilities can be embedded within enterprise governance frameworks to continuously interpret regulatory requirements, monitor operational data streams, detect compliance deviations, and support decision-making processes for regulatory risk mitigation. A conceptual architecture is developed to illustrate how AI-driven analytics, policy knowledge repositories, governance control mechanisms, and human oversight interfaces can operate together as an integrated regulatory intelligence layer within enterprise systems. The proposed governance architecture emphasizes explainable decision processes, audit-ready compliance workflows, and scalable regulatory monitoring across diverse regulatory environments.
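The regulatory intelligence layer described above can be illustrated with a minimal sketch. All names here (`PolicyRule`, `ComplianceMonitor`, the GDPR-style retention rule) are hypothetical, not taken from the paper; the sketch only shows how a policy knowledge repository, a monitoring component, explainable findings, and a human-oversight flag could fit together.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PolicyRule:
    """One entry in a machine-readable policy knowledge repository (illustrative)."""
    rule_id: str
    jurisdiction: str
    description: str
    check: Callable[[dict], bool]  # returns True when the event complies

@dataclass
class Finding:
    """A detected compliance deviation, carrying an explanation for auditability."""
    rule_id: str
    event_id: str
    explanation: str
    needs_human_review: bool

class ComplianceMonitor:
    """Evaluates operational events against the policy repository."""
    def __init__(self, repository: list):
        self.repository = repository

    def evaluate(self, event: dict) -> list:
        findings = []
        for rule in self.repository:
            if not rule.check(event):
                findings.append(Finding(
                    rule_id=rule.rule_id,
                    event_id=event["id"],
                    explanation=(f"Violated {rule.rule_id} ({rule.jurisdiction}): "
                                 f"{rule.description}"),
                    needs_human_review=True,  # deviations escalate to oversight interfaces
                ))
        return findings

# Example: a hypothetical EU retention rule applied to one operational event.
rules = [PolicyRule("GDPR-RET-01", "EU",
                    "personal data retained no longer than 365 days",
                    lambda e: e.get("retention_days", 0) <= 365)]
monitor = ComplianceMonitor(rules)
findings = monitor.evaluate({"id": "evt-42", "retention_days": 730})
```

Each finding pairs the violated rule with a human-readable explanation, which is one simple way to keep decision processes explainable and audit-ready, as the abstract emphasizes.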
The study also presents operational models demonstrating how enterprises can integrate regulatory intelligence capabilities with existing enterprise resource planning systems, human capital management platforms, and financial governance infrastructures to enable continuous compliance management. By transforming compliance from reactive, documentation-based activities into proactive, intelligence-driven operations, the proposed framework enables organizations to achieve higher levels of regulatory visibility, operational accountability, and governance maturity. The findings contribute to emerging research on enterprise governance in the era of AI-enabled decision systems and provide a structured foundation for organizations seeking to operationalize regulatory intelligence as a core component of enterprise compliance strategy.
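The continuous-compliance integration described above can be sketched as follows. The event feeds, the dual-approval control, and all identifiers are assumptions for illustration only; the point is that heterogeneous ERP and HCM sources expose normalized events, a shared control runs over all of them, and every check lands in an append-only, audit-ready log.

```python
import json
from datetime import datetime, timezone

# Hypothetical normalized event feeds from existing enterprise systems.
def erp_events():
    yield {"source": "ERP", "id": "po-1001", "amount": 125_000, "approved_by": None}

def hcm_events():
    yield {"source": "HCM", "id": "hire-77", "amount": 0, "approved_by": "hr-lead"}

def requires_dual_approval(event):
    """Illustrative control: transactions above 100k need a recorded approver."""
    return event["amount"] <= 100_000 or event["approved_by"] is not None

# Continuous monitoring loop: every event from every source is checked and
# the result is recorded with a timestamp, forming an audit-ready trail.
audit_log = []
for feed in (erp_events, hcm_events):
    for event in feed():
        audit_log.append({
            "checked_at": datetime.now(timezone.utc).isoformat(),
            "source": event["source"],
            "event_id": event["id"],
            "control": "DUAL-APPROVAL-100K",
            "compliant": requires_dual_approval(event),
        })

print(json.dumps(audit_log, indent=2))
```

In a real deployment the feeds would be adapters over ERP, HCM, and financial platforms and the log would be an immutable store, but the shape of the integration — normalize, check, record — is the same.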
Files
- EJAET-12-4-101-124.pdf (806.1 kB, md5:ce72719e8d7cc3d1e8f097589fed4482)
Additional details
References
- [1]. Gomber, P., Koch, J. A., & Siering, M. (2017). Digital finance and fintech: Current research and future research directions. Journal of Business Economics, 87(5), 537–580. https://doi.org/10.1007/s11573-017-0852-x
- [2]. Zetzsche, D. A., Buckley, R. P., Arner, D. W., & Barberis, J. (2017). Regulating a revolution: From regulatory sandboxes to smart regulation. Fordham Journal of Corporate & Financial Law, 23(1), 31–103. https://doi.org/10.2139/ssrn.3018534
- [3]. Vasarhelyi, M. A., Kogan, A., & Tuttle, B. M. (2015). Big data in accounting: An overview. Accounting Horizons, 29(2), 381–396. https://doi.org/10.2308/acch-51071
- [4]. Alles, M. (2015). Drivers of the use and facilitators and obstacles of the evolution of big data by the audit profession. Accounting Horizons, 29(2), 439–449. https://doi.org/10.2308/acch-51067
- [5]. Van der Aalst, W. M. P. (2016). Process mining: Data science in action. Springer. https://doi.org/10.1007/978-3-662-49851-4
- [6]. Ngai, E. W. T., Hu, Y., Wong, Y. H., Chen, Y., & Sun, X. (2011). The application of data mining techniques in financial fraud detection. Decision Support Systems, 50(3), 559–569. https://doi.org/10.1016/j.dss.2010.08.006
- [7]. Chen, H., Chiang, R. H. L., & Storey, V. C. (2012). Business intelligence and analytics: From big data to big impact. MIS Quarterly, 36(4), 1165–1188. https://doi.org/10.2307/41703503
- [8]. Wamba, S. F., Akter, S., Edwards, A., Chopin, G., & Gnanzou, D. (2015). How big data can make big impact: Findings from a systematic review. International Journal of Production Economics, 165, 234–246. https://doi.org/10.1016/j.ijpe.2014.12.031
- [9]. Sivarajah, U., Kamal, M., Irani, Z., & Weerakkody, V. (2017). Critical analysis of big data challenges and analytical methods. Journal of Business Research, 70, 263–286. https://doi.org/10.1016/j.jbusres.2016.08.001
- [10]. Duan, Y., Edwards, J. S., & Dwivedi, Y. K. (2019). Artificial intelligence for decision making in the era of big data. International Journal of Information Management, 48, 63–71. https://doi.org/10.1016/j.ijinfomgt.2019.01.021
- [11]. Jarrahi, M. H. (2018). Artificial intelligence and the future of work. Business Horizons, 61(4), 577–586. https://doi.org/10.1016/j.bushor.2018.03.007
- [12]. Dwivedi, Y. K., Hughes, L., Ismagilova, E., Aarts, G., Coombs, C., Crick, T., & Williams, M. D. (2021). Artificial intelligence (AI): Multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research, practice and policy. International Journal of Information Management, 57, 101994. https://doi.org/10.1016/j.ijinfomgt.2019.08.002
- [13]. Abdallah, A., Maarof, M. A., & Zainal, A. (2016). Fraud detection system: A survey. Journal of Network and Computer Applications, 68, 90–113. https://doi.org/10.1016/j.jnca.2016.04.007
- [14]. Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). Why should I trust you? Explaining the predictions of any classifier. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 1135–1144. https://doi.org/10.1145/2939672.2939778
- [15]. Guidotti, R., Monreale, A., Ruggieri, S., Turini, F., Pedreschi, D., & Giannotti, F. (2018). A survey of methods for explaining black box models. ACM Computing Surveys, 51(5), Article 93. https://doi.org/10.1145/3236009
- [16]. Adadi, A., & Berrada, M. (2018). Peeking inside the black box: A survey on explainable artificial intelligence. IEEE Access, 6, 52138–52160. https://doi.org/10.1109/ACCESS.2018.2870052
- [17]. Jobin, A., Ienca, M., & Vayena, E. (2019). The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1, 389–399. https://doi.org/10.1038/s42256-019-0088-2
- [18]. Mitchell, M., Wu, S., Zaldivar, A., Barnes, P., Vasserman, L., Hutchinson, B., Spitzer, E., Raji, I. D., & Gebru, T. (2019). Model cards for model reporting. Proceedings of the Conference on Fairness, Accountability, and Transparency, 220–229. https://doi.org/10.1145/3287560.3287596
- [19]. Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to-end framework for internal algorithmic auditing. Proceedings of the Conference on Fairness, Accountability, and Transparency, 33–44. https://doi.org/10.1145/3351095.3372873
- [20]. Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2021). A survey on bias and fairness in machine learning. ACM Computing Surveys, 54(6), Article 115. https://doi.org/10.1145/3457607
- [21]. Wachter, S., Mittelstadt, B., & Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation. International Data Privacy Law, 7(2), 76–99. https://doi.org/10.1093/idpl/ipx005
- [22]. Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., & Floridi, L. (2016). The ethics of algorithms: Mapping the debate. Big Data & Society, 3(2). https://doi.org/10.1177/2053951716679679
- [23]. Caron, F., Vanthienen, J., & Baesens, B. (2013). Comprehensive rule-based compliance checking and risk management with process mining. Decision Support Systems, 54(3), 1357–1369. https://doi.org/10.1016/j.dss.2012.12.012
- [24]. Balasubramaniam, N., Kauppinen, M., Rannisto, A., Hiekkanen, K., & Kujala, S. (2023). Transparency and explainability of AI systems: From ethical guidelines to requirements. Information and Software Technology, 159, 107197. https://doi.org/10.1016/j.infsof.2023.107197