Explainable Artificial Intelligence: Methods and Applications in Decision-making Systems
EOI: 10.11242/viva-tech.01.05.001
Citation
Prof. Shreya Bhamare, Hritika Afandkar, Gayatri Pallan, "Explainable Artificial Intelligence: Methods and Applications in Decision-making Systems", VIVA-IJRI Volume 1, Issue 7, Article 1, pp. 1-14, 2023. Published by Master of Computer Application Department, VIVA Institute of Technology, Virar, India.
Abstract
Explainable Artificial Intelligence (XAI) has emerged as a critical area of research and development, driven by the need for transparency and interpretability in complex machine learning models. This paper provides a thorough overview of the principal methods used in XAI to make AI systems more understandable. Rule-based systems make decisions through explicit if-then conditions, while decision trees offer a hierarchical and intuitive structure. Local Interpretable Model-agnostic Explanations (LIME) builds locally faithful explanations by perturbing the input data and fitting an interpretable surrogate model, and SHapley Additive exPlanations (SHAP) applies game-theoretic Shapley values to attribute each feature's contribution to a prediction. Counterfactual explanations describe the smallest input changes that would alter a model's decision, and Anchors identifies minimal sets of feature conditions that are sufficient to fix a prediction. These methods find applications across diverse domains, including healthcare, finance, and autonomous vehicles. As XAI continues to evolve, the pursuit of clear, interpretable, and accountable AI systems remains paramount, ensuring the responsible integration of artificial intelligence into real-world applications.
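To make the contrast between these method families concrete, the sketch below trains a toy classifier and queries it with SHAP and LIME. It is illustrative only, not drawn from the paper itself: it assumes the open-source `scikit-learn`, `shap`, and `lime` packages are available, and the dataset, model, and parameter choices are arbitrary examples.

```python
# A minimal sketch (not the authors' implementation) of post-hoc explanation with
# SHAP and LIME on a toy tabular black-box model. Assumes scikit-learn, shap, and
# lime are installed; all dataset/model choices here are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train a black-box model that the explanation methods will probe.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# --- SHAP: game-theoretic per-feature attributions for one prediction ---
import shap
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test[:1])  # contribution of each feature

# --- LIME: local surrogate fitted to perturbed samples around one instance ---
from lime.lime_tabular import LimeTabularExplainer
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=data.feature_names, class_names=data.target_names
)
explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(explanation.as_list())  # top features driving this single prediction
```

Rule-based systems and decision trees, by contrast, are interpretable directly: the learned branches can be read off as if-then rules without any post-hoc explainer.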
Keywords
Decision Trees, Explainable Artificial Intelligence (XAI), Interpretability, Local Interpretable Model-agnostic Explanations (LIME), Machine Learning Models, Rule-Based Systems, SHapley Additive exPlanations (SHAP), Transparency.