Adoption of Explainable Artificial Intelligence in Decision Support Systems under Complex Data Environments

Authors

  • Hikmah Adwin Adam, Politeknik Negeri Medan
  • Pierre Marcello Lopulalan, Politeknik Pelayaran Banten
  • Meryatul Husna, Politeknik Negeri Medan

DOI:

https://doi.org/10.31004/riggs.v5i1.7793

Keywords:

XAI, DSS, Credit Scoring, Digital Banking, Interpretability, Quality of Decisions

Abstract

The increasing complexity of data in Indonesia's digital banking industry has encouraged the use of Decision Support Systems (DSS) based on Artificial Intelligence (AI), especially in the credit scoring process. However, the black-box characteristics of conventional AI models raise problems of transparency, trust, and regulatory compliance, particularly in credit decisions that directly affect customers. This study analyzes the adoption of Explainable Artificial Intelligence (XAI) in DSS for credit scoring in Indonesian digital banking, with an emphasis on how XAI can improve model interpretability in complex and heterogeneous data environments. The study uses a mixed-methods approach, combining quantitative analysis of XAI model performance with a qualitative case study of digital banking institutions. The results show that implementing XAI improves the clarity of credit decisions without significantly sacrificing predictive accuracy, especially through post-hoc explainability techniques and hybrid models. In addition, organizational readiness, compliance with Financial Services Authority (OJK) regulations, and data governance maturity are the main determinants of successful XAI adoption. Theoretically, the research contributes a conceptual framework that integrates model clarity, data complexity, and decision quality in the context of digital banking. Practically, the findings offer strategic implications for developing transparent, accountable, and trust-oriented DSS, thereby supporting fairer and more accountable credit decision-making in Indonesia.
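To illustrate the kind of post-hoc explainability the abstract refers to, the sketch below (not taken from the paper; all feature names, weights, and values are hypothetical) shows per-feature attribution for a linear credit-scoring model. For linear models, a feature's contribution to a score is its weight times its deviation from a baseline, which is what additive attribution methods such as SHAP reduce to in the linear case:

```python
# Minimal sketch, assuming a linear credit-scoring model.
# Weights, baseline, and applicant values are illustrative only.
WEIGHTS = {"income": 0.8, "debt_ratio": -1.5, "late_payments": -0.9}
BASELINE = {"income": 0.5, "debt_ratio": 0.3, "late_payments": 0.1}
INTERCEPT = 0.2

def score(applicant):
    """Raw credit score: intercept plus weighted sum of features."""
    return INTERCEPT + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant):
    """Per-feature contribution relative to the baseline applicant."""
    return {f: WEIGHTS[f] * (applicant[f] - BASELINE[f]) for f in WEIGHTS}

applicant = {"income": 0.9, "debt_ratio": 0.6, "late_payments": 0.0}
contributions = explain(applicant)
# By construction, the contributions sum exactly to
# score(applicant) - score(BASELINE), so the explanation
# accounts for the whole deviation from the baseline score.
```

For non-linear models (gradient boosting, neural networks), the same additive decomposition is what post-hoc tools approximate, which is why the paper can report improved decision clarity without retraining a less accurate interpretable model.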


References

N. M. M. Akbary, I. Trinugroho, T. Risfandy, and P. Pamungkas, “Digital transformation and efficiency: Evidence from Indonesian banks,” International Journal of Business and Society, vol. 26, no. 1, 2025.

G. Kostopoulos, G. Davrazos, and S. Kotsiantis, “Explainable artificial intelligence-based decision support systems: A recent review,” Electronics, vol. 13, no. 14, p. 2842, 2024. doi: 10.3390/electronics13142842

N. Thalpage, “Unlocking the black box: Explainable artificial intelligence (XAI) for trust and transparency in AI systems,” Journal of Digital Art and Humanities, vol. 4, no. 1, pp. 31–36, 2023.

A. Ismail, R. Setiawati, H. Herbenita, B. Sutejo, and S. Mulyanto, Management Information System: Practical Insights and Applications in Indonesia. Asadel Publisher, 2024.

D. R. W. Napitupulu, “Regulatory challenges of digital banking supervision in Indonesia,” International Journal of Law Analysis, vol. 4, no. 1, pp. 17–36, 2026.

D. V. Minh, H. X. Wang, Y. F. Li, and T. N. Nguyen, “Explainable artificial intelligence: A comprehensive review,” Artificial Intelligence Review, vol. 55, no. 5, pp. 3503–3568, 2022. doi: 10.1007/s10462-021-10088-y

M.-S. Jameaba, “Digitalization, emerging technologies, and financial stability: Challenges and opportunities for the Indonesian banking industry,” 2022. doi: 10.32388/CSTTYQ

W. I. Permata and H. Subiyantoro, “Digital banking transformation: Opportunities, challenges, and collaboration,” Greenation International Journal of Economics and Accounting, vol. 3, no. 3, pp. 481–493, 2025.

W. Wipulanusat, K. Panuwatwanich, R. A. Stewart, and J. Sunkpho, “Applying mixed methods sequential explanatory design to innovation management,” in Proc. 10th Int. Conf. Engineering, Project, and Production Management, Springer, 2020, pp. 485–495. doi: 10.1007/978-981-15-1910-9_40

M. A. S. Toyon, “Explanatory sequential design of mixed methods research: Phases and challenges,” International Journal of Research in Business and Social Science, vol. 10, no. 5, pp. 253–260, 2021. doi: 10.20525/ijrbs.v10i5.1262

J. Schoonenboom, “The fundamental difference between qualitative and quantitative data in mixed methods research,” Forum Qualitative Sozialforschung, vol. 24, no. 1, 2023. doi: 10.17169/fqs-24.1.3937

H. P. Kothandapani, “Leveraging AI for credit scoring and financial inclusion in emerging markets,” World Journal of Advanced Research and Reviews, vol. 15, no. 3, pp. 526–539, 2022. doi: 10.30574/wjarr.2022.15.3.0732

A. K. M. Haque, “Explainable artificial intelligence (XAI): Making AI understandable for end users,” 2025.

J. Černevičienė and A. Kabašinskas, “Explainable artificial intelligence (XAI) in finance: A systematic literature review,” Artificial Intelligence Review, vol. 57, no. 8, 2024. doi: 10.1007/s10462-024-10854-8

A. Kovari, “AI for decision support: Balancing accuracy, transparency, and trust across sectors,” Information, vol. 15, no. 11, p. 725, 2024. doi: 10.3390/info15110725

K. Coussement, M. Z. Abedin, M. Kraus, S. Maldonado, and K. Topuz, “Explainable AI for enhanced decision-making,” Decision Support Systems, vol. 184, p. 114276, 2024. doi: 10.1016/j.dss.2024.114276

D. Hooshyar and Y. Yang, “Problems with SHAP and LIME in interpretable AI for education,” IEEE Access, vol. 12, pp. 137472–137490, 2024. doi: 10.1109/ACCESS.2024.3470502

C. N. Nwafor, O. Nwafor, and S. Brahma, “Enhancing transparency and fairness in automated credit decisions,” Scientific Reports, vol. 14, p. 25174, 2024. doi: 10.1038/s41598-024-75026-8

C. Rudin, “Stop explaining black box machine learning models for high stakes decisions,” Nature Machine Intelligence, vol. 1, pp. 206–215, 2019. doi: 10.1038/s42256-019-0048-x

B. Somu, “Transforming customer experience in digital banking through machine learning,” International Journal of Engineering and Computer Science, vol. 9, no. 12, 2020.

Y. Chen, R. Calabrese, and B. Martin-Barragan, “Interpretable machine learning for imbalanced credit scoring datasets,” European Journal of Operational Research, vol. 312, no. 1, pp. 357–372, 2024. doi: 10.1016/j.ejor.2023.08.027

T. Tambun, G. Yudoko, and L. Aldianto, “Strategy and innovation in AI oversight: Fiduciary duties of banking boards,” 2025.

Otoritas Jasa Keuangan, “Artificial Intelligence Governance for Indonesian Banks,” Jakarta, 2025. [Online]. Available: https://www.ojk.go.id

E. Papagiannidis, P. Mikalef, and K. Conboy, “Responsible artificial intelligence governance,” Journal of Strategic Information Systems, vol. 34, no. 2, p. 101885, 2025. doi: 10.1016/j.jsis.2024.101885

N. Balasubramaniam et al., “Transparency and explainability of AI systems,” Information and Software Technology, vol. 159, p. 107197, 2023. doi: 10.1016/j.infsof.2023.107197

L. Putranti, R. D. Afriyanti, and A. Sumantika, “Which model is the most accurate?,” UPY Business and Management Journal, vol. 5, no. 1, pp. 83–99, 2026.

M. M. Islam and M. M. Hasan, “Explainable AI models for cloud-based business intelligence,” American Journal of Interdisciplinary Studies, vol. 4, no. 3, pp. 208–249, 2023.

C. Norrie, “Explainable AI techniques for sepsis diagnosis,” 2021.

F. Pérez-Cruz, J. Prenio, F. Restoy, and J. Yong, “Managing explanations: How regulators can address AI explainability,” BIS, 2025.

Published

13-04-2026

How to Cite

[1] H. A. Adam, P. M. Lopulalan, and M. Husna, “Adoption of Explainable Artificial Intelligence in Decision Support Systems under Complex Data Environments,” RIGGS, vol. 5, no. 1, pp. 12601–12609, Apr. 2026.