Ethical and Regulatory Challenges of Using Generative AI in Banking: Balancing Innovation and Compliance
Abstract
This study examines the ethical and regulatory challenges shaping the adoption of generative AI in the banking sector. Qualitative findings identified key concerns including data bias in loan approvals, the explainability of AI decisions, and the tension between innovation and operational risk control. Quantitative results revealed that regulatory non-compliance (mean = 4.8, SD = 0.2) and data privacy risks (mean = 4.7, SD = 0.3) are the most significant concerns. Other identified risks include bias and fairness issues (mean = 4.5, SD = 0.4), lack of transparency (mean = 4.6, SD = 0.3), and erosion of customer trust (mean = 4.3, SD = 0.5). The analysis demonstrated a 35% reduction in AI adoption speed attributable to regulatory constraints, with a Pearson correlation coefficient of -0.62 indicating a strong negative relationship between regulatory barriers and the pace of innovation. Regression results further showed that data privacy measures (β = 0.55, p < 0.001) and customer trust (β = 0.37, p = 0.01) positively influence AI adoption, while regulatory complexity (β = -0.45, p = 0.002) negatively impacts it. These findings emphasize the need for enhanced governance frameworks that balance innovation, ethical considerations, and compliance to unlock the full potential of generative AI in the financial sector.
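As an illustration of the statistics the abstract reports (a Pearson correlation between regulatory barriers and adoption pace, and a simple regression), the sketch below computes both measures on invented Likert-scale survey responses. The variable names and values are hypothetical placeholders for demonstration only; they are not the study's data and do not reproduce its coefficients.

```python
import numpy as np

# Hypothetical Likert-style (1-5) survey responses, invented for illustration;
# NOT the study's data.
regulatory_barriers = np.array([4.0, 4.5, 5.0, 3.5, 4.8, 4.2, 4.9, 3.8])
adoption_pace = np.array([3.0, 2.5, 1.8, 3.6, 2.0, 2.8, 1.9, 3.4])

# Pearson correlation: covariance normalized by the product of standard
# deviations; a value near -1 indicates a strong negative relationship.
r = np.corrcoef(regulatory_barriers, adoption_pace)[0, 1]

# Simple one-predictor least-squares fit (slope and intercept), analogous
# in spirit to the regression coefficients reported in the abstract.
slope, intercept = np.polyfit(regulatory_barriers, adoption_pace, 1)

print(f"Pearson r = {r:.2f}")   # negative for this toy data
print(f"OLS slope = {slope:.2f}")
```

With made-up data such as the above, higher perceived regulatory barriers coincide with slower adoption, so both the correlation and the fitted slope come out negative, mirroring the direction (though not the magnitude) of the study's reported r = -0.62.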
Article Details
![Creative Commons License](http://i.creativecommons.org/l/by/4.0/88x31.png)
This work is licensed under a Creative Commons Attribution 4.0 International License.