AI CREDIT ASSESSMENT: ANALYZING THE FUTURE OF LENDING ACROSS INDIA AND OTHER EMERGING ECONOMIES

By Zoya Farah Hussain and Pratik Biswal.

INTRODUCTION

The financial systems of India and the Global South occupy an intermediate position between paper-based traditional banking and credit assessment methodologies driven by artificial intelligence (“AI”). The recent transformation of banks’ credit assessment methods has facilitated a shift from manual, branch-based reviews of paper documents to automated scoring by AI systems that can process a multitude of data points in seconds.

Applying for a loan has traditionally been an arduous process. Typically, it involves a branch visit, extensive paper documentation (such as income certificates and bank passbooks), and weeks of waiting for approval—which often relies on a loan officer’s subjective judgment. AI credit scoring compresses this process into an instant algorithmic decision, expanding access while removing human discretion as a potential check on socio-cultural, gender, and ethnic biases.

However, AI-driven credit assessments are not only expediting processes but also reshaping the legal concept of creditworthiness, which reflects a borrower’s assessed capacity and willingness to repay a loan based on past financial patterns.1 This piece examines the rise in the use of AI credit assessment systems in developing economies and their associated pitfalls.

EXISTING FRAMEWORKS

Understanding existing governance structures and their shortcomings requires analyzing how courts and legislatures have approached automated credit scoring. In Joachimson v. Swiss Bank Corp., the court highlighted the foundational principles of the banker-customer relationship, specifically the bank’s obligation to maintain the confidentiality of its customers’ account information and the customer’s entitlement to receive accurate information about their account.2 Digital innovations in credit assessment have transformed that relationship from a personal, discretionary exchange into an algorithmic decision-making process that determines a borrower’s borrowing power and repayment potential without any human involvement. The Court of Justice of the EU addressed this change directly in the SCHUFA case, OQ v. Land Hessen.3 The decision established that AI-based credit scoring constitutes “automated decision-making” under Article 22 of the General Data Protection Regulation (“GDPR”), thereby entitling borrowers to a transparent explanation of how algorithmic decisions are reached.4

However, significant legislative gaps remain in regulating AI credit assessment systems. The EU Artificial Intelligence Act (2024) categorizes AI models according to risk levels and imposes corresponding compliance obligations. Credit scoring systems are classified as “high-risk” under Annex III, triggering requirements such as conformity assessments and transparency obligations.5 Yet these obligations will not take full effect until 2026, creating a transitional compliance gap. Meanwhile, no comparable AI-specific regulatory framework governing high-risk credit scoring systems currently exists across much of the Global South. What exists instead are general data protection regimes, significant in their own context but not designed to address the algorithmic harms that arise specifically from automated credit decision-making.

For instance, the Monetary Authority of Singapore has issued guidance on AI governance, including the 2018 Fairness, Ethics, Accountability, and Transparency (“FEAT”) principles and the 2020 Model AI Governance Framework, which emphasizes regular model updates and auditability through internal reviews and third-party assessments.6 Similarly, countries such as Kenya, Nigeria, and Brazil have enacted data protection regimes, including Kenya’s Data Protection Act (2019), Nigeria’s Data Protection Regulation (2019), and Brazil’s General Data Protection Law (2020).7 These regimes provide the legal foundation for AI-based credit assessment using alternative data sources such as mobile payments, social media activity, and e-commerce transactions.

While these developments have expanded financial inclusion, they often fail to account for context-specific forms of social stratification, including caste, religion, and occupation, which conventional Western anti-discrimination frameworks do not adequately capture.

BEYOND THE LETTER: THE CHASM BETWEEN POLICY AND PRACTICE

Algorithmic bias in AI credit assessments can arise from the datasets on which these systems are trained. These datasets can reproduce past inequities in three ways:

  1. First, when past loans were made in an exclusionary way, the results of that process are reflected in the initial data set, which records who received loans and who failed to repay them. For example, redlining practices in the United States systematically denied credit to Black communities; when historical data reflecting these patterns is used to train AI models, those systems can learn and replicate the same discriminatory outcomes.
  2. Second, seemingly neutral variables such as postal codes, education level, and job classifications can act as proxy measures of race, caste, or class. In India, postal codes are directly correlated with the caste composition of an area, while certain occupations, such as “manual laborer” or “domestic worker,” are directly correlated with caste or tribal origin. Because AI does not distinguish between correlation and causation, it treats the proxy variable as a direct cause of risk, thereby penalizing individuals for historical and social conditions tied to that variable, as the sketch following this list illustrates.8
  3. Third, because marginalized groups are under-represented in training sets, models are naturally less accurate when predicting outcomes for those groups. For instance, many developing economies rely on large informal and gig-based labor markets in which workers engaged in daily wage labor, domestic work, or platform-based gig employment lack formal pay slips or bank statements. AI models trained predominantly on urban, salaried populations simply have insufficient data on these groups, producing less accurate and often more adverse predictions for the populations that need affordable credit the most.9
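
One practical way to surface such proxies is to test whether a supposedly neutral input can predict the protected attribute at all. The minimal sketch below uses scikit-learn and entirely synthetic data; the feature, the protected attribute, and the correlation strength are hypothetical assumptions for illustration:

```python
# Sketch: flagging a proxy variable with synthetic data. All names and
# values here are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000

# Hypothetical protected attribute (e.g., membership of a marginalized group).
protected = rng.integers(0, 2, size=n)

# A "neutral" feature (an encoded postal code, say) that residential
# segregation has made correlated with the protected attribute.
postal_code_feature = protected + rng.normal(0.0, 0.7, size=n)

X_train, X_test, z_train, z_test = train_test_split(
    postal_code_feature.reshape(-1, 1), protected, random_state=0)

# If the feature alone predicts the protected attribute well above chance
# (AUC of 0.5), it is functioning as a proxy and warrants scrutiny.
clf = LogisticRegression().fit(X_train, z_train)
auc = roc_auc_score(z_test, clf.predict_proba(X_test)[:, 1])
print(f"Proxy AUC for protected attribute: {auc:.2f}")
```

An audit of this kind does not require access to model internals, which is part of why disclosure of input variables (discussed below) matters: regulators can only run such checks on features they know exist.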

The consequences of these blind spots are measurable. Singapore’s FEAT Principles and comparable frameworks treat discrimination as a matter of enumerated categories such as race, sex, and religion; caste, descent, and hereditary occupation fall entirely outside those categories, leaving the primary mechanisms of financial exclusion in South Asia and Sub-Saharan Africa without legal remedy. Empirical evidence from India highlights this disparity: one study found that loan approval rates were 72% for general category applicants, 65% for Other Backward Classes (“OBCs”), and 58% for Scheduled Castes and Scheduled Tribes (“SC/STs”).10 These groups also face higher interest rates, with OBC borrowers paying approximately 1.2% more and SC/ST borrowers paying 2.5% more on average.11 When the training data reflects entrenched social hierarchies, AI systems encode and amplify existing inequalities, rendering them technically efficient but substantively exclusionary.
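
How far such approval gaps fall outside imported benchmarks can be checked with simple arithmetic. The sketch below applies the U.S. “four-fifths” adverse-impact rule, assumed here purely as an illustrative yardstick rather than a legal standard in the jurisdictions discussed, to the approval rates cited above:

```python
# Adverse-impact ("four-fifths") check applied to the approval rates cited
# above. The 80% threshold is a U.S. EEOC convention, used here only as an
# illustrative benchmark.
approval_rates = {"General": 0.72, "OBC": 0.65, "SC/ST": 0.58}

reference = max(approval_rates.values())  # most-favored group's rate (0.72)
for group, rate in approval_rates.items():
    ratio = rate / reference
    status = "below threshold" if ratio < 0.8 else "within threshold"
    print(f"{group}: selection ratio = {ratio:.2f} ({status})")
# OBC: 0.90; SC/ST: 0.81, narrowly above the 0.8 line despite a 14-point gap
```

That a fourteen-percentage-point gap between general and SC/ST applicants still narrowly clears the conventional 0.8 threshold underscores why imported metrics alone cannot anchor fairness review in these contexts.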

BRIDGING THE GAP: SUGGESTIONS AND WAY FORWARD

Interventions are needed at the model level to address algorithmic bias in AI credit systems. One proposed solution to the proxy-variable problem is adversarial debiasing, in which the credit model is trained alongside a competing model, the adversary, that attempts to predict a protected characteristic, such as caste or nationality, from the credit model’s outputs. By forcing the primary model to produce scores from which the adversary cannot recover the protected class, adversarial debiasing reduces discriminatory outcomes while largely preserving predictive accuracy.12 However, debiasing will need to move beyond Western AI frameworks to be effective; models must also be calibrated to the regional, seasonal, and occupational realities of informal economies, including the cyclical income patterns of agricultural workers, the cash-based transactions of daily wage earners, and the absence of formal employment records among gig workers in developing countries. A minimal sketch of the adversarial training dynamic follows.
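
The sketch below illustrates the technique in the spirit of Zhang et al.’s method (see footnote 12), assuming PyTorch and fully synthetic data; the features, protected attribute, and trade-off weight `lam` are hypothetical placeholders rather than a production design:

```python
# Sketch of adversarial debiasing (after Zhang et al. 2018); synthetic data,
# hypothetical variables, and an illustrative trade-off weight `lam`.
import torch
import torch.nn as nn

torch.manual_seed(0)
n, d = 2000, 8
X = torch.randn(n, d)                                   # applicant features
z = (X[:, 0] + 0.5 * torch.randn(n) > 0).float()        # protected attribute
y = ((X[:, 1] - 0.8 * z + 0.3 * torch.randn(n)) > 0).float()  # biased label

predictor = nn.Linear(d, 1)                             # credit-scoring model
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-2)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-2)
bce = nn.BCEWithLogitsLoss()
lam = 1.0                                               # fairness/accuracy trade-off

for step in range(2000):
    score = predictor(X).squeeze(1)

    # (1) The adversary learns to recover the protected attribute from the
    #     credit score alone (the score is detached so only the adversary
    #     updates here).
    adv_loss = bce(adversary(score.detach().unsqueeze(1)).squeeze(1), z)
    opt_a.zero_grad(); adv_loss.backward(); opt_a.step()

    # (2) The predictor minimizes credit-prediction error while maximizing
    #     the adversary's error, pushing its scores toward carrying no
    #     recoverable information about z.
    leak = bce(adversary(score.unsqueeze(1)).squeeze(1), z)
    loss = bce(score, y) - lam * leak
    opt_p.zero_grad(); loss.backward(); opt_p.step()
```

In practice, `lam` is tuned until the adversary’s held-out accuracy falls to near chance, the point at which the credit score carries little recoverable information about the protected class.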

In addition to the technical measures outlined above, institutional accountability is also necessary. One way to facilitate this could be to establish an AI Credit Scoring Oversight Body within the central banks of Global South countries. This body would bring together a multidisciplinary team of experts in technology, ethics, law, and consumer protection to operationalize, at the domestic level, the explainability standard derived from Article 22 of the GDPR. The body would perform three essential functions:

  1. First, it would ensure that all AI credit scoring models used within its jurisdiction are registered publicly, including mandatory disclosure of input variables, model methodology, and the fairness metrics used to assess performance (a hypothetical registry-entry schema is sketched after this list). The metrics must be context-specific, recognizing caste, descent, and tribal identity as cognizable grounds of disparity rather than relying on the Western taxonomy of race, sex, and religion alone.
  2. Second, it would mandate annual algorithmic audits with specific attention to caste, gender, and descent-based disparities in approval rates and interest rate pricing, modeled on the precedent set by New York City’s Local Law 144 (2023), which requires bias audits of automated employment tools and demonstrates that such mandates are practically implementable.13 
  3. Third, it would require that credit decisions be explained to borrowers in clear, accessible language, including in vernacular languages and regional dialects, and in multiple formats such as visual and audio, so that the right to an explanation is not rendered illusory by literacy barriers or language exclusion.
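
To make the first function’s disclosure requirement concrete, one possible shape for a public registry entry is sketched below; every field name, identifier, and value is hypothetical (the two metric values simply echo the Indian study cited earlier) and is not drawn from any existing regulation:

```python
# Hypothetical public registry entry for an AI credit-scoring model.
# Every field name and value here is illustrative; no existing regulator
# currently mandates this schema.
from dataclasses import dataclass

@dataclass
class RegistryEntry:
    model_id: str                       # public identifier for the model
    lender: str                         # deploying institution
    methodology: str                    # e.g., "gradient-boosted trees"
    input_variables: list[str]          # all features, incl. potential proxies
    fairness_metrics: dict[str, float]  # context-specific disparity measures
    last_audit_date: str                # most recent annual bias audit

entry = RegistryEntry(
    model_id="IN-CRD-0001",
    lender="Example NBFC Ltd.",
    methodology="gradient-boosted decision trees",
    input_variables=["income", "repayment_history", "postal_code"],
    # Caste- and descent-aware metrics, per the first function above:
    fairness_metrics={"approval_ratio_sc_st_vs_general": 0.81,
                      "interest_rate_spread_sc_st_pct": 2.5},
    last_audit_date="2025-01-15",
)
```

A schema of this kind would give auditors and civil-society groups a fixed, machine-readable target for the annual audits described in the second function.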

Underpinning both bias mitigation and institutional accountability is the question of who controls credit data and on what terms. In many developing economies, credit-relevant data, including non-traditional data such as mobile payment histories and e-commerce behavior, is concentrated in the hands of a small number of private fintech firms with no public accountability obligations and strong commercial incentives to monetize it. This concentration creates structural barriers for smaller lenders and leaves borrowers with no meaningful ability to challenge the data being used against them. To address this, governments could promote neutral data trusts: independently governed, anonymized, and regularly audited repositories that pool credit information across institutions, preventing any single entity from holding a monopoly over the data inputs that determine financial access.14 All credit models drawing on these trusts would also be required to register with the central authority and file public Algorithmic Impact Assessment reports.
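
A minimal sketch of how such a trust might gate access appears below; the class, method names, and the minimum-cohort threshold of 50 are assumptions for illustration, not any existing system’s API:

```python
# Sketch of a data-trust query gate: aggregate-only access, a minimum cohort
# size, and an audit log. All names and the k = 50 threshold are
# illustrative assumptions.
from statistics import mean

class CreditDataTrust:
    MIN_COHORT = 50  # refuse statistics over groups small enough to re-identify

    def __init__(self, records):
        self.records = records  # pooled, anonymized credit records (dicts)
        self.audit_log = []     # every query is recorded for the overseer

    def avg_repayment_rate(self, requester, **filters):
        """Return an aggregate statistic; never expose individual records."""
        cohort = [r for r in self.records
                  if all(r.get(k) == v for k, v in filters.items())]
        self.audit_log.append((requester, filters, len(cohort)))
        if len(cohort) < self.MIN_COHORT:
            raise ValueError("cohort below anonymity threshold")
        return mean(r["repaid"] for r in cohort)
```

Under this design, a registered lender could obtain aggregate repayment statistics for a region or occupation without ever seeing individual records, while the oversight body reviews the audit log for abusive query patterns.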

CONCLUSION

Achieving a fair AI credit assessment framework in the Global South requires legally enforceable explainability and the calibration of models to local data contexts. This approach must move beyond the procedural minimalism of current legal frameworks and instead establish meaningful accountability for algorithmic systems within specific socio-cultural and economic environments. Together, technically calibrated debiasing, an empowered institutional oversight body, and publicly accountable data governance present a regulatory framework capable of transforming AI from a potential driver of inequality into a tool for genuine financial inclusion.

  1. Sameer Avasarala & Aryashree Kunhambu, Adoption of Artificial Intelligence in the FinTech Sector: A Regulatory Overview, Mondaq (Jan. 9, 2025), https://www.mondaq.com/india/fin-tech/1566150/adoption-of-artificial-intelligence-in-the-fintech-sector-a-regulatory-overview.
  2. Joachimson v. Swiss Bank Corp., [1921] 3 K.B. 110 (C.A.).
  3. Case C-634/21, OQ v. Land Hessen, ECLI:EU:C:2023:940 (CJEU, Dec. 7, 2023).
  4. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data, art. 22, 2016 O.J. (L 119) 1 [GDPR].
  5. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 Laying Down Harmonised Rules on Artificial Intelligence, Annex III, Point 5(b), 2024 O.J. (L) (EU) [EU AI Act].
  6. Monetary Auth. of Singapore, FEAT Principles: Fairness, Ethics, Accountability and Transparency in the Use of Artificial Intelligence and Data Analytics in Singapore’s Financial Sector (Nov. 2018); Monetary Auth. of Singapore, Model AI Governance Framework (2d ed. Jan. 2020).
  7. Data Protection Act, No. 24 of 2019 (Kenya), Kenya Gazette Supplement No. 174, Acts No. 24 (Nov. 8, 2019); Nat’l Info. Tech. Dev. Agency, Nigeria Data Protection Regulation 2019 (Jan. 25, 2019), superseded by Nigeria Data Protection Act 2023; Lei Geral de Proteção de Dados Pessoais [LGPD], Lei No. 13.709, de 14 de agosto de 2018 (Braz.), effective Aug. 16, 2020.
  8. Richard Rothstein, The Color of Law: A Forgotten History of How Our Government Segregated America 64–98 (2017); see also Robert G. Schwemm & Jeffrey L. Taren, Discretionary Pricing, Mortgage Discrimination, and the Fair Housing Act, 45 Harv. C.R.-C.L. L. Rev. 375 (2010); Rashida Richardson, Jason Schultz & Kate Crawford, Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice, 94 N.Y.U. L. Rev. Online 192 (2019).
  9. Int’l Labour Org., World Employment and Social Outlook 2021: The Role of Digital Labour Platforms in Transforming the World of Work 13–17 (2021).
  10. Ministry of Social Justice & Empowerment (India), Study on Caste-Based Discrimination in Fintech Lending (2024).
  11. Id.
  12. Brian Hu Zhang, Blake Lemoine & Margaret Mitchell, Mitigating Unwanted Biases with Adversarial Learning, in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society (AIES ’18) (2018).
  13. N.Y.C. Local Law No. 144 of 2021, codified at N.Y.C. Admin. Code § 20-871 (2023).
  14. Sylvie Delacroix & Neil D. Lawrence, Bottom-Up Data Trusts: Disturbing the ‘One Size Fits All’ Approach to Data Governance, 9 Int’l Data Privacy L. 236 (2019).
