
Why AI in Lending Needs a Moral Compass



If a human loan officer rejected your application because you are a woman, come from a lower caste, or lack English fluency — you’d protest.


But what if the rejection came from a machine?


No face. No explanation. No accountability. Just a cold red cross that says: Declined.


Welcome to the age of algorithmic lending — where decisions are fast, data-rich, and dangerously opaque.


India’s digital lending is no longer a fringe experiment. It’s mainstream, scaling faster than regulators can blink. But in this rush to digitize credit, we’ve forgotten to ask: Who trained the algorithm? Who monitors it? And most critically — who takes the blame when it’s wrong?


The Statistical Reality


According to the RBI:


  • Digital lending accounted for just 1.2% of retail credit in FY 2020.

  • This figure more than doubled to 2.5% by FY 2024.

  • It is projected to exceed 5% by FY 2028.


According to a Boston Consulting Group (BCG) report, India’s digital lending market is expected to hit ₹29.2 lakh crore by 2025, up from ₹9.2 lakh crore in 2019.


A 2023 TransUnion CIBIL analysis revealed that nearly 33% of all new-to-credit borrowers accessed credit first through digital platforms, often via Buy Now Pay Later (BNPL) or short-term personal loans.


But here's the risk: A study by NITI Aayog in 2022 warned that algorithmic decision-making may amplify socio-economic bias, especially when alternative data like geolocation or spending patterns are prioritized over conventional financial health indicators.


This growth rides on AI-powered engines — tools built to underwrite, price, and approve loans within seconds. But in eliminating human bias, we risk building automated bias at scale.


The consequences? As per TransUnion CIBIL, nearly 75% of loan applications in India are rejected — primarily due to inadequate or non-existent credit history. These aren't always risky borrowers — many are simply invisible to the algorithm.


On the other end, India’s banking sector sees a 3–4% customer churn per month, often accelerated by frustrating, unexplained rejections. These may include otherwise loyal customers who feel dehumanized by opaque algorithms.


These aren't just missed loans. They are lost relationships, lost trust, and lost opportunities for long-term value creation.


Reconciling TAT with Ethics

In a hyper-competitive lending market, fast decisions are a business imperative. Customers expect instant approvals, and financial institutions are measured on efficiency. However, speed without scrutiny can breed silent harm.


Ethical AI doesn’t mean slow AI. It means smart AI. Institutions can preserve turnaround time (TAT) through the following (a code sketch follows the list):


  • Segmented routing: Let clean, low-risk profiles flow through straight-through processing (STP). Flag borderline or low-data cases for manual review.


  • Explainable AI: Models that log and communicate why a decision was made can be faster to audit, escalate, and correct.


  • Parallel workflows: Real-time escalation queues for exceptions can reduce delays without compromising on fairness.
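To make these three ideas concrete, here is a minimal sketch of such a router in Python. It is illustrative only: the thresholds, the `Decision` structure, and the function names are assumptions for this article, not any lender’s actual decisioning logic, and a production system would draw these values from validated models and documented risk policy.

```python
from dataclasses import dataclass, field

# Illustrative thresholds: in practice these come from model validation
# and documented risk-appetite policy, not hard-coded constants.
AUTO_APPROVE_SCORE = 0.85
AUTO_DECLINE_SCORE = 0.30
MIN_DATA_COMPLETENESS = 0.60   # thin-file applicants go to a human, not to "declined"

@dataclass
class Decision:
    outcome: str                                      # "approved" | "declined" | "manual_review"
    reasons: list[str] = field(default_factory=list)  # logged for audit and borrower communication

def route_application(score: float, data_completeness: float) -> Decision:
    """Segmented routing: clean cases flow straight through (STP);
    borderline or low-data cases escalate to a human review queue."""
    # Lack of data is routed to review, never treated as proof of risk.
    if data_completeness < MIN_DATA_COMPLETENESS:
        return Decision("manual_review", ["thin file: insufficient data for automated decision"])
    if score >= AUTO_APPROVE_SCORE:
        return Decision("approved", ["score above auto-approve threshold"])
    if score <= AUTO_DECLINE_SCORE:
        # Even automated declines carry explicit, auditable reason codes.
        return Decision("declined", ["score below auto-decline threshold"])
    # Borderline band: a parallel escalation queue preserves TAT for everyone else.
    return Decision("manual_review", ["borderline score: escalated to credit officer"])
```

The design choice that matters is the first branch: a thin file is an information problem, not a risk verdict, so it is routed to a person rather than auto-declined.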


The goal isn’t to slow down lending — it’s to speed up just lending: lending that is fair as well as fast.


With the right architecture, fairness and efficiency can coexist — not compete.

Today, much of this speed comes from STP, where credit decisions are taken automatically, without any human interaction. While STP has improved efficiency, it has also removed the last layer of empathy and scrutiny. A model trained on flawed assumptions doesn’t just deny one loan — it silently filters out millions. And since STP doesn’t pause to question itself, the damage happens at scale — and invisibly.

Yet, ironically, there are institutions that bypass these systems entirely when it suits them — routinely seeking to push through 'red-tagged' proposals as exceptions. When such overrides become institutional culture rather than legitimate risk-based judgment, it erodes the credibility of both automation and governance.


These exceptions are often justified under business pressure or internal influence — but every such case becomes a loophole that questions the moral integrity of the entire credit framework. If algorithms are rigid and humans are selectively lenient, we don’t have fairness — we have a dual standard masquerading as objectivity.


The Stories Behind the Statistics



1. Corporate Conflict: The Case of Vardhan Bank



Vardhan Bank, one of India’s top-tier private lenders, had built its digital loan platform on a self-learning AI model. Everything worked brilliantly — until internal audit flagged an alarming pattern: over 65% of MSME loan rejections in their semi-urban markets were traced to a single risk tag — "insufficient formal record."


One case that shook the board involved a profitable women-led dairy cooperative in Maharashtra. Despite a ₹15 lakh monthly turnover, their loan was denied because their income wasn’t digitally declared, and their CIBIL profiles were thin.


The fallout? The cooperative secured funding from a competitor — and brought 82 new current accounts and FD relationships with them.


The CEO's internal note later read:

“Our model filters out risk. But today, it filtered out opportunity, goodwill, and long-term value. We need to design not just for precision, but for principle.”

This is what happens when AI systems don’t just calculate — they conclude. And when those conclusions are misaligned with intent, they cost more than loans — they cost relationships.


2. Rekha, the Invisible Entrepreneur


Rekha, a 38-year-old single mother from Madurai, applied for a ₹75,000 loan to expand her tailoring unit. Despite a clean repayment history and steady income, the bank’s AI platform rejected her within seconds. The reason? "Low digital engagement" and "unstable cash flow."


Her cash-based business model didn’t match the algorithm's preferred digital profile. It took human intervention to override the system, prove her creditworthiness, and approve the loan. But most borrowers like Rekha don’t get second chances.


3. Aarav, Penalised for Patterns




Aarav Mehta, a 29-year-old data analyst in Bengaluru, applied for a used car loan. His CIBIL score was a strong 774. Yet he was quoted a 13.75% interest rate — nearly 4 percentage points higher than what a colleague with a similar income was offered.


The AI system flagged his recent job switch, frequent address changes, and online gadget spends as risky behaviour. There was no room to explain that he had zero defaults and regular savings. Aarav took the loan, but moved his banking relationship elsewhere.




4. Irfan, the Unseen Borrower


In Barabanki, UP, Irfan Qureshi ran a modest mobile recharge store. When he sought a ₹60,000 loan to expand, he was rejected outright by a digital NBFC. His business was informal, cash-based, and not PAN-linked — making him invisible to the credit algorithm.

Despite being reliable, Irfan was pushed toward a moneylender charging 36% interest — because the AI simply couldn't see him.


😶 This Is Not a Glitch. It's the Design.


Rekha was invisible because she dealt in cash. Aarav was penalised because he moved houses. Irfan was excluded because he didn’t exist in a database.


This is not a bug. It’s the design. AI doesn’t see people — it sees patterns. It doesn’t understand intent — only correlation.


And to be clear — this design was built by people. AI doesn’t choose its goals. Humans do. If it rewards digital footprints over repayment intent, that’s a human choice. If it penalises low-income borrowers for living in the “wrong” postal code, that’s a human bias coded into a machine.


Flawed design isn’t a machine error. It’s a reflection of human priorities — and sometimes, our blind spots.


And in the process, it ends up replacing nuanced, compassionate lending with cold, coded logic that excludes those who most need inclusion.


The RBI Speaks: Mandates for Responsible AI

1. Digital Lending Guidelines (Sept 2, 2022)

RBI Circular DOR.CRE.REC.66/21.07.001/2022-23:

“Credit-decision algorithms must be designed to flag any potential discrimination factors and be fully auditable.”

This clause is central to ethical AI: bias detection and model explainability are no longer optional — they are regulatory expectations.


2. Master Directions on Digital Lending (May 8, 2025)

“Certain concerns had emerged around the methods of designing, delivering and servicing digital credit products… these concerns, if not mitigated, may impact the borrower’s confidence in the digital lending ecosystem.”

Here, the RBI underscores the need for fairness, transparency, and borrower protection.


The Courts Weigh In: Legal Backing for Ethical AI

1. Delhi High Court, Jan 25, 2023

In a PIL, the Court directed the RBI and Centre to enforce digital lending regulations:

"...the RBI and the Government of India shall ensure strict compliance of the regulatory framework… and take immediate steps in accordance with law."

This shows that even the judiciary recognises the risks of uncontrolled digital lending.

2. Dharanidhar Karimojji v. Union of India, Jan 23, 2023

The petitioner challenged opaque loan pricing, hidden fees, and algorithmic denials. Though not strictly AI-centric, the case reflects growing judicial attention to fairness and transparency in digital lending models.


Why a Moral Compass Matters

AI is not inherently biased, but it learns from the past. If past data reflects social exclusion, AI will replicate it. If algorithms optimize for default prediction alone, they may ignore intent, effort, or recovery.


A moral compass in AI for lending is not an idealistic luxury. It's a regulatory necessity, a legal expectation, and most importantly, a human obligation.

Because lending isn’t just about risk. It’s about trust.


⚖️ The Conscience Question


The question isn’t whether AI in lending is here to stay — it is. The question is: Will we make it fair before it becomes irreversible?


Because lending is not just about data. It’s about dignity. Not just about profiling risk. But about empowering lives.


If an algorithm can approve a ₹50,000 loan in 5 seconds, it should also be able to explain why it denied it.


The RBI has sounded the alarm. The courts have echoed the concerns. But unless banks, fintechs, and data scientists build with conscience, we are coding injustice into our financial systems — one decision at a time.


So before we ask if the AI is accurate — let’s ask if it’s just.


Because the future of lending should not just be digital. It must be ethical.


Integrating ERM: Ethical AI as an Enterprise Risk Imperative

🔍 Fraud Risk Perspective – ACFE Insights


According to the ACFE’s 2024 Report to the Nations, over 50% of occupational fraud cases stem from internal failings — 32% due to lack of controls and 19% from override of controls. In an AI-driven lending world, similar vulnerabilities apply:


  • When models are opaque and human oversight is limited or easily bypassed, risk tags and red flags become easy targets for override — especially amid business pressures (a minimal override-ledger sketch follows this list).

  • Without integrated fraud-risk governance, lenders may unintentionally enable biased flagging, manipulated model outcomes, or selective approvals that erode trust and oversight.
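One way to close that loophole is to make every override of a model red-tag leave a mandatory audit trail, and to monitor override volume at portfolio level. The sketch below is a hypothetical illustration in Python; the 5% trigger and the field names are assumptions for this article, not ACFE or RBI prescriptions.

```python
from datetime import datetime, timezone

OVERRIDE_RATE_TRIGGER = 0.05   # assumed: >5% of red-tags overridden triggers a governance review

class OverrideLedger:
    """Every manual exception to a model red-tag is recorded with who,
    why, and when; clustered overrides surface as a governance event."""

    def __init__(self) -> None:
        self.red_tags = 0
        self.overrides: list[dict] = []

    def record_red_tag(self) -> None:
        self.red_tags += 1

    def record_override(self, application_id: str, approver: str, justification: str) -> None:
        # No silent overrides: identity and rationale are mandatory.
        if not justification.strip():
            raise ValueError("an override requires a documented justification")
        self.overrides.append({
            "application_id": application_id,
            "approver": approver,
            "justification": justification,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        })

    def breach(self) -> bool:
        """True when the override rate exceeds the portfolio-level trigger."""
        return len(self.overrides) / max(self.red_tags, 1) > OVERRIDE_RATE_TRIGGER
```

When overrides must be owned and explained in writing, "business pressure" stops being an invisible pathway around the model.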


Moreover, ACFE findings show that 8 in 10 fraud prevention teams plan to deploy generative AI by 2025 — a double-edged sword. While such tools can aid detection, unchecked models invite exploitation: fraudsters may reverse-engineer approvals or feed deceptive patterns into AI systems.

Ethical AI must therefore be strengthened not only through fairness but also through robust process integrity, auditability, and cross-functional risk oversight.

AI-based lending doesn't just fall under technology governance — it’s now a core Enterprise Risk Management (ERM) concern. When algorithms can influence systemic exclusion, mispricing, or reputational fallout, they pose risks across multiple dimensions:


  • Credit Risk: Over-optimizing for past patterns can blind the system to viable borrowers who defy traditional data logic.

  • Operational Risk: Unsupervised models introduce the risk of unintended discrimination, opaque denials, and governance failure.

  • Compliance Risk: Non-alignment with RBI directions or legal rulings (as seen in recent court cases) can trigger regulatory sanctions.

  • Reputational Risk: Public backlash or borrower distrust stemming from unfair AI decisions can erode long-standing brand equity.


A Sound ERM Framework Must Include:

  • Model Risk Governance Committees involving Risk, Compliance, IT, and Business.

  • Bias Testing and Simulation as part of model validation (see the parity-check sketch after this list).

  • Exception Tracking with triggers for red-flag reviews at portfolio and policy levels.

  • Incident Disclosure and Grievance Loopbacks integrated into ERM dashboards.
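As a hedged illustration of the bias-testing item above, here is a minimal approval-rate parity check across a protected attribute. The four-fifths (0.8) ratio used as the flag is borrowed from US fair-lending practice purely as an example trigger; it is not an RBI-mandated threshold, and the record layout and `group_key` are assumptions.

```python
from collections import defaultdict

def disparate_impact(decisions: list[dict], group_key: str = "gender") -> dict:
    """For each group, compare its approval rate to the most-favoured
    group's rate; ratios below 0.8 are flagged for validation review."""
    approved: dict = defaultdict(int)
    total: dict = defaultdict(int)
    for d in decisions:
        g = d[group_key]
        total[g] += 1
        approved[g] += (d["outcome"] == "approved")
    rates = {g: approved[g] / total[g] for g in total}
    best = max(rates.values()) or 1.0   # guard against an all-declined sample
    return {
        g: {"approval_rate": round(r, 3),
            "impact_ratio": round(r / best, 3),
            "flag": (r / best) < 0.8}
        for g, r in rates.items()
    }
```

Run on every model release and on live portfolios, a check like this turns "bias testing" from a policy phrase into a number a Model Risk Governance Committee can act on.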


ERM must treat AI decisions with the same rigor traditionally reserved for credit, market, or liquidity risks. Because in a data-first lending ecosystem, the algorithm is the policy — and every flawed policy is a ticking risk event.


Next Steps for Ethical AI in Lending


  • Conduct regular fairness audits of models.

  • Include human-in-the-loop interventions for borderline cases.

  • Maintain grievance redressal options beyond bots.

  • Ensure explainable decisions, not just optimal ones (a reason-code sketch follows).
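To show what "explainable" can mean in practice, here is one common pattern (not necessarily what any given lender deploys): with a linear scorecard, compare each applicant’s feature contributions against a reference profile and surface the biggest gaps as adverse-action reasons. The weights, feature names, and reference values below are invented for illustration.

```python
# Illustrative scorecard weights and reference profile; not a real model.
WEIGHTS = {
    "repayment_history": 2.1,
    "income_stability": 1.4,
    "digital_footprint": 0.6,   # deliberately low-weighted: see the bias discussion above
    "existing_debt": -1.8,
}
REFERENCE = {
    "repayment_history": 0.8,
    "income_stability": 0.7,
    "digital_footprint": 0.5,
    "existing_debt": 0.3,
}

def denial_reasons(features: dict[str, float], top_n: int = 3) -> list[str]:
    """Return the features where this applicant loses the most score
    relative to the reference profile, as human-readable reason codes."""
    gaps = {name: WEIGHTS[name] * (features[name] - REFERENCE[name]) for name in WEIGHTS}
    worst = sorted(gaps.items(), key=lambda kv: kv[1])[:top_n]
    return [f"{name}: cost {abs(gap):.2f} score points versus the reference profile"
            for name, gap in worst if gap < 0]

# A denial now comes with concrete, contestable reasons rather than a red cross.
print(denial_reasons({"repayment_history": 0.2, "income_stability": 0.1,
                      "digital_footprint": 0.0, "existing_debt": 0.9}))
```

The same reason codes that go to the borrower can feed the grievance loopbacks and ERM dashboards described earlier, so explanation and oversight share one source of truth.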


The future of AI in lending isn’t just about speed or scale. It’s about integrity.

Let us build systems that don’t just approve or deny — but understand.




