Abhishek Nagesh is a veteran finance professional with over 15 years of experience in financial accounting, regulatory compliance, and enterprise risk management. With deep expertise in BCBS, PRA, and US Tri-agency regulations, he is widely regarded as a subject matter expert in credit risk, counterparty credit risk, and market risk methodologies.
As artificial intelligence and machine learning continue to reshape the financial sector, Abhishek is at the forefront of how institutions are rethinking traditional risk frameworks, shifting from manual, reactive models to proactive, AI-driven systems that can analyze massive datasets, detect patterns, and inform real-time decision making.
In this interview, he discusses how advanced technologies are transforming credit and market risk management, the potential blind spots of automation, and how financial institutions can strike the right balance between innovation and oversight.
How are artificial intelligence and machine learning reshaping how financial institutions approach credit and market risks?
Artificial intelligence and machine learning are fundamentally transforming how banks approach credit and market risks today.
Banks are moving from manual, subjective qualitative assessments to instantaneous, data-driven, automated credit risk evaluations. For example, machine learning models now assist in obligor risk rating (credit scoring) by analyzing far more data about a borrower than a human could: not just financial statements, but transaction and internet search history, spending patterns, market trends, and even news and social media interactions.
This means credit decisions like loan approvals can happen far more quickly. Multiple AI tools, such as NLP, ML, and RPA, are increasingly combined to give a simple “go or no-go” recommendation on prospective loan exposures. Banks already use ML for fraud detection and anti-money-laundering oversight, catching suspicious patterns that traditional methods usually miss. The result is an early warning system: AI can detect subtle changes in a customer’s behavior or broader economic signals that suggest rising risk, allowing the bank to act in real time.
What algorithms or models are most effective in predicting and managing financial risk today?
Today’s risk management leverages various advanced algorithms and models, each suited to particular types of risk.
Here are some of the most effective ones in use. Banks increasingly use supervised learning algorithms like decision trees, random forests, and neural networks to predict the probability of default for loans and credit portfolios. These models often outperform traditional regression-based risk scorecards by capturing nonlinear interactions and drawing on a wider range of data. For instance, a gradient boosting model can analyze hundreds of borrower attributes (income, spending patterns, even social data where allowed) to produce a more precise credit risk score. Using ML models, banks can approve credit more confidently and identify risky loans earlier.
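To make the mechanics concrete, here is a minimal sketch of gradient boosting for probability-of-default scoring, using only the Python standard library. The borrower features and records are invented for illustration; a production model would use a library such as XGBoost or LightGBM and far richer data. The core idea is the same: repeatedly fit a small tree (here, a one-split "stump") to the residuals of a logistic model and accumulate the results into a score.

```python
import math

# Hypothetical borrower records: (income_k, debt_ratio, late_payments), default label
data = [
    (95, 0.15, 0, 0), (80, 0.25, 0, 0), (60, 0.40, 1, 0), (120, 0.10, 0, 0),
    (30, 0.70, 3, 1), (25, 0.80, 4, 1), (45, 0.60, 2, 1), (35, 0.65, 3, 1),
]
X = [row[:3] for row in data]
y = [row[3] for row in data]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_stump(X, resid):
    """Find the single feature/threshold split minimizing squared error on residuals."""
    best = None
    for j in range(len(X[0])):
        for t in sorted({x[j] for x in X}):
            left = [r for x, r in zip(X, resid) if x[j] <= t]
            right = [r for x, r in zip(X, resid) if x[j] > t]
            if not left or not right:
                continue
            lv, rv = sum(left) / len(left), sum(right) / len(right)
            err = (sum((r - lv) ** 2 for r in left)
                   + sum((r - rv) ** 2 for r in right))
            if best is None or err < best[0]:
                best = (err, j, t, lv, rv)
    return best[1:]

def boost(X, y, rounds=20, lr=0.3):
    """Fit stumps to the log-loss gradient (y - p), accumulating a score F."""
    ensemble, F = [], [0.0] * len(X)
    for _ in range(rounds):
        resid = [yi - sigmoid(fi) for yi, fi in zip(y, F)]
        j, t, lv, rv = fit_stump(X, resid)
        ensemble.append((j, t, lv, rv))
        F = [fi + lr * (lv if x[j] <= t else rv) for x, fi in zip(X, F)]
    return ensemble

def predict_pd(ensemble, x, lr=0.3):
    """Sum the leaf values along the ensemble and map the score to a probability."""
    score = sum(lr * (lv if x[j] <= t else rv) for j, t, lv, rv in ensemble)
    return sigmoid(score)

model = boost(X, y)
print(round(predict_pd(model, (110, 0.12, 0)), 2))  # low-risk profile
print(round(predict_pd(model, (28, 0.75, 4)), 2))   # high-risk profile
```

Real scorecards add regularization, monotonicity constraints, and explainability tooling on top of this loop, but the residual-fitting pattern is what lets boosted models capture the nonlinear interactions mentioned above.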
Financial fraud and money laundering are areas where AI models shine today. Techniques like clustering, outlier detection, and neural networks (especially autoencoders) are employed to flag unusual transaction patterns that indicate money laundering or illicit activity. These algorithms don’t necessarily predict a specific outcome; instead, they monitor behaviors and raise red flags when something looks off compared to a baseline. A model can project what a regular spending pattern on a credit card looks like for an individual and then instantly spot when a series of transactions doesn’t fit that pattern (potentially indicating a stolen card or account takeover). Similarly, banks use network analysis algorithms to detect rings of transactions that hint at money laundering. These AI-driven systems are far more effective than static rules because they can adapt to new fraud tactics as they emerge by recognizing shifts in the data.
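The baseline-comparison idea can be illustrated with something far simpler than the autoencoders and network analysis described above: a per-customer deviation check against historical spending. The history values and threshold here are invented for illustration; real systems learn richer, adaptive baselines.

```python
import statistics

# Hypothetical card history: typical transaction amounts for one customer
history = [42.0, 38.5, 55.0, 47.2, 40.1, 52.3, 44.8, 39.9, 49.5, 46.0]

def is_anomalous(amount, history, z_threshold=3.0):
    """Flag a transaction whose amount deviates sharply from the customer's baseline."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    z = abs(amount - mean) / stdev
    return z > z_threshold

print(is_anomalous(51.0, history))   # within the usual pattern
print(is_anomalous(950.0, history))  # far outside the baseline
```

A static z-score like this is exactly the kind of rule that generates false alarms on legitimate one-off purchases; the ML techniques named above earn their keep by modeling many dimensions of behavior at once and adapting the baseline over time.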
How can AI-driven tools improve the accuracy and efficiency of risk assessments compared to traditional methods?
AI-driven tools often outperform traditional risk management methods in both accuracy and efficiency. AI can sift through thousands of publicly available disclosures about an entity or counterparty and spot a hidden trend indicating inefficiencies or imbalances in its credit profile, which is the basis of credit strategy. Traditional methods, which rely on sampled data or simpler statistical models, might overlook these nuances; a human expert simply cannot match the speed and volume at which a machine processes information.
By catching these details, AI provides a more accurate picture of risk, with fewer blind spots and surprises. In short, decisions based on AI analysis rest on vast amounts of verified data, which improves their accuracy. Tasks that once took weeks, like creating macroeconomic factor simulations, are becoming automated through seamless data integration and embedded algorithmic processes, and can now be done in minutes. Many banks have piloted robotic process automation (RPA) to auto-fill regulatory report forms and gather data from various platforms. This automation speeds up reporting and reduces human error. Similarly, AI-driven risk models can recalculate exposures or simulate market shocks on the fly, giving risk managers up-to-the-minute insights rather than waiting for end-of-day reports. This efficiency lets banks respond to changing conditions faster, a competitive advantage when markets move at lightning speed.
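Recalculating exposures under simulated shocks can be sketched in a few lines. This is a deliberately tiny Monte Carlo value-at-risk example with invented positions and volatilities, using only the standard library; real engines revalue full instrument models under correlated, often non-normal shocks.

```python
import random

random.seed(7)

# Hypothetical portfolio: (position market value, assumed daily volatility)
positions = [(1_000_000, 0.010), (500_000, 0.020), (750_000, 0.015)]

def simulate_pnl(positions, n_scenarios=10_000):
    """Revalue the portfolio under random normal return shocks, collecting P&L."""
    outcomes = []
    for _ in range(n_scenarios):
        pnl = sum(value * random.gauss(0.0, vol) for value, vol in positions)
        outcomes.append(pnl)
    return outcomes

def value_at_risk(outcomes, confidence=0.99):
    """One-day VaR: the loss exceeded in only (1 - confidence) of scenarios."""
    outcomes = sorted(outcomes)
    idx = int((1 - confidence) * len(outcomes))
    return -outcomes[idx]

pnl = simulate_pnl(positions)
var_99 = value_at_risk(pnl)
print(f"99% one-day VaR: {var_99:,.0f}")
```

Because the whole loop is cheap to rerun, shocks can be re-simulated whenever positions or market data change, which is what makes the "up-to-the-minute insights" described above feasible.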
AI tools are excellent at continuous monitoring. For example, banks use AI to monitor transactions and communications 24/7 and flag anomalies immediately. Traditional rule-based systems might generate too many false alarms or miss novel fraud tactics; machine learning reinforced with human feedback, by contrast, can learn what “normal” behavior looks like and detect out-of-pattern events more accurately. Increasingly, AI also helps prevent insider market manipulation and abuse by monitoring trader chats and calls.
What are some of the potential dangers or blind spots that arise when relying on AI for regulatory compliance?
Relying too heavily on AI for regulatory compliance, without balanced human judgment, can introduce new risks and blind spots that banks need to manage carefully.
One significant danger is that, over time, AI models can evolve into a black box and begin to give results that are hard to explain or even contradictory. Suppose a machine learning model declines a loan or flags a transaction without a clear, detailed explanation. Compliance is about more than making the right decision: banks have to prove they followed the rules and made a prudent decision.
AI models learn, unlearn, and re-learn from data, and historical datasets can be biased or incomplete. If there were inadvertent biases in past lending, a credit risk AI might perpetuate or even amplify them; disadvantaging groups of borrowers based on ethnicity, for example, creates both compliance and ethical problems. Conversely, there may be blind spots in the data: if a certain type of risk never occurred in the past, the AI won’t know how to spot it in the future. This is a particular worry in regulatory compliance, where new rules may target issues that were not prevalent before; the AI might simply fail to flag a compliance issue because it has no precedent for it in its training data. That’s why regulators often emphasize examining AI models for bias and completeness, and precisely why human oversight is needed to catch what the AI might miss.
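One common, simple bias check is the "four-fifths rule" comparison of approval rates across groups. The decisions and group labels below are invented for illustration; real fairness reviews examine many metrics, not just this ratio.

```python
# Hypothetical model decisions tagged with a protected-group label (1 = approved)
decisions = [
    ("A", 1), ("A", 1), ("A", 0), ("A", 1), ("A", 1),
    ("B", 1), ("B", 0), ("B", 0), ("B", 0), ("B", 1),
]

def approval_rate(decisions, group):
    """Fraction of applicants in `group` that the model approved."""
    outcomes = [d for g, d in decisions if g == group]
    return sum(outcomes) / len(outcomes)

def disparate_impact(decisions, group_a="A", group_b="B"):
    """Ratio of approval rates; values below 0.8 fail the four-fifths rule."""
    return approval_rate(decisions, group_b) / approval_rate(decisions, group_a)

ratio = disparate_impact(decisions)
print(f"impact ratio: {ratio:.2f}")
```

Checks like this are descriptive, not curative: a failing ratio tells reviewers where to look, but fixing the underlying training data and features still requires the human judgment discussed above.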
How should financial institutions balance automation with human oversight in risk-related decision-making?
Banks should aim for a human-machine partnership in which the AI handles the heavy data lifting and humans provide strategic direction and judgment. The idea is to let AI do what it does best, i.e., processing vast amounts of information and identifying patterns and connections, while humans do what they do best: understanding context, making nuanced decisions, and ensuring ethical standards are met. Successfully adopting AI thus means combining expert human judgment with AI analytics. In practice, balancing automation and oversight might look like this: an AI system combs through thousands of contracts or derivative agreements and flags a handful as high risk based on complex patterns it has found. Instead of automatically acting on the flag, say by raising a margin call, a human credit officer reviews the AI’s findings, checks for any factors the model might not fully comprehend, and then makes the final decision.
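The division of labor described above is often implemented as a triage rule: the model decides automatically only at the confident extremes and routes everything in between to a person. The thresholds below are invented for illustration; in practice they are calibrated to the institution's risk appetite.

```python
def route_decision(pd_estimate, auto_approve_below=0.05, auto_decline_above=0.60):
    """Route a model's probability-of-default estimate to an action.

    Confident extremes are decided automatically; the ambiguous middle
    band goes to a human credit officer for review.
    """
    if pd_estimate < auto_approve_below:
        return "auto-approve"
    if pd_estimate > auto_decline_above:
        return "auto-decline"
    return "human-review"

print(route_decision(0.02))  # clearly low risk
print(route_decision(0.30))  # ambiguous: escalate to a human
print(route_decision(0.75))  # clearly high risk
```

Widening or narrowing the middle band is the practical dial for how much oversight humans retain: a wider band means more human review and less automation, and vice versa.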
As technology evolves, how should financial institutions prepare to adapt their risk strategies over the next five to ten years?
The rapid pace of technological change and emerging risk means financial institutions must make their risk management strategies flexible and future-proof. Risk strategies should adopt AI tools not as side experiments but as components integrated into the core risk framework. Over the next 5-10 years, we can expect further Basel regulatory revisions, new market shocks, and cross-risk capital requirements, all of which will likely require handling even more data and complexity. By building capabilities in machine learning, big data analytics, and automation now, banks lay the groundwork to tackle those future challenges. In practical terms, this could mean setting up dedicated AI model risk management (MRM) teams, investing in modern data infrastructure, and continually upgrading models with the latest techniques. Institutions that treat AI and advanced analytics as strategic risk assets (many already do) will have a significant edge in resilience and adaptability.
The next frontier of risk management goes beyond traditional credit or market risk. Banks should prepare to deal with cross-disciplinary risks such as climate change impacts on portfolios, cybersecurity threats to financial systems, risks from fintech innovations, and crypto-asset volatility. These areas are converging as new risk domains that will become part of mainstream risk regulations. For instance, regulators are already talking about climate stress tests for banks. Being proactive here – say, running internal climate risk assessments or monitoring crypto exposures even if not required yet – will prepare institutions for when these risks formally enter the regulatory sphere.