Artificial intelligence has been around for decades, but its presence in public consciousness has grown dramatically since the advent of generative AI tools like ChatGPT. This shift in perception has brought both benefits and costs for AI-focused fintechs.
Jamie Twiss, Founder and CEO of Beforepay, told The Financial Revolutionist that interest from financial institutions has grown. However, many discussions are now more about exploring AI concepts than about concrete plans to use Beforepay’s solutions.
I can say with a high level of confidence that in 10 years, everybody is going to be lending this way
Jamie Twiss, Founder and CEO of Beforepay
Despite some executives’ limited knowledge of the subject, Twiss predicts a major shift: he believes all lenders will soon use AI and machine learning to make lending decisions.
However, this change hasn’t been matched by regulatory updates. The disconnect between technology and law has left the industry uncertain, forcing firms to anticipate and prepare for potential state actions.
These regulatory uncertainties have sparked debate within the industry about which technologies and algorithms AI-driven platforms should use.
According to Stratyfy, a startup specializing in AI-driven predictive analytics and decision management for financial institutions, upcoming regulations should encourage the use of interpretable algorithms in lending and financial operations. Stratyfy prefers these over black-box models, and even over “explainable” ones, because interpretable models are transparent by design rather than explained after the fact, making them understandable to laypersons.
When you’re in financial services there are a lot of regulations, and when you’re with the regulators, anytime you’re using algorithms, you need to be able to explain what it is doing
Deniz A. Johnson, Stratyfy’s COO
“You have a very high risk of getting a fine if you can’t explain what you’re doing in an understandable way for all stakeholders, from your compliance team down to your regulators.”
Interpretable lending algorithms, in Stratyfy’s view, also create staffing advantages: explainable models still require data-fluent employees to translate complex algorithms into plain language.
“When you want to lend to more underrepresented groups, when you want to shift your credit box differently, you have the option to do that without needing a whole data science team,” Johnson said of interpretable algorithms. “I think we as a society have to make a decision about how much discretion do we want to give lenders on these lending decisions.”
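To make the distinction concrete, here is a minimal sketch of what an interpretable lending model might look like in practice. The feature names and data are hypothetical, not Stratyfy’s product; the point is that each weight maps directly to one input, so the decision logic can be read off without a data science team.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant features -- illustrative only, not Stratyfy's model.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "debt_to_income": rng.random(500),
    "on_time_payment_rate": rng.random(500),
})
y = (X["on_time_payment_rate"] - X["debt_to_income"] > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Each coefficient corresponds one-to-one to a feature, so a compliance
# officer can read the decision logic directly from the weights.
for name, coef in zip(X.columns, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

A black-box model trained on the same data would need a separate explanation layer before those stakeholders could see why any single applicant was approved or declined.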
Beforepay, for its part, argues that model design involves tradeoffs, particularly between a model’s interpretability and its accuracy.
Twiss believes model accuracy directly impacts fairness and inclusion, and advocates for a balance between algorithmic accessibility and complexity. However, he dismisses completely unexplainable neural-network-based models as unviable.
Twiss suggests using SHAP (SHapley Additive exPlanations) values to identify key variables in multifactor algorithms. Instead of highlighting every part of a decision tree, SHAP values can pinpoint critical reasons for loan approvals or rejections, such as a history of gambling, irregular income, or other significant factors.
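As a rough illustration of the approach Twiss describes, the sketch below computes SHAP values for a simple gradient-boosted lending model and ranks the features that drove one applicant’s decision. The model, feature names, and data are all hypothetical; only the use of the open-source shap library reflects the technique named above.

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical loan-application features -- names are illustrative only.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "income_volatility": rng.random(500),
    "gambling_spend_ratio": rng.random(500),
    "months_employed": rng.integers(0, 120, 500),
})
y = (X["income_volatility"] + X["gambling_spend_ratio"] > 1.0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Rank the features by how strongly they pushed one applicant's score
# up or down -- the "critical reasons" for an approval or rejection.
applicant = 0
contributions = pd.Series(shap_values[applicant], index=X.columns)
print(contributions.sort_values(key=abs, ascending=False))
```

Each value is that feature’s additive contribution to the applicant’s score, so the top entries can be surfaced as plain-language reasons (e.g., irregular income) without walking through every split in the underlying trees.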
by Peter Sonner