I remember how phone banking relied on the honesty of individual, accountable employees. How do you hold an AI accountable? By giving it low limits, writing off its mistakes, and scaling on regardless, even when those mistakes reach the million-dollar level?
Banks do indeed look at things in terms of risk mitigation, but this is silly. You will not see a credible bank incorporate LLM-based features, because they are a genuinely enormous attack surface with marginal ROI. Any sensible developer would build an ELIZA-style on-rails system instead, and it would never misbehave the way ChatGPT does.
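To make the "on-rails" point concrete, here is a minimal sketch of what such a system might look like: every user message must match one of a fixed set of intents, and anything else falls through to a human. The intent names, keywords, and responses are all hypothetical, not taken from any real bank's system.

```python
# Hypothetical on-rails banking assistant: the bot can only ever emit
# one of a fixed set of canned responses, so it cannot "hallucinate"
# or be prompt-injected into saying something new.

# Keyword lists per intent (illustrative only).
INTENTS = {
    "balance": ("balance",),
    "hours": ("hours", "opening times"),
    "card_lost": ("lost", "stolen"),
}

# One fixed, pre-approved response per intent.
RESPONSES = {
    "balance": "Please log in to online banking to view your balance.",
    "hours": "Branches are open 9am-5pm, Monday to Friday.",
    "card_lost": "Call the 24h hotline to block your card immediately.",
}

FALLBACK = "Sorry, I can only help with a few set topics. Connecting you to a human agent."

def reply(message: str) -> str:
    """Return a canned response for a matched intent, else the fallback."""
    text = message.lower()
    for intent, keywords in INTENTS.items():
        if any(keyword in text for keyword in keywords):
            return RESPONSES[intent]
    return FALLBACK
```

The design choice is the whole point: because the output space is enumerable, you can review and sign off on every possible thing the bot can say, which is exactly what you cannot do with an LLM.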
> 1. online banking
??! You've got a system of unpredictable reliability and you're going to let it handle money?
Online banking has always been extremely conservative, sometimes for good reason.