As artificial intelligence becomes more deeply integrated into risk and compliance operations in banking and fintech, human oversight remains a crucial factor. A recent global Moody’s survey of 600 risk and compliance professionals indicates that while AI use is expanding rapidly, the need for human supervision continues to be widely recognized.
The study found that 91% of participants are aware of AI’s growing role in risk and compliance, with 53% actively using or testing these technologies—up from 30% in the previous year. Sectors such as fintech, asset and wealth management, and professional services show the highest adoption rates. In contrast, government entities and corporate organizations have been more cautious about integrating AI systems. Larger companies based in North America, Europe, and Asia-Pacific lead this trend, but they still face regulatory uncertainty and challenges related to integration.
Despite enthusiasm about AI’s potential benefits—84% believe it offers significant advantages—only 30% see these benefits clearly realized so far. Concerns persist around overreliance on automation, data privacy issues, possible errors, and transparency shortcomings. To address these risks, organizations commonly implement safeguards like employee training programs and strong governance frameworks.
“Ultimately, it is the human beings who must be accountable. You can’t outsource accountability. That’s a principle in regulation that will always stay, so I think human involvement has to be mandatory,” said a Head of Compliance at a professional services firm operating in Europe, the Middle East, and Africa.
“There needs to be a human component because while AI is great, sometimes nothing can beat good old common sense and intuition,” added a Chief Financial Officer at a North American corporation.
Although most respondents support maintaining human oversight when deploying AI for risk management or compliance tasks, about 5% are comfortable with fully autonomous systems that involve no human at all. This minority spans several sectors, including banking and professional services.
The report notes that for many organizations—42% of those surveyed—human oversight is considered non-negotiable. The current trend divides responsibilities: humans focus on high-risk or complex decisions while AI handles repetitive or low-risk work. Oversight has shifted from direct operational decision-making to roles centered on quality assurance.
Regulators are monitoring this transition closely as increased automation raises new questions regarding staffing models and governance structures. Some large institutions are experimenting with innovative approaches under regulatory guidance but continue to emphasize robust human review processes before granting greater autonomy to automated agents.
Three scenarios illustrate different approaches:
– In one model (“human in the loop”), AI highlights potential issues but requires final sign-off from a compliance officer.
– Another approach (“human out of the loop”) involves fully autonomous decision-making by AI—a strategy seen as efficient but potentially risky if not carefully controlled.
– A third model (“human on the loop”) uses ongoing monitoring by professionals who evaluate results after decisions have been made.
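The three models can be thought of as different routing policies for the same AI output. The sketch below is a hypothetical illustration of that idea, not code from the Moody’s report; the function name, risk threshold, and return strings are all invented for this example:

```python
from enum import Enum

class OversightMode(Enum):
    HUMAN_IN_THE_LOOP = "in"    # AI flags issues; a human must sign off
    HUMAN_ON_THE_LOOP = "on"    # AI acts; a human reviews afterwards
    HUMAN_OUT_OF_LOOP = "out"   # AI acts fully autonomously

def route_alert(risk_score: float, mode: OversightMode) -> str:
    """Decide how an AI-flagged compliance alert is handled.

    The 0.7 threshold is an arbitrary illustrative cutoff.
    """
    ai_decision = "escalate" if risk_score >= 0.7 else "clear"
    if mode is OversightMode.HUMAN_IN_THE_LOOP:
        # The AI output is only a recommendation until an officer signs off.
        return f"pending human sign-off (AI recommends: {ai_decision})"
    if mode is OversightMode.HUMAN_ON_THE_LOOP:
        # The AI decision takes effect immediately but is queued for review.
        return f"{ai_decision} (queued for post-hoc human review)"
    # HUMAN_OUT_OF_LOOP: the decision stands with no review step.
    return ai_decision
```

In this framing, the survey’s divided-responsibility trend amounts to choosing the mode per task: in-the-loop for high-risk or complex decisions, on-the-loop or out-of-the-loop for repetitive, low-risk work.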
Moody’s analysis concludes that as technology evolves within risk and compliance functions across industries worldwide, striking an appropriate balance between innovation and accountability remains essential for building trust both internally and with regulators.
For further information about Moody’s global survey findings on this topic: http://moodys.com/kyc/ai-study