Feb 25, 2026

Governance before AI

A Supervisory Framework for Commercial Banking Expansion, Regulatory Remediation, and AI Governance

By Teveia R. Barnes

Former Commissioner
California Department of Financial Institutions
(now California Department of Financial Protection and Innovation)

Executive Summary

Commercial banking growth offers meaningful opportunity for financial institutions to expand market presence, diversify revenue streams, and deepen client relationships. It also introduces material supervisory exposure.

Institutions that expand commercial portfolios without strengthening governance, documentation discipline, and control infrastructure frequently encounter regulatory friction. In practice, that friction may take the form of Matters Requiring Attention (MRAs), growth limitations, capital distribution constraints, formal enforcement actions, or prolonged supervisory oversight. Growth ambition alone does not trigger these outcomes. Inadequate governance does.

From a supervisory perspective, growth is rarely constrained by ambition; it is constrained by governance maturity. Examiners do not ask whether you want to grow. They ask whether your controls prove that growth is being managed. Governance does not slow growth. It makes growth sustainable.

The institutions that grow successfully are those that translate board-approved risk appetite into operational execution, apply standards consistently across business lines, preserve documentation that withstands independent review, and maintain control environments that evolve proportionately with portfolio complexity. These same characteristics enable institutions under supervisory sanction to remediate effectively and restore regulatory confidence.

A parallel dynamic now applies to artificial intelligence. AI offers significant efficiency, analytical scale, and operational enhancement. It also introduces governance risk that regulators are increasingly evaluating across credit, compliance, fraud, and operational domains. The principle that governs sustainable commercial expansion applies equally to AI adoption: governance before growth, and governance before automation.

This paper outlines how boards and executive leadership should evaluate commercial expansion, regulatory remediation, and AI deployment through a supervisory lens, and how structured, deterministic, evidence-grounded infrastructure reinforces governance execution in a scalable and defensible manner.

Why This Matters Now


Supervisory expectations are not static. What passed three years ago will not pass today.

In the current environment, shaped by heightened regulatory scrutiny following bank failures, increased coordination among federal and state agencies, and accelerated adoption of AI-enabled tools, governance standards are materially higher than they were even a few years ago. Institutions are expected not only to manage risk but to demonstrate clearly, and reproduce on demand, how risk is managed.

Examiners increasingly assess whether control infrastructure is proportionate to complexity, whether processes are institutionalized rather than personality dependent, and whether technology deployments enhance or obscure governance visibility.

Growth initiatives, remediation efforts, and AI adoption decisions are occurring in an environment of elevated transparency expectations and reduced tolerance for opacity.

Supervisory Expectations in Commercial Banking Growth

Regulators do not evaluate growth solely by asset size, earnings performance, or market share. They assess whether governance systems evolve in proportion to portfolio expansion.

As commercial banking operations grow, underwriting structures become layered, borrower relationships become more intricate, guarantor arrangements become more complex, and transaction flows become more dynamic. Supervisory focus correspondingly intensifies.

Examiners are not primarily concerned with projections. They are more concerned with patterns.

Patterns of inconsistent documentation.
Patterns of undocumented exceptions.
Patterns of deviation between written policy and actual practice.
Patterns often signal structural weakness. Structural weakness invites supervisory constraint.

The central supervisory question is straightforward: can the institution demonstrate that similar risks are treated similarly and that decisions are anchored in verifiable evidence?

Boards should periodically ask management:

• Can we reproduce any material credit decision from documentation alone?
• Would 10 comparable credits reflect consistent analytical structure and threshold application?
• Can independent review functions reconstruct decision pathways without relying on institutional memory?

The answers often predict examination outcomes.
Where growth outpaces infrastructure, regulatory concern follows. The same principle applies when technology adoption outpaces governance capacity.
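
The first board question above has a concrete, testable meaning: given only the documented inputs and the board-approved thresholds, the decision should be recomputable, with every exception recorded. The sketch below illustrates the idea in Python; the field names ("dscr", "ltv") and thresholds are hypothetical, not any institution's actual policy.

```python
# Hypothetical sketch: recompute a credit decision from documentation alone.
# Field names and thresholds are illustrative assumptions, not real policy.

POLICY = {"min_dscr": 1.25, "max_ltv": 0.75}  # board-approved thresholds

def recompute_decision(credit_file: dict) -> dict:
    """Derive the decision purely from documented inputs and policy.

    Identical inputs always yield the identical decision record, so an
    independent review function can reconstruct the decision pathway
    without relying on institutional memory.
    """
    dscr_ok = credit_file["dscr"] >= POLICY["min_dscr"]
    ltv_ok = credit_file["ltv"] <= POLICY["max_ltv"]
    exceptions = []
    if not dscr_ok:
        exceptions.append("DSCR below policy minimum")
    if not ltv_ok:
        exceptions.append("LTV above policy maximum")
    return {
        "approve_within_policy": dscr_ok and ltv_ok,
        "exceptions": exceptions,      # exceptions are recorded, never silent
        "policy_version": "2026-01",   # ties the decision to a threshold set
    }

# Reproducibility check: the same documented file gives the same outcome.
file_a = {"dscr": 1.40, "ltv": 0.70}
assert recompute_decision(file_a) == recompute_decision(file_a)
```

The point is not that credit decisions reduce to two ratios; it is that whatever the institution's actual criteria are, they should be expressible, versioned, and recomputable in this way.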

Credit Risk Governance and Underwriting Discipline

Commercial expansion increases structural complexity in credit decision-making. Supervisors evaluate whether underwriting standards are applied uniformly across relationship managers and credit officers, whether financial analysis reconciles directly to source documentation, and whether internal risk ratings align with documented findings. Exception tracking is closely scrutinized, particularly when trends suggest a drift from board-approved risk appetite.

Institutions that rely heavily on narrative memoranda without structured reconciliation to financial statements or defined analytical thresholds frequently encounter supervisory criticism. Variability, even where underlying credit quality remains sound, creates examination risk.

Consider a regional institution expanding its commercial real estate portfolio across multiple markets. Relationship managers possess strong local expertise, yet credit memoranda vary in structure and depth. Financial spreads are prepared differently across regions. Covenant stress testing is unevenly documented. The underlying loans may perform. However, supervisory concern emerges not from asset weakness, but from process inconsistency.

Examination risk often arises from variability rather than from credit deterioration.

Structured review environments that normalize document intake, reconcile financial information systematically, and apply institution-defined thresholds consistently can materially reduce that variability. When analytical outputs are traceable to source documentation and preserved in reproducible format, underwriting becomes more defensible, quality control more effective, and supervisory review more predictable.

The supervisory advantage lies not in automating credit decisions, but in institutionalizing sound credit decisions consistently.
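
"Reconcile financial information systematically" also has a concrete shape: every figure in a spread ties back to a source-document value within a defined tolerance, and the tie-out is preserved with the file. A minimal sketch, assuming hypothetical line-item names and an illustrative 0.5 percent tolerance:

```python
# Hypothetical tie-out: spread values must reconcile to source statements.
# Line items and the tolerance are illustrative assumptions.
TOLERANCE = 0.005  # 0.5% relative tolerance

def reconcile(spread: dict, source: dict) -> list:
    """Return the line items that fail to tie to source documentation."""
    breaks = []
    for item, value in spread.items():
        src = source.get(item)
        if src is None:
            breaks.append((item, "missing in source"))
        elif abs(value - src) > TOLERANCE * max(abs(src), 1):
            breaks.append((item, f"spread {value} vs source {src}"))
    return breaks

# The revenue figure ties; the EBITDA figure exceeds tolerance and breaks.
spread = {"revenue": 12_500_000, "ebitda": 2_100_000}
source = {"revenue": 12_500_000, "ebitda": 2_050_000}
breaks = reconcile(spread, source)
```

Running the same check across every region's spreads is exactly the normalization that reduces the variability examiners flag.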

BSA/AML and Sanctions Governance

Commercial growth expands exposure to financial crime risk. Transaction volumes increase. Beneficial ownership structures become more layered. Industry-specific risk indicators require heightened vigilance.

Supervisory evaluation centers on defined monitoring thresholds, consistent application, reconciliation of account activity to stated business purpose, and documented escalation decisions. Technology does not reduce documentation burden in this domain. It intensifies it.

Institutions deploying AI-assisted monitoring without clearly documented alert thresholds, escalation logic, and case resolution standards frequently discover that supervisory scrutiny increases rather than decreases. Automation without governance is not modernization. It is potentially accelerated exposure.

Deterministic, policy-constrained systems can strengthen consistency and reduce reviewer variability. However, supervisory alignment requires that human reviewers retain final authority over decisions. Every Suspicious Activity Report filed, and every decision not to file, must remain traceable to a documented human judgment. Sanctions screening escalations must be similarly attributable.

Responsibility in banking is never automated. It is assigned.
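
That assignment can be enforced in the case record itself: an alert simply cannot be resolved without a named human reviewer and a documented rationale, whether the outcome is a filing or a decision not to file. The sketch below illustrates the pattern; the field names and validation rules are hypothetical, not a vendor schema.

```python
# Hypothetical sketch: an alert resolution that refuses to exist without
# an attributed human decision. Fields and rules are illustrative.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AlertResolution:
    alert_id: str
    decision: str   # "file_sar" or "no_sar" -- both leave the same trail
    reviewer: str   # the accountable human, never a system account
    rationale: str  # the documented judgment, preserved for examiners
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

    def __post_init__(self):
        if self.decision not in ("file_sar", "no_sar"):
            raise ValueError("decision must be file_sar or no_sar")
        if not self.reviewer or self.reviewer.startswith("system"):
            raise ValueError("resolution requires a named human reviewer")
        if len(self.rationale.strip()) < 20:
            raise ValueError("rationale must document the judgment")

record = AlertResolution(
    "A-1042", "no_sar", "j.rivera",
    "Activity consistent with documented business purpose.")
```

A monitoring system, AI-assisted or not, can propose; only an attributed human record resolves.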

Operational Scalability and Workflow Governance

Operational fragility often surfaces before balance sheet weakness.

As growth accelerates, onboarding documentation may become uneven, files may reside across fragmented systems, and exception tracking may depend excessively on individual diligence. Supervisors examine whether processes are institutionalized, version-controlled, and independently reconstructable.

Repeatability is the objective.
Repeatability creates visibility.
Visibility supports control.
Control builds supervisory confidence.

Institutions that embed governance into structured workflows, normalize documentation intake, systematically validate completeness, and preserve outputs in a traceable form demonstrate resilience under examination. Management gains clearer insight into emerging risk patterns, and regulators gain confidence in institutional discipline.

Operational maturity, more than growth rate alone, distinguishes institutions that maintain regulatory confidence from those that encounter supervisory constraint.

AI Governance as Control Infrastructure

Artificial intelligence is no longer peripheral in financial services. Institutions are deploying AI across underwriting support, transaction monitoring, fraud detection, customer due diligence, and operational workflow management.

Regulatory expectations mirror those applied to any material risk domain: governance first. If the institution cannot explain, validate, and reproduce outputs over time, the tool becomes a supervisory issue, not an operational advantage.

Model Risk Management

AI systems fall within model risk management frameworks. Institutions must maintain model inventories that include AI tools, conduct independent validation prior to deployment, establish performance monitoring protocols, and define revalidation triggers when model inputs or behavior change materially. Governance should scale with materiality, meaning a limited pilot and an enterprise-wide decision system should not be governed the same way.
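
A model inventory entry with deterministic revalidation triggers can be sketched simply. The fields, the tiering, and the drift threshold below (a population stability index above 0.25, a common rule of thumb) are illustrative assumptions, not a prescribed framework.

```python
# Hypothetical model inventory record with revalidation triggers.
# Field names and the PSI threshold are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModelRecord:
    model_id: str
    tier: str                  # "pilot" vs "enterprise": governance scales
    validated: bool            # independent validation before deployment
    psi: float                 # population stability index from monitoring
    input_schema_version: str  # what the model was validated against

def needs_revalidation(m: ModelRecord, current_schema: str) -> bool:
    """Deterministic triggers: unvalidated deployment, input drift,
    or a material change to model inputs."""
    if not m.validated:
        return True            # never deployed without validation
    if m.psi > 0.25:           # inputs have drifted materially
        return True
    if m.input_schema_version != current_schema:
        return True            # inputs changed; the validation is stale
    return False
```

The value of encoding the triggers is that "revalidation when inputs or behavior change materially" stops being a judgment made ad hoc and becomes a condition the inventory evaluates the same way every time.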

Examiners increasingly ask these fundamental questions:

• Do you know precisely what this model is doing?
• Can you explain it clearly?
• Can you demonstrate that it performs as intended over time?
• Can you explain outcomes to an examiner and, when required, to a customer?
• Can you validate and monitor it independently?

Opacity is not merely a technical issue. In a regulated environment, a system deployed without transparency and control is a governance failure.

Infrastructure designed to operate deterministically, anchored to documentation, constrained by approved thresholds, and supported by full audit traceability aligns more closely with supervisory expectations than systems that cannot clearly explain their analytical pathways.

Fair Lending and Algorithmic Bias

AI-driven credit decisions carry the same fair lending obligations as traditional underwriting. Regulators do not distinguish between human bias and algorithmic bias when assessing outcomes. Supervisors expect testing, monitoring, and documented response when disparities appear. Institutions remain accountable for disparate impact regardless of the mechanism that produced it.

Structured disparate impact testing, documented bias evaluation prior to deployment, and ongoing governance oversight are compliance obligations, not discretionary safeguards.
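
One widely used first-pass screen is the four-fifths (80 percent) rule: each group's selection rate is compared to that of the most favored group, and ratios below 0.8 are flagged for investigation. The sketch below applies it to hypothetical approval counts; it illustrates structured testing only and is not a substitute for the fuller statistical and qualitative analysis fair lending review requires.

```python
# Illustrative disparate impact screen using the four-fifths (80%) rule.
# Group labels and counts are hypothetical.
def adverse_impact_ratios(approvals: dict, applications: dict) -> dict:
    """Return each group's selection rate as a ratio of the highest
    group's selection rate."""
    rates = {g: approvals[g] / applications[g] for g in applications}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

ratios = adverse_impact_ratios(
    approvals={"group_a": 80, "group_b": 50},
    applications={"group_a": 100, "group_b": 100},
)
# Ratios below 0.8 are flagged for documented investigation and response.
flagged = [g for g, r in ratios.items() if r < 0.8]
```

What matters supervisorially is not the particular screen but that the screen runs before deployment, runs on a schedule afterward, and that every flag produces a documented response.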

Vendor and Third-Party Risk

Many institutions deploy AI through third-party vendors. Supervisory expectations do not diminish in vendor contexts. Accountability remains with the institution.

If a model cannot be independently validated, it should not be independently trusted. At a minimum, the institution must be able to evidence what the model does, how it was tested, how it is monitored, and how changes are controlled.

Vendor agreements must address access to model documentation sufficient for validation, audit rights for examination purposes, and notification requirements for model updates or retraining. Silent updates are unacceptable.

Data Governance and Change Management

AI output quality depends entirely on input integrity. Institutions must document data sourcing methodologies, transformation processes, and lineage to ensure outputs are traceable to evidentiary foundations.

Models evolve. Change management processes must govern updates, document behavioral impact, and trigger revalidation when warranted. Version control for AI systems should be treated with the same rigor as board-approved credit policy changes.
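
Treating model configuration with the rigor of board-approved policy can be as simple as fingerprinting it: any change, however small, produces a new version identifier, and a changed identifier is a trigger for review. A minimal sketch, with hypothetical configuration fields:

```python
# Hypothetical sketch: deterministic versioning of model configuration.
# Any change produces a new content hash, so silent updates are detectable.
import hashlib
import json

def config_version(config: dict) -> str:
    """Deterministic fingerprint of a model configuration."""
    canonical = json.dumps(config, sort_keys=True)  # key order irrelevant
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

v1 = config_version({"threshold": 0.70, "features": ["dscr", "ltv"]})
v2 = config_version({"threshold": 0.65, "features": ["dscr", "ltv"]})
revalidation_required = (v1 != v2)  # a changed fingerprint triggers review
```

The same fingerprint can be stamped onto every decision record, so an examiner can tie any historical output to the exact configuration that produced it.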

Institutions Under Supervisory Sanction

For institutions operating under consent orders or formal supervisory agreements, remediation demands structural reinforcement rather than incremental adjustment.

Sanctions often arise not from misconduct, but from ambition and growth that outpaced infrastructure. The correction required is structural.

Common findings include inconsistent threshold application, gaps between written standards and execution, insufficient cross-document reconciliation, and reviewer variability. Training alone rarely resolves these issues sustainably.

Remediation becomes durable when governance is embedded into the workflow itself: analytical pathways are standardized, documentation traceability is reinforced, and independent quality control is strengthened. When AI tools contribute to underlying deficiencies, remediation plans must explicitly address model governance.

Supervisory confidence is restored not when deficiencies are temporarily corrected, but when improved processes are demonstrably reproducible.

Determinism, Explainability, and Regulatory Alignment

Regulatory concern regarding advanced technology centers on opacity and variability. Systems that produce inconsistent outputs for identical inputs, or cannot clearly explain their analytical reasoning, introduce enterprise-wide governance risk.

Supervisory preference is not for a particular technological architecture. It is for transparency, reproducibility, and human accountability.

Technology cannot substitute for governance. Properly governed, it reinforces control infrastructure and institutional discipline. Improperly governed, it amplifies weakness and accelerates exposure.

Conclusion

Commercial banking growth, regulatory remediation, and AI adoption share a common foundation: operationalized governance. If governance is not operationalized, growth becomes fragile, and AI becomes an accelerant of that fragility.

Institutions that deploy AI without model risk management frameworks, fair lending testing protocols, explainability standards, defined change management processes, and documented human oversight are not modernizing risk management. They are introducing new complexity into an already demanding supervisory environment.

Institutions that anchor decisions in verifiable evidence, apply standards uniformly, preserve documentation integrity, and embed discipline into structured workflows are positioned to expand sustainably, remediate effectively when necessary, and deploy technology responsibly.

In modern banking, scale is measured not only in assets, but in the strength and reproducibility of governance.

Boards and executive leadership should view governance infrastructure, including AI governance, not as an administrative burden, but as strategic capacity.

Growth without it invites scrutiny.
Growth supported by demonstrable, reproducible governance earns regulatory trust. Regulatory trust is strategic capital.

About the Author

Teveia R. Barnes served as Commissioner of the California Department of Financial Institutions (now the California Department of Financial Protection and Innovation), where she oversaw supervision of state-chartered banks and other depository financial institutions across California. She also serves as the Independent Director of StandardC. Ms. Barnes brings extensive experience in supervisory oversight, regulatory remediation, and governance strategy.