Applied to mission-critical operations, artificial intelligence (AI) has the potential to produce incredible benefits – not only for businesses but also for the people they serve and employ. Examples are everywhere. You see it when systems detect fraudulent purchases and keep a consumer’s account safe. It’s in autonomous, self-driving cars, which are programmed to help keep drivers safe and avoid collisions. It’s also at work in fair, AI-powered recruiting.
In each of these examples, AI is a tool for learning complex patterns, including some that would otherwise be practically undetectable. The common thread is an advanced algorithm that learns from data and, with appropriate oversight, supports better and fairer decision-making.
So, what is Responsible AI?
Responsible AI is a set of principles, or a framework, that practitioners or systems can follow to reduce the potential for adverse human impacts from artificial intelligence. Key thematic principles include keeping a human in the loop; transparency and explainability; bias mitigation; and security, privacy, and safety. The industry has put forth several broad Responsible AI frameworks (NIST, Google, PwC, IBM, and Regions, to name a few). Vigilance in managing AI systems responsibly, even those deemed “AI for good,” is essential to Responsible AI risk management.
The AI use cases that deserve the most rigorous responsible AI risk management are those that directly involve or affect people and their well-being, such as access to credit, jobs, housing, or money. Common people-centric uses of AI in banking include credit decisioning, fraud detection, and workforce planning. At Regions, we are sensitive to these important use cases and ensure responsible AI practices are in place while continuing to monitor evolving best practices in the industry. One way we are proactively managing these risks is by serving as a core working group member of the Data & Trust Alliance.
The Data & Trust Alliance (D&TA) is an industry-led consortium created to build out a responsible AI framework. Its goal is to give member companies better tools for assessing data and AI decisioning systems; the most recent initiative is Algorithmic Safety: Mitigating Bias in Workforce Decisions. This initiative, which Regions associates helped develop as core working group members, provides tools that help member companies ensure qualified candidates are given opportunities to advance through the hiring process, in turn delivering better outcomes. The framework defines four key domains for assessing a workforce planning vendor that uses AI: compliance, data, algorithms, and governance.
The first and most essential domain is compliance. For talent and workforce planning, understanding legal and regulatory requirements is critical to generating sound, beneficial results for everyone involved. Global, federal, state, and municipal laws may all apply, adding complexity to the design and management of any AI system. Both the D&TA framework and AI risk management at Regions guide HR practitioners to ask the right questions and, where coverage is unclear, escalate to an expert on AI or an HR lawyer with AI experience.
The second domain is data. Data are subject to considerations around privacy, bias, sourcing, and governance, with laws such as the California Consumer Privacy Act (CCPA) and the General Data Protection Regulation (GDPR) governing the privacy dimension. Additional complexity arises when an organization's internal data are blended with external data to build an AI system. In any scenario, Regions takes special care to ensure the data are relevant to our employee and candidate base and that potential bias is detected and eliminated.
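To make the bias-detection idea concrete, here is a minimal, hypothetical sketch of one common screening heuristic, the "four-fifths rule" applied to selection rates by group. The data, column names, and threshold are illustrative assumptions, not Regions' actual process or tooling.

```python
# Illustrative sketch only: screening hiring data for potential selection-rate
# bias using the four-fifths rule. Data and column names are hypothetical.
import pandas as pd

def adverse_impact_ratios(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.Series:
    """Return each group's selection rate divided by the highest group's rate."""
    selection_rates = df.groupby(group_col)[selected_col].mean()
    return selection_rates / selection_rates.max()

# Hypothetical applicant data: 1 = advanced to the next stage, 0 = did not.
applicants = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})

ratios = adverse_impact_ratios(applicants, "group", "selected")
flagged = ratios[ratios < 0.8]  # groups falling below the four-fifths threshold
print(ratios)
print("Groups needing review:", list(flagged.index))
```

A check like this is only a first-pass signal; a flagged ratio prompts deeper review of the data and the decision process rather than an automatic conclusion of bias.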
The third domain is algorithms. Behind AI systems are mathematical, statistical, or computer science methodologies that learn from data. For example, when a customer swipes a credit card at a store, the payments processor runs models and algorithms that determine, almost instantly, whether the transaction may be fraudulent.
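As a hypothetical illustration of that kind of scoring (not any processor's actual system), the sketch below trains a small logistic regression model on made-up transactions and uses it to score a new one. Every feature, value, and threshold is an assumption for illustration.

```python
# Illustrative sketch only: scoring a card transaction for fraud risk.
# Features, training data, and the decision threshold are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical transactions: [amount_usd, miles_from_home, is_online]
X_train = np.array([
    [25,     2, 0],
    [40,     5, 0],
    [900, 1200, 1],
    [15,     1, 0],
    [750,  800, 1],
    [60,    10, 1],
])
y_train = np.array([0, 0, 1, 0, 1, 0])  # 1 = confirmed fraud

model = LogisticRegression().fit(X_train, y_train)

# Score a new swipe as it happens and decide whether to hold it for review.
new_transaction = np.array([[820, 950, 1]])
fraud_probability = model.predict_proba(new_transaction)[0, 1]
decision = "hold for review" if fraud_probability > 0.5 else "approve"
print(f"fraud probability: {fraud_probability:.2f} -> {decision}")
```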
Regions has specialized teams to assess the risk of these advanced models and algorithms. While these teams were originally established to assess traditional risk models around credit and capital, they are also equipped to assess machine learning and artificial intelligence systems by considering additional factors such as how quickly the system learns, the amount of data it consumes, how explainable it is, and the complexity of the underlying techniques.
The fourth domain, governance, means well-managed oversight of AI systems. A well-governed AI system is reproducible, explainable, documented, monitored, and performant, and is built and overseen by a broad, diverse cross-section of talent. Such a system is easier to understand, less prone to failure, and less likely to introduce bias or other adverse outcomes. An AI system that is not well governed may not only fail; that failure may go unnoticed until it is too late or the impact is too costly for both consumers and the business.
At Regions, our governance and AI risk management program requires AI systems to be assessed by our internal Model Risk Management team before moving to production and to be covered by ongoing performance monitoring programs that detect deterioration in an AI system's performance.
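As one hypothetical illustration of what ongoing performance monitoring can look like (not Regions' actual monitoring standards), the sketch below computes a Population Stability Index (PSI) comparing the scores a model produced at validation time with the scores it produces in production. The data and alert thresholds are common rules of thumb, assumed here for illustration.

```python
# Illustrative sketch only: flagging potential model deterioration with a
# Population Stability Index (PSI). Data and thresholds are hypothetical.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare two score distributions; a higher PSI means a bigger shift."""
    lo = min(expected.min(), actual.min())
    hi = max(expected.max(), actual.max())
    edges = np.linspace(lo, hi, bins + 1)
    expected_pct = np.histogram(expected, edges)[0] / len(expected)
    actual_pct = np.histogram(actual, edges)[0] / len(actual)
    expected_pct = np.clip(expected_pct, 1e-6, None)  # avoid log(0)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

rng = np.random.default_rng(0)
baseline_scores = rng.normal(0.30, 0.10, 5000)    # scores at validation time
production_scores = rng.normal(0.45, 0.12, 5000)  # scores observed this month

psi = population_stability_index(baseline_scores, production_scores)
status = "investigate" if psi > 0.25 else ("watch" if psi > 0.10 else "stable")
print(f"PSI = {psi:.3f} -> {status}")
```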
Not all workforce planning AI systems present the same level of risk. Key areas of focus include hiring and assessments, pay and compensation, onboarding, and performance management. The key theme is not only compliance with laws governing fair and equitable hiring, but also the ability to support hiring organizations and prospective talent by better matching available talent with the positions best suited to their skills.
Regions takes a prudent risk management approach to AI and machine learning (ML). Our independent Model Risk Management team assesses artificial intelligence risks across these domains, and we serve as an industry steward through our role as a core working group member of the Data & Trust Alliance. All models at Regions, including AI and ML models, are assessed by the independent Model Risk Management team prior to implementation and are subject to rigorous, independent, effective challenge around their design, methodology, data, implementation, performance, and governance. For any model tied to sensitive areas that affect people, such as workforce planning, special considerations around the four domains of compliance, data, algorithms, and governance are applied to ensure any AI adopted by Regions is responsible and appropriately managed.
For more information, see the following articles on Doing More Today: