AI Fluency: The Board's New Strategic Imperative
Whether artificial intelligence will eventually take over board functions remains a rhetorical question, and one we can safely defer. We cannot, however, defer the discussion about how boards will oversee AI — a discussion that's relevant whether organizations are developing AI systems or buying AI-powered software. With AI adoption increasingly widespread, it's time for every board to develop a proactive approach for overseeing how AI operates within the context of an organization's overall mission and risk management.
According to a recent global AI survey, although AI adoption is increasing rapidly, overseeing and mitigating its risks remain unresolved and urgent tasks: Just 41% of respondents said that their organizations “comprehensively identify and prioritize” the risks associated with AI deployment. Board members recognize that this task is on their agendas: According to the 2019 National Association of Corporate Directors (NACD) Blue Ribbon Commission report, “Fit for the Future: An Urgent Imperative for Board Leadership,” 86% of board members “fully expect to deepen their engagement with management on new drivers of growth and risk in the next five years.”
Why is this an imperative? Because AI's potential to deliver significant benefits comes with new and complex risks. For example, the frequency with which AI-driven facial recognition technologies misidentify nonwhite or female faces is among the issues that have driven a pullback by major vendors — which are also concerned about the use of the technology for mass surveillance and consequent civil rights violations. Recently, IBM stopped selling facial recognition technology altogether. Further, Microsoft said it would not sell its facial recognition technology to police departments until Congress passes a federal law regulating its use by law enforcement. Similarly, Amazon said it would not allow police use of its technology for a year, to give legislators time to act.
The use of AI-driven facial recognition technology in policing is just one notorious example, however. Virtually all AI systems and platforms in use today may be vulnerable to problems arising from the nature of the data used to train and operate them, the assumptions made in the algorithms themselves, the lack of system controls, and the lack of diversity in the human teams that build, instruct, and deploy them. Many of the decisions that will determine how these technologies work, and what their impact will be, take place largely outside of the board's view — despite the strategic, operational, and legal risks they present. Nonetheless, boards are charged with overseeing and supporting management in better managing AI risks.
Increasing the board's fluency with and visibility into these issues is simply good governance. A board, its committees, and individual directors can approach this as a matter of strict compliance, strategic planning, or traditional legal and business risk oversight. They might also approach AI governance through the lens of environmental, social, and governance (ESG) considerations: As the board considers enterprise activity that will affect society, AI looms large. The ESG community is increasingly making the case that AI needs to be added to the board's portfolio.
How Boards Can Assess the Quality and Impact of AI
Directors' duties of care and loyalty are familiar and well established. They include the obligations to act in good faith, be sufficiently informed, and exercise due care in their oversight of strategy, risk, and compliance.
Boards assessing the quality and impact of AI, and the oversight it requires, should understand the following:
- AI is more than an issue for the technology team. Its impact resonates across the organization and implicates those managing legal, marketing, and human resources functions, among others.
- AI is not a siloed technology. It is a system comprising the technology itself, the human teams who manage and interact with it, and the data upon which it runs.
- AI systems need the accountability of C-level strategy and oversight. They are highly complex and contextual and cannot be trustworthy without integrated, strategic guidance and management.
- AI is not static. It is designed to adapt quickly and thus requires continuous oversight.
- The AI systems most in use by business today are efficient and powerful prediction engines. They generate these predictions based on data sets that are selected by engineers, who use them to train and feed algorithms that are, in turn, optimized against goals articulated — most often — by those same developers. Those individuals succeed when they build technology that works, on time and within budget. Today, the definition of effective design for AI may not necessarily include guardrails for its responsible use, and engineering groups typically aren't resourced to take on those questions or to determine whether AI systems operate consistently with the law or corporate strategies and objectives. (A minimal sketch of this default training loop follows this list.)
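To make the point concrete, here is a minimal sketch (illustrative only, with synthetic data and invented variable names) of the default supervised-learning loop described above. Each commented choice is a developer decision, and nothing in the loop itself enforces fairness, legality, or alignment with corporate policy.

```python
# A minimal sketch of a default prediction-engine training loop.
# Synthetic data and variable names are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Choice 1: the data set. Whatever history (and historical bias) these
# rows contain becomes the model's entire view of the world.
X = rng.normal(size=(1000, 5))  # e.g., applicant features

# Choice 2: the target label the engineers decide to predict.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Choice 3: the objective. Logistic regression optimizes predictive fit;
# "it works, on time and within budget" is the success criterion.
model = LogisticRegression().fit(X, y)
print("training accuracy:", model.score(X, y))

# Any guardrail (bias audit, policy check, human review) must be added
# explicitly; it is not part of this default loop.
```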
The choices made by AI developers — or by an HR manager considering a third-party resume-screening algorithm, or by a marketing manager looking at an AI-driven dynamic pricing system — are significant. Some of these choices may be innocuous, but others are not, such as those that produce hard-to-detect errors, introduce bias that can suppress diversity, or charge customers different prices based on gender. Board oversight must include requirements for policies, at both the corporate level and the use-case level, that delineate what AI systems will and will not be used for. It must also set standards by which their operation, safety, and robustness can be assessed. Those policies need to be backed up by practical processes, strong culture, and compliance structures.
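As one illustration of what a use-case-level policy check might look like in practice, the following sketch audits an AI-driven pricing system for the gender-based price disparity mentioned above; the data, column names, and 5% threshold are all invented for illustration.

```python
# A minimal sketch of a use-case-level bias audit for a pricing system.
# Data, column names, and the 5% threshold are illustrative assumptions.
import pandas as pd

quotes = pd.DataFrame({
    "customer_gender": ["F", "M", "F", "M", "F", "M"],
    "quoted_price": [104.0, 99.0, 106.0, 98.0, 103.0, 100.0],
})

# Compare average quoted prices across groups.
mean_by_group = quotes.groupby("customer_gender")["quoted_price"].mean()
disparity = mean_by_group.max() / mean_by_group.min() - 1

print(mean_by_group)
print(f"Price disparity across groups: {disparity:.1%}")

# The tolerance is a policy decision made by management and the board,
# not by the model or its developers.
if disparity > 0.05:
    print("FLAG: escalate for human review under corporate AI policy")
```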
Enterprises may be held accountable for whether their uses of algorithm-driven systems comply with well-established anti-discrimination rules. The U.S. Department of Housing and Urban Development recently charged Facebook with violations of the federal Fair Housing Act for its use of algorithms to determine housing-related ad-targeting strategies based on protected characteristics such as race, national origin, religion, familial status, sex, and disabilities. California courts have held that the Unruh Civil Rights Act of 1959 applies to online businesses’ discriminatory practices. The legal landscape also is adapting to the increasing sophistication of AI and its applications in a wide array of industries beyond the financial sector. For instance, the FTC is calling for the “transparent, explainable, fair, and empirically sound” use of AI tools and demanding accountability and standards. The Department of Justice’s Criminal Division’s updated guidance underscores that an adequate corporate compliance program is a factor in sentencing guidelines.
From the board’s perspective, compliance with existing rules is an obvious point, but it is also important to keep up with evolving community standards regarding the appropriate duty of care as these technologies become more prevalent and better understood. Further, even after rules are in force, applying them in particular business settings to solve specific business problems can be difficult and intricate. Boards need to confirm that management is sufficiently focused and resourced to manage compliance well, along with AI’s broader strategic trade-offs and risks.
Risks to brand and reputation. The issue of brand integrity — clearly a current board concern — may be what drives AI accountability most in the short term. Individuals charged with advancing responsible AI within companies have reported that the “most prevalent incentives for action were catastrophic media attention and decreasing media tolerance for the status quo.” Well before new laws and regulations are in effect, company stakeholders such as customers, employees, and the public are forming opinions about how an organization uses AI. As these technologies penetrate further into business and the home, their impact will increasingly define a brand’s reputation for trust and quality, and its fidelity to its mission.
The role of AI in exacerbating racial, gender, and cultural inequities is inescapable. Addressing these issues within the technology is necessary, but it is not sufficient. Without question, we can move forward only with genuine commitments to diversity and inclusion at all levels of technology development and technology consumption.
Business continuity concerns. Boards and executives are already keenly aware that technology-dependent enterprises are vulnerable to disruption when systems fail or go wrong, and AI raises new board-worthy considerations on this score. First, many AI systems rely on numerous, often unknown third-party technologies, which can threaten reliability if any element is faulty, orphaned, or inadequately supported. Second, AI introduces the potential for new kinds of cyberthreats, requiring new levels of coordination within any enterprise. And bear in mind that many AI developers will tell you that they don’t really know what an AI system will do until it does it — and that AI that “goes bad,” or cannot be trusted, will need remediation and may have to be pulled out of production or off the market.
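Knowing when an AI system can no longer be trusted presupposes continuous monitoring. The following sketch (score distributions and the drift threshold are invented for illustration) shows one common technique: comparing the distribution of live model outputs against a trusted baseline and flagging drift for remediation review.

```python
# A minimal sketch of production drift monitoring for an AI system.
# Synthetic score distributions and thresholds are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
baseline = rng.beta(2, 5, size=5000)  # model scores at validation time
live = rng.beta(2, 3, size=5000)      # model scores observed in production

# Kolmogorov-Smirnov test: has the output distribution shifted?
stat, p_value = ks_2samp(baseline, live)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3g}")

# The drift tolerance, and what happens when it is exceeded (remediation,
# rollback, pulling the system from production), are governance decisions.
if stat > 0.1:
    print("ALERT: model behavior has shifted; trigger remediation review")
```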
The New Strategic Imperative for Boards
Regardless of how a board decides to approach AI fluency, it will play a critical role in considering the impact of the AI technologies that a business chooses to use. Before specific laws are in effect, and even well after they are written, businesses will be making important decisions about how to use these tools, how they will impact their workforces, and when to rely upon them in lieu of human judgment. The hardest questions a board will face about proposed AI applications are likely to be “Should we adopt AI in this way?” and “What is our duty to understand how that function is consistent with all of our other beliefs, missions, and strategic objectives?” Boards must decide where they want management to draw the line: for example, to identify and reject an AI-generated recommendation that is illegal or at odds with organizational values.
To establish adequate mechanisms for AI fluency, boards should do the following:
- Learn where in the organization AI and other exponential technologies are being used or planned for use, and why. (A minimal sketch of such an inventory follows this list.)
- Set a regular cadence for management to report on policies and processes for governing these technologies specifically, and for setting standards for AI procurement and deployment, training, compliance, and oversight.
- Encourage the appointment of a C-level executive to be responsible for this work, across company functions.
- Encourage adequate resourcing and training of the oversight function.
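As a concrete starting point for the first two items above, here is a minimal sketch of an AI-system register; all field names, thresholds, and example values are invented for illustration.

```python
# A minimal sketch of an AI-system register to support board reporting.
# All field names and example values are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class AISystemRecord:
    name: str                # e.g., "resume screening"
    business_unit: str       # where in the organization it is used
    purpose: str             # why it is used
    vendor: Optional[str]    # third-party supplier, if bought rather than built
    accountable_exec: str    # the C-level owner of oversight
    risk_tier: str           # per corporate policy, e.g., "high"
    last_review: date        # feeds the board's regular reporting cadence

registry = [
    AISystemRecord(
        name="resume screening", business_unit="HR",
        purpose="shortlist job applicants", vendor="ExampleVendor Inc.",
        accountable_exec="chief risk officer", risk_tier="high",
        last_review=date(2024, 1, 15),
    ),
]

# Surface systems whose reviews have lapsed beyond a 90-day policy window.
overdue = [r.name for r in registry if (date.today() - r.last_review).days > 90]
print("Systems overdue for review:", overdue)
```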
It’s not too soon for boards to begin this work; even for enterprises with little investment in AI development, AI will find its way into the organization through AI-infused tools and services. The legal, strategic, and brand risks of AI are sufficiently grave that boards need facility with them and a process by which they can work with management to contain the risks while reaping the rewards. AI fluency is the new strategic agenda.