AI brings unusual opportunities, as well as risks, to businesses and their clients. Safeguarding these interests from harm is an essential job of governance. To that end, we have developed a practical concept: the Ethical Compliance Model.
The Ethical Compliance Model is a way for a board to ensure that it stays within recognized ethical boundaries in its company’s use of AI. It is based on nine broad ethical principles for making and using AI that are widely recognized in government, industry and academia: international human rights, promotion of human values, professional responsibility, human control of technology, fairness and nondiscrimination, transparency and explainability, safety and security, accountability, and privacy. Each is described more fully below.
Built on these principles, the Ethical Compliance Model is a five-step process that corporate leadership uses to put them into practice. These steps, detailed at the end of this article, are:
- Understanding your company’s position in the AI landscape and the likely future role of regulation.
- Formulating a preliminary Ethical Compliance Model based on the nine essential ethical principles listed above.
- Using the Ethical Compliance Model to provide corporate training for key executives, led by an independent AI ethics specialist.
- Creating compliance checklists from the Ethical Compliance Model to deploy within existing governance structures for the entire business life cycle.
- Ensuring that the Model, checklists and procedures are kept current.
The ethical pitfalls of investment in the impressive science of AI can be both subtle and surprising. Here are some potential examples:
- An early-stage company engaged in drug development discovers that its clinical reports include bot-generated responses attributed to deceased patients.
- Private equity acquisitions fail because an impressive-looking AI application used to refine due diligence on targets relies on faulty learning algorithms.
- Customers dramatically turn away from services delivered automatically in favor of a perceived human touch.
- Legal claims for unlawful dismissal rise as AI-based personnel evaluation exhibits ethnic or gender bias.
AI is in its infancy and evolving rapidly. The enthusiasm about its rapid progression, however, may cause exaggerated expectations. As many boards see already, this hyping of AI and its growth potential is rich with opportunity to pass close to or through ethical boundaries. And because AI can be deployed broadly, it poses risk across the entire ecosystem of boards, investors, limited partners, investment committees and executive leadership teams for businesses that create, market or deploy AI technology.
It’s a board-level issue, sometimes delegated to a board subcommittee. While AI has the potential to substantially impact the operation of organizations for the good, the board must be attentive to ethics when the definition of “good” gets blurred by varied cultural and economic values across an enterprise or when potent negative outcomes, such as invasion of privacy and biased results, surface or even are suspected. Decisions to invest in or use AI thus require top-level approaches to understanding and managing risk.
Look at the developments in human health care, for example. AI can identify tumors in medical images better than humans can. But the ethics here get sticky. How far should AI take over from the physician? Should the intelligence of the AI platform be the final word on the nature of the disease or the immediate and long-term treatment? And when should usage of such an AI application cause second thoughts about direct or indirect investment in such technology?
Should business leaders and investors care about these ethical issues when the productivity gains are potentially so great? It is worth considering how an outcome can be obscured, corrupted or even obliterated by exaggerated claims about return on investment in AI, or by amplified fear of investment failure.
Here is a public company example: Consider the hype around Tesla’s Autopilot AI, promoted as a fully competent replacement for the driver of a car. The technology, in fact, had glitches, and its name, Autopilot, implied that it could completely take over the driver’s role, which it could not. Encouraged by the CEO’s hype, drivers became overconfident and suffered deadly accidents when they allowed the system to operate unattended. Whenever such distortions occur, they disturb the equilibrium of the downstream decision-making process and, ultimately, the long-term valuation of the business. Governance’s job is to bring the oversight needed to moderate this.
The clear takeaway is that from the initial investment thesis through approval and into business guidance throughout the development period, AI’s enormous perceived benefits can set a course that may be unachievable, disruptive, suboptimal or even downright illegal. In this context, ethical governance means infusing this process with checks and balances that sustain an ethical expectation throughout the entire business life cycle.
Many businesses are already aware of this and are creating ways to address it. Our framework, which follows, suggests a way for investment firms and businesses to build a permanent process for investing in AI with less risk.
Starting a little more than a decade ago — about 2010 — tech corporations, governments and scholarly AI ethicists began reacting to the already burgeoning AI industry. These groups began producing reports about what ethical standards should be used in making and using AI. The idea behind these reports was to maximize the benefits of AI and mitigate risk to humans from the rapid development of such powerful tools.
In 2020, Harvard University’s Berkman Klein Center for Internet & Society gathered together what researchers at the center considered to be the 36 key AI ethics reports from the institutes, corporations and scholars mentioned above. These reports also represented values from various countries around the globe. The Berkman Klein Center filtered these reports to assess their consensus on crucial ethical principles for guiding AI manufacture and use.
Broad Ethical Principles
The Berkman Klein Center study identified nine foundational ethical principles (or themes) from those reports. Understanding them is the starting point for anybody creating or deploying AI and building a framework for discussion of ethical risk.
International human rights. AI must not be designed or used in such a way as to harm human rights across the globe.
Promotion of human values. AI should promote human values, such as kindness and harmony, rather than menace them. It should be designed to operate in ways that align with those values, which includes designers making efforts to anticipate how AI might glitch or go off task in ways harmful to humans.
Professional responsibility. Designers and users of AI must be careful to act within the evolving guidelines of ethics of their professions.
Human control of technology. AI must be applied to keep humans in the control loop, so that it cannot easily “go rogue.”
Fairness and nondiscrimination. AI should not promote unfair biases. For example, developers should strive to use fair, nondiscriminatory data to train AI, so that it will not give biased answers or results that harm certain groups, such as women or particular ethnic groups (a simple illustration of one such bias check follows these principles).
Transparency and explainability. AI must be understandable to people in order to keep it safe. One current issue is that even AI developers often cannot fully explain how some AI works, which makes it difficult to build trust through transparency.
Safety and security. AI must be safe to use and secure from interference, such as hacking.
Accountability. If AI causes harm, an identifiable person must be answerable for it. Developers and users cannot escape responsibility for harm their model causes by asserting that it “evolved” or that they were unaware it could do such harm.
Privacy. AI must not impair people’s privacy. Impairment includes, for instance, using personal data to train an AI without the data owner’s consent.
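To make the fairness principle concrete, here is a minimal sketch, not drawn from any of the reports above, of one common check a development team might run before deployment: comparing selection rates across groups (the “demographic parity gap”). The data, group labels and 0.10 tolerance are hypothetical illustrations only.

```python
# Hypothetical illustration: flag a model whose positive-prediction ("selection")
# rates differ too much across demographic groups.
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction rate for each group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Difference between the highest and lowest group selection rates."""
    rates = selection_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Example: hiring-screen predictions (1 = advance to interview) for two groups.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
groups = ["group_a"] * 5 + ["group_b"] * 5

gap = demographic_parity_gap(preds, groups)
if gap > 0.10:  # tolerance a governance checklist might set; value is illustrative
    print(f"Flag for ethics review: selection-rate gap of {gap:.0%} across groups")
```

A check like this does not settle questions of fairness on its own, but it gives a board and its ethics committee a measurable trigger for review.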
All nine of these principles are a valuable starting point for corporate governance, and some, depending on circumstances, quickly become priority areas for checks and balances. For example, there has been a special focus recently on AI privacy (especially the problem of scraping individuals’ data off the internet for use as training material) and data bias (an aspect of fairness and nondiscrimination). A further example is the extraordinarily rapid development of generative AI that can produce very realistic fake videos, pictures and text within minutes. Governments, corporations and ethicists alike see this capability as hugely threatening to the integrity of social media, which is, in turn, highly influential on people’s opinions, especially those of the young.
Along with the previously discussed possibility of investment losses from AI hype and exaggerated expectations, as well as reputational damage, the legal repercussions of being unprepared for government regulation should be front and center for boards.
The biggest short-term risk to organizations that want to create or use AI is the possibility that what they develop or use might violate governmental rules aimed at mitigating risk. In part, this problem is caused by the fact that developing technology is vastly outpacing regulation; so, when governments finally do produce rules, companies that are early adopters of AI may already be in violation of these regulations.
Here are the main AI regulations instituted so far by the governments of the United States and the European Union.
U.S. regulation of AI. In October 2022, the “Blueprint for an AI Bill of Rights” was published by the Biden White House. It identifies five principles that should be followed for safe implementation of AI. According to the “Blueprint,” these principles are:
- Safe and effective systems.
- Algorithmic discrimination protections.
- Data privacy.
- Notice and explanation.
- Human alternatives, consideration and fallback.
By September 2023, the Biden administration had convinced 15 leading corporations working on AI, including Google, Meta and OpenAI, to commit to voluntary rules intended to ensure the safety of the AI platforms they develop. There are eight commitments, consisting mostly of things these corporations intended to do in any case, such as internal and external security testing of their AI systems before release and facilitating third-party discovery and reporting of vulnerabilities in their systems.
At this time, the U.S. government is deciding which agency will regulate AI, and it is still working out how further nonvoluntary regulation will operate.
EU regulation of AI. The European Union’s “AI Act,” which has been in the works for over a year, is about to be made law for all 27 nations in the union. This act is stricter and more comprehensive than what the United States has put forth.
According to the European Commission’s website, “The AI Act aims to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI. At the same time, the regulation seeks to reduce administrative and financial burdens for business, in particular small and medium-sized enterprises.”
The AI Act will:
- Address risks specifically created by AI applications.
- Propose a list of high-risk applications.
- Set clear requirements for AI systems of high-risk applications.
- Define specific obligations for AI users and providers of high-risk applications.
- Propose a conformity assessment before the AI system is put into service or placed on the market.
- Propose enforcement after such an AI system is placed in the market.
- Propose a governance structure at the European and national levels.
Many firms are already concerned about the potential for AI ethics issues to arise. But relying on piecemeal solutions runs the risk of falling behind the rapid evolution of AI. The solution is a permanent, adaptive cycle of governance that keeps pace with investment and operations. Here are our suggested adaptation elements:
Step 1: Understand your position in the AI landscape and the likely future role of regulation. Experience shows that using an external ethics specialist is very effective in developing a shared understanding of risks. The specialist is tasked with facilitating discussion of what type of AI is being considered, what the company’s specific aims for it are and what the company’s initial ethical stance is regarding its social, ethical and regulatory ramifications. The board and operating ethics committees should simulate ethical events and assess safeguards.
Step 2: Formulate a preliminary Ethical Compliance Model. The principles and accountabilities arrived at by the board and operating ethics committees should be discussed regularly in executive and senior management meetings and in training sessions for senior employees in operating companies, so that they fully understand their responsibilities. Capturing this shared understanding is the purpose of the Ethical Compliance Model.
Step 3: Use the Ethical Compliance Model to provide corporate training for key executives, led by an independent AI ethics specialist. This training will cover items such as what AI is and which types the company will use. It will ensure deep understanding of the relevant parts of the AI ethics landscape and of how ethical breaches will be dealt with. This is not static; the Ethical Compliance Model is always evolving.
Step 4: From the Ethical Compliance Model, create compliance checklists to deploy within existing governance structures for the entire business life cycle. Awareness and use of the checklists are needed among all stakeholders and employees at all levels. The checklists include required accountabilities and practices for simulating and responding to threats (a simple sketch of what such a checklist might look like follows the five steps).
Step 5: Ensure that the model, checklists and procedures are kept current. With AI, everything is developing rapidly. Changes in regulations and how they are implemented are probably most important. So, review and revision over time will be necessary.
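As an illustration of Step 4, here is a minimal sketch, in Python, of how the nine principles could be turned into a machine-readable checklist that existing governance workflows track across the business life cycle. It is our own hypothetical example; the field names, phases and items are assumptions, not a prescribed format.

```python
# Hypothetical compliance checklist derived from the nine ethical principles.
# Field names, phases and items are illustrative only.
from dataclasses import dataclass

@dataclass
class ChecklistItem:
    principle: str        # one of the nine ethical principles
    question: str         # what the reviewer must confirm
    phase: str            # e.g., "investment", "development", "deployment"
    owner: str            # accountable role
    status: str = "open"  # "open", "passed" or "failed"

checklist = [
    ChecklistItem("Privacy",
                  "Was all training data collected with the data owners' consent?",
                  "development", "Chief Data Officer"),
    ChecklistItem("Fairness and nondiscrimination",
                  "Has the model been tested for biased outcomes across groups?",
                  "deployment", "Head of Compliance"),
    ChecklistItem("Accountability",
                  "Is a named individual answerable for harm this system causes?",
                  "investment", "Board subcommittee"),
]

# Surface unresolved items for the board or its subcommittee.
open_items = [item for item in checklist if item.status == "open"]
print(f"{len(open_items)} checklist items awaiting sign-off")
```

Keeping the checklist in a structured form like this also supports Step 5: items can be re-reviewed, reassigned and versioned as regulations and the Model itself evolve.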
We suggest that ethical considerations be integrated into (not separated from) ongoing governance processes either as a formal board agenda item or as an important item delegated to a board subcommittee.