This article aims to provide a general view of responsible AI. To learn more about IBM’s specific point of view, see our AI ethics page.
The widespread adoption of machine learning in the 2010s, fueled by advances in big data and computing power, brought new ethical challenges, such as bias, a lack of transparency and concerns over the use of personal data. AI ethics emerged as a distinct discipline during this period as tech companies and AI research institutions sought to proactively manage their AI efforts responsibly.
According to Accenture research: “Only 35% of global consumers trust how AI technology is being implemented by organizations. And 77% think organizations must be held accountable for their misuse of AI.”1 In this atmosphere, AI developers are encouraged to guide their efforts with a strong and consistent ethical AI framework.
This applies particularly to generative AI, which enterprises are now adopting at a rapid pace. Responsible AI principles can help adopters harness the full potential of these tools while minimizing unwanted outcomes.
AI must be trustworthy, and for stakeholders to trust AI, it must be transparent. Technology companies must be clear about who trains their AI systems, what data was used in that training, and, most importantly, how their algorithms arrive at their recommendations. If we are to use AI to help make important decisions, it must be explainable.
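As a minimal sketch of what explainability can look like in practice, the following Python snippet fits a simple linear classifier and reads off the features that most influence its predictions. It assumes scikit-learn is available; the dataset and model here are purely illustrative and not tied to any particular vendor's approach.

```python
# A minimal explainability sketch (illustrative only): for a linear model,
# standardized coefficients act as a simple, global explanation of which
# input features push predictions toward each class.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

# Rank features by the magnitude of their learned weights and show the
# five that most strongly drive the model's recommendations.
coefs = model.named_steps["logisticregression"].coef_[0]
top = sorted(zip(data.feature_names, coefs), key=lambda t: abs(t[1]), reverse=True)[:5]
for name, weight in top:
    print(f"{name}: {weight:+.2f}")
```

Linear coefficients are among the simplest explanation mechanisms; more complex models typically require post-hoc explanation tools. The underlying principle, however, is the same: stakeholders should be able to see which inputs drove a given recommendation.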