Code of Ethics in Technology as a Business Concept
There is no doubt in most people’s minds that AI is one of the most transformative technologies of our time. There is no doubt in mine that it is the most transformative technology ever.
The “of our time” position has been borne out already. By the end of 2024, my “ever” position will likely be validated as well.
AI has lost no time in unfolding its immense potential to improve efficiency, aid research and discovery, enhance decision-making, and solve complex problems – all at a blinding rate of acceleration. It is precisely this acceleration, propelling the future toward us faster than ever before, that I believe will lead us to conclude this year that AI has surpassed every other advance ever made.
The cold, hard truth
Given AI’s potential to be a more powerful positive force than any other in history, it stands to reason that it carries equal potential on the negative side of the equation, as has been true of every invention or discovery before it. This raises significant ethical concerns that demand our attention and thoughtful consideration.
So I took an informal poll of six experts with deep AI experience – as practitioners and as academics – and asked them, simply, what we should be worried about. How, I asked, should we – and could we – ensure that AI systems act ethically? Here is their collective thinking on six issues.
1. Data Bias
One of the foremost ethical concerns surrounding AI is data bias. AI systems are only as good as the data they’re trained on, and skewed training data yields skewed decisions, so objective data curation becomes paramount. Developers and researchers must therefore prioritize and standardize rigorous bias testing and continuous monitoring.
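To make the idea concrete, here is a minimal sketch of the kind of check such testing might include – a demographic parity measurement on hypothetical loan-approval predictions. The group labels, data, and alert threshold are all illustrative assumptions, not a standard.

    from collections import defaultdict

    def approval_rates_by_group(predictions):
        """predictions: list of (group_label, approved) pairs."""
        totals, approvals = defaultdict(int), defaultdict(int)
        for group, approved in predictions:
            totals[group] += 1
            approvals[group] += approved  # True counts as 1, False as 0
        return {g: approvals[g] / totals[g] for g in totals}

    def demographic_parity_gap(rates):
        """Largest difference in approval rate between any two groups."""
        return max(rates.values()) - min(rates.values())

    # Hypothetical model outputs for two demographic groups.
    preds = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

    rates = approval_rates_by_group(preds)
    gap = demographic_parity_gap(rates)
    print(rates)          # approx. {'A': 0.67, 'B': 0.33}
    print(round(gap, 3))  # 0.333

    # Continuous monitoring might re-run this on live traffic and alert
    # when the gap exceeds a threshold chosen per application and policy.
    ALERT_THRESHOLD = 0.2  # illustrative value
    if gap > ALERT_THRESHOLD:
        print("Bias alert: approval rates diverge across groups")

Demographic parity is only one of several competing fairness metrics; deciding which one applies to a given system is itself an ethical judgment.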
2. Privacy
As AI systems become more sophisticated and far-reaching in data collection and analysis, the line between security and surveillance blurs. From facial recognition to smart home devices, the potential for invasions of privacy, not to mention election tampering and corporate hacking, is ominous.
3. Accountability
As AI systems make more of the decisions that impact our lives, it becomes increasingly critical to establish clear lines of responsibility. Who should be held accountable when an autonomous vehicle makes a mistake? Or when an AI-assisted diagnosis leads to the wrong medication or therapy? This applies in the legal arena, too.
4. Job Displacement
New technologies inevitably lead to job losses in old industries but even greater job creation in new ones. A seamless transition from old industries to new depends on a four-part coalition among the individuals who need the jobs, the employers who will offer them, the higher-education institutions that will develop a skilled workforce, and the government that will fund the effort. At the same time, it must be remembered that, over the last three years, progress has been made in shrinking the income and wealth gap – ground that has been gained and that cannot be relinquished. This is nothing less than an issue of national interest.
5. Transparency
All AI stakeholders – producers, educators, users, and casual observers – deserve a clear understanding of how AI systems make decisions. An algorithm can be sinister just as easily as it can be life-supporting, which makes scrutiny a key factor.
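As one illustration of what such scrutiny might look like in practice, here is a minimal sketch that surfaces per-feature contributions for a simple linear scoring model. The model, feature names, and weights are hypothetical; production systems need far richer explainability tooling, but the principle – showing a reviewer which inputs drove a decision – is the same.

    FEATURES = ["income", "debt", "years_employed"]  # illustrative inputs
    WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
    BIAS = 0.1

    def score(applicant):
        """Linear score: higher means more likely to be approved."""
        return BIAS + sum(WEIGHTS[f] * applicant[f] for f in FEATURES)

    def explain(applicant):
        """Break the score into per-feature contributions a reviewer can read."""
        contributions = {f: WEIGHTS[f] * applicant[f] for f in FEATURES}
        return sorted(contributions.items(), key=lambda kv: -abs(kv[1]))

    applicant = {"income": 1.2, "debt": 0.9, "years_employed": 0.4}
    print(f"score = {score(applicant):.2f}")        # score = 0.10
    for feature, contribution in explain(applicant):
        print(f"  {feature}: {contribution:+.2f}")  # debt dominates here

For opaque models the same goal is pursued with post-hoc techniques such as permutation importance, but the ethical point holds regardless of method: a decision no one can explain is a decision no one can scrutinize.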
6. What’s Ahead?
We’re nowhere near the development of superintelligent AI – the 800-pound gorilla in the room – but we’re closer than we think, as the rate of acceleration comes into play. As we move closer to creating AI systems that surpass human intelligence, questions about their control and alignment with human values come to the fore. If you haven’t seen the film 2001: A Space Odyssey yet, do not wait any longer. Safeguards must be in place to prevent AI from evolving in ways that could threaten humanity. We can start by defaulting to Isaac Asimov’s Three Laws of Robotics – a fitting place to end this essay, and my invitation to you to look them up and continue from there.