Introduction – AI brings new ethical challenges and the need for professional registration
With the rapid development of Artificial Intelligence (AI) and the growing demand for its use come new and serious challenges for the ethical use of these systems.
The issues are currently most visible in emerging generative AI platforms (used for creating content such as text and images). They include misinformation, intellectual property and plagiarism, bias in training data, and the ability to influence and manipulate public opinion (for example during elections).
The Post Office Horizon IT Scandal has highlighted the vital importance of independent standards of professionalism and ethics in the application, development and deployment of technology.
It is right that the UK aims to play a leading role in developing safe, trustworthy, responsible and sustainable AI, as demonstrated by the summit in November 2023 and the associated Bletchley Declaration [1].
We argue these objectives can only be achieved when:
- AI and other high-stakes technology practitioners meet shared standards of accountability, ethics and competence, as licensed professionals
- Non-technical CEOs, leadership teams and governing boards making decisions on the resourcing and development of AI in their organisations share in that accountability and have a greater understanding of the technical and ethical issues.
This paper makes the following recommendations to support the ethical and safe use of AI:
- Every technologist working in a high-stakes IT role, in particular AI, should be a registered professional meeting independent standards of ethical practice, accountability and competence.
- Government, industry and professional bodies should support and develop these standards together to build public trust and create the expectation of good practice.
- UK organisations should publish policies on the ethical use of AI in any systems relevant to customers and employees – and those policies should also apply to leaders who are not technical specialists, including CEOs.
- Technology professionals should expect strong and supported routes for whistleblowing and escalation when they feel they are being asked to act unethically or, for example, to deploy AI in a way that harms colleagues, customers or society.
- The UK government should aim to take a lead in setting world-leading ethical standards and support UK organisations in doing so.
- Professional bodies, such as the BCS, should support this work by seeking and publishing regular research on the challenges their members face, and by advocating for the support and guidance they need and expect.
The BCS Ethics Survey 2023 – Key Findings
BCS’ Ethics specialist group carried out a survey of IT professionals in the summer of 2023, ahead of the AI Safety Summit, to help identify the challenges that practitioners face [2]. This paper outlines the key findings and suggests some actions to help address those challenges. The online survey was sent to all UK BCS members in August 2023 and was answered by 1,304 individuals.

The findings show that AI ethics is a topic that BCS members see as a priority, that many have encountered personally, and that is problematic in many ways. There is a lack of consistency in how companies deal with ethical issues in tech, with many organisations reportedly not giving any support to staff. A summary of the findings highlights that:
- 88% of participants believe it is important that the UK government takes a lead in shaping global ethical standards, in AI and other high-stakes technologies.
- When considering working for or partnering with an organisation, 90% stated that its reputation for ethical use of AI and other emerging technologies is important.
- 82% think that UK organisations should be required to publish ethical policies on their use of AI and other high-stakes technologies.
- When it comes to technologists demonstrating their ethical credentials through recognised professional standards, 81% of respondents feel this is important.
- The largest group of respondents (24%) indicated that health and care should take priority in establishing ethical standards for AI.
- 19% of those questioned have faced an ethical challenge in their work over the past year.
- Support from employers on these issues varies, with many respondents stating they had poor or no support in dealing with ethical issues relating to technology, though there were examples of good practice.
Analysis of results
Priority Areas for establishing ethical standards in AI
Almost a quarter saw health and care as the key area, which is not surprising given the obvious possible ramifications of AI in surgery, diagnostics, or patient interaction. However, according to the respondents, AI use in other fields, such as defence, criminal justice, and also banking and finance, can lead to harmful consequences and calls for attention.
Ethical Issues in Professional IT Practice
When asked whether they had faced ethical challenges in their work over the past year, 69% of respondents answered no, 19% answered yes, and 12% were not sure. This suggests that in a single year roughly a fifth of BCS members knowingly faced an ethical challenge (almost a third, if those who were unsure are included), which implies that dealing with ethical challenges is virtually unavoidable over the course of a professional career, thus supporting the long-standing emphasis on including ethics in BCS-accredited training. Interestingly, the 12% of respondents who were not sure whether they had faced ethical challenges suggest either a lack of conceptual clarity with regard to ethics, or rapidly shifting concerns that do not allow for easy categorisation of ethical issues.
Organisational Reputation for Ethical use of AI and other high-stakes technologies
The recognition of the importance of ethics and the awareness of potential harm caused by technology explain why the respondents had a very strong preference for working with organisations with a strong reputation for ethical use of technology: 90% of BCS members who responded to the survey felt that this is either very important or important. This raises challenges for organisations with regard to appropriate mechanisms that allow them to demonstrate and signal their commitment to ethical work, one of which is the publication of ethical policies.
Requirement to publish policies on ethical use of AI and other high-stakes technologies
The high standard to which respondents hold the ethical reputation of organisations they work with was reflected in the strong support for a requirement for organisations to publish their policies on ethical use of technologies, including AI.
More than 80% of respondents supported this and only 5% opposed it outright. This position is important in that it not only supports the creation of ethical policies and their application but goes beyond this by calling for a requirement to publish them.
Importance of Government Leadership on AI and technology ethics
Responses to the previous question already show that respondents feel government has a key role to play in ensuring the ethical use of critical technologies, such as AI. To investigate the theme of government further, in terms of policies and standards, we posed the question “How important is it that the UK government takes a lead in shaping global ethical standards, in AI and other high-stakes technologies?”
88% of respondents believe it is important that the UK government takes a lead in shaping global ethical standards, in AI and other emerging technologies. This shows the need for parties preparing manifestos ahead of the general election in 2024 to clarify their positions on the ethical use of technology; the same applies to the devolved administrations and the relevant policies within their remits. Initiatives like the Centre for Data Ethics and Innovation (CDEI) are a good way of developing these policies. Although the CDEI’s ethics advisory board term ended in September 2023, the centre will continue to seek ‘expert views and advice in an agile way that allows us to respond to the opportunities and challenges of the ever-changing AI landscape’ [3]. In November 2023, the UK government announced the creation of the AI Safety Institute [4].
Demonstrating ethical credentials is ‘very important’
Respondents clearly saw that they face responsibilities in their role as IT professionals. This can be seen from their strong support for the acquisition of ethical credentials, for example through ethical standards in areas such as AI or cybersecurity. Only 5% of respondents saw these as not important, with 81% seeing them as either important or very important.
Support for IT Professionals on ethical issues in AI and other high-stakes technologies
In our final question, we asked “How did your organisation support you in raising and managing the [ethical] issue?”.
Of those who responded, 41% said they received no support and 35% received ‘informal support’ (such as talking to their line manager/colleagues).
The survey also showed examples of good practice, as expressed, for example, in this response:
“My employer listened to my potential ethical concern. I was supported in discussing the potential concern with our customers. Our customers and my employer agreed to put in place controls to ensure the potential ethical situation was appropriately managed.”
This shows that some organisations follow good practice, engaging in ethical discussions and developing clear policies and procedures for employees.