South Africa has a deep appreciation for the role of ethics, particularly when it comes to the problems faced by large public sector organisations and the private sector. As a country, we acknowledge the critical role played by whistleblowers and the importance of safeguarding good governance.
As a business school, many of our efforts to define and understand ethics are channelled through the GIBS Centre for Business Ethics into all our programmes and into the way we engage more broadly. Externally, the Centre for Business Ethics’ flagship Ethics Barometer is now being applied across multiple sectors, notably the accounting sector, a profession that is supposed to be the custodian of accountability (a crucial dimension of an ethics-based culture).
At the intersection of business, the government and society, we observe ethical failings – consider the happenings at the Zondo Commission. Given our context of high unemployment and elevated income inequality, the Ethics Barometer is also being used to unearth the worrying challenges small and medium-sized business owners face when dealing with large corporations and state-owned entities.
However, when we think about business ethics, we need to move beyond the issues of the day that are evident for all to see, and consider how the choices we make now might present untenable ethical challenges in the future. Many of these future-focused ethical choices and conundrums relate to our ever-expanding use of technology and the rise of artificial intelligence (AI). As we increasingly adopt digital technologies in social life, in business and in our interactions with all spheres of government, we ought to consider the extent to which we are coding our biases into the future.
Wired for good or for harm?
Have we considered how ethical choices, and their impact on our societies and personal liberties, are being coded into our digital infrastructure? All-pervasive technology has the power to build either inclusive or exclusive societies, making today’s ethical choices of utmost importance for tomorrow.
Ethical considerations should always be at the forefront of any technological endeavour. Failing to keep them there can enable fraudulent Silicon Valley start-ups such as Elizabeth Holmes’ Theranos, whose falsified blood-testing technology and deception of investors were exposed by Wall Street Journal reporter John Carreyrou. In the case of Theranos, its founder is now facing criminal charges.
There are many other, less deliberate instances where blindness to ethical considerations can result in harm. Consider AI in recruitment, where algorithms are now used to sift through CVs to find the ideal candidate. While the creators of these algorithms profess to use objective measures, like education and experience, it is not clear whether any consideration is given to the coders’ own biases, which become embedded in these algorithms. Amazon’s AI recruitment tool, for example, was found to be biased against women, resulting in few women being recommended for programming roles. The problem, in this case, was that Amazon’s recruitment AI was trained on a decade of CVs submitted predominantly by men.
As AI takes on more human decision-making responsibilities, it is clear that failing to ensure these programs are free of conscious or unconscious bias may ultimately cause untold societal harm. Imagine an algorithm that red-flags women, minorities or people from certain regions, and so limits their access to healthcare, financial services or career advancement.
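The mechanism is easy to see in miniature. The following toy sketch (all keywords and data are hypothetical; this is not Amazon’s actual system) scores CVs by how often each keyword appeared in past hires versus past rejections. Because the historical hires skew male, the model penalises a term associated with female applicants, even when the candidates’ skills are identical:

```python
from collections import Counter

# Hypothetical historical hiring data: successful CVs skew male.
past_hires = [
    ["python", "java", "rugby"],
    ["python", "c++", "chess"],
    ["java", "golf"],
]
past_rejections = [
    ["python", "netball", "womens-society"],
    ["java", "womens-society"],
]

# "Train" by counting keyword occurrences in each outcome.
hire_counts = Counter(kw for cv in past_hires for kw in cv)
reject_counts = Counter(kw for cv in past_rejections for kw in cv)

def score(cv):
    """Naive score: +1 per past-hire occurrence, -1 per past-rejection occurrence."""
    return sum(hire_counts[kw] - reject_counts[kw] for kw in cv)

# Two candidates with identical skills; only one gendered keyword differs.
candidate_a = ["python", "java", "chess"]
candidate_b = ["python", "java", "womens-society"]

print(score(candidate_a), score(candidate_b))  # → 3 0
```

The model never sees a “gender” field, yet it learns to downgrade a proxy for gender purely because of who was hired in the past; real systems reproduce the same pattern at scale.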
In-built human bias
Of course, it’s not the algorithm’s fault. AI only spews back our own human biases. This highlights the ethical importance of interrogating the development of high-tech processes and programs. Our colleague at the University of Pretoria, Professor Emma Ruttkamp-Bloem, calls for actionable research in AI to help scholars and practitioners address problems like ethics shopping (given the proliferation of AI ethics codes), ethics bluewashing (evidenced by superficial engagement with AI ethics), ethics lobbying (to promote self-regulation by technologists), ethics dumping (exporting weak ethical practices to vulnerable jurisdictions), and ethics shirking (weak execution of ethical obligations).
Meanwhile, Reid Blackman, a US-based ethical risk practitioner, argues for the adoption of diverse institutional review boards within organisations to deal with complex quandaries and ‘ethical risk’ in a more comprehensive manner. Another option is to consider the ethical propensity of technology experts, founders of tech start-ups and the people whose coding efforts underpin so much of our connected world.
Do these people demonstrate ethical intelligence1? Are corporate technology policies sufficient to guide those with the power to access data across an organisation at the click of a button? If not, are you handing the keys to the vault to individuals who are untrained for, and unfocused on, this level of ethical responsibility? Former big-tech ethical stars, Timnit Gebru and Margaret Mitchell2, among others, are very sceptical.
The starting point in this far-reaching game, as is so often the case, is people. Business and society must train their people to understand ethical responsibility. If we are not sharing ethical abuse case studies, how can technology experts and innovators know how best to manage these issues?
Kay Firth-Butterfield, head of AI at the World Economic Forum, asks us to consider that the current conversations around AI should focus on positive futures, not the sort of dystopian sci-fi tomorrows beloved by films and television. “The next conversation has to be: how do we systematically want to grow and develop AI for the benefit of the world and not just sectors of it?” she says.
This conversation is impossible without putting ethics front and centre.
1 Wickham, M. (2012). Developing an Ethical Organization: Exploring the Role of Ethical Intelligence. Organization Development Journal, 30(2).
2 Raji, I. D., Gebru, T., Mitchell, M., Buolamwini, J., Lee, J., & Denton, E. (2020, February). Saving face: Investigating the ethical concerns of facial recognition auditing. In Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society (pp. 145-151).