Ethical reflections on the use of AI

Sunbears
4 min read · Dec 6, 2023


Sam Altman, CEO of OpenAI
Source: CNBC

In the space of less than a week, OpenAI made two significant personnel decisions. On 17th November, the Board of Directors fired co-founder and CEO Sam Altman, stating that he had been less than forthcoming in his communications with the Board and had thereby hindered its ability to fulfil its responsibilities. Four days later, on 21st November, OpenAI announced that it was welcoming Altman back while partially reorganising the Board that had fired him.

OpenAI’s core product, ChatGPT, has been controversial ever since its introduction. On the one hand, as described in our previous articles, ChatGPT draws on powerful natural language processing capabilities to make people’s lives easier in healthcare, finance, transportation, cybersecurity, and entertainment. On the other hand, it raises worrying safety concerns and faces challenges around ethical use, privacy protection, and the avoidance of bias and misuse. Reasonable regulation is therefore necessary to ensure that AI serves humans safely and with reduced risk. As the company behind ChatGPT, OpenAI should be the first to lead by example on regulation. However, OpenAI’s five days of chaos have exposed weaknesses in the company’s ability to govern itself, which worries both those who believe AI poses an existential risk and proponents of AI regulation.

Ethical questions often accompany the birth of cutting-edge technology. Altman, who is fond of comparing artificial intelligence to nuclear weapons, likes to associate himself with Oppenheimer. Yet when confronted with the ethical weight of his work, Oppenheimer felt deep guilt and responsibility after the war for his role in developing the atomic bomb, reciting a line from the Bhagavad Gita after the first bomb exploded: “Now I am become death, the destroyer of worlds”. Altman, by contrast, is rather optimistic. In his view, this technological revolution is inevitable, and the trajectory of AI models is clear enough that OpenAI has drawn up a roadmap for future development. Part of that roadmap is alignment, the scientific effort to ensure that AI behaves as expected and without unintended consequences; Altman has emphasised the importance of alignment in preventing AI from causing harm.

Of course, it would be foolish to ignore the potential risks of AI, and OpenAI recognises this. Under Altman’s leadership, OpenAI has committed itself to the responsible development and deployment of AI technologies, to establishing a balanced governance framework, and to taking proactive measures against the disruptive challenges AI poses. For example, OpenAI continually updates its usage policies to set special requirements for specific uses such as healthcare, finance, and law; it provides rate-limiting controls and security best practices to help application developers keep their applications secure; and Altman has repeatedly called on governments to regulate him and his company.
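As a rough illustration of what those developer-facing guardrails look like in practice, the sketch below applies the exponential-backoff pattern that OpenAI’s rate-limit guidance recommends when an application is throttled. It assumes the openai Python SDK (v1); the model name, retry counts, and helper function are illustrative choices of ours, not OpenAI’s own code.

```python
import time
import openai
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_with_backoff(prompt: str, retries: int = 5, base_delay: float = 1.0) -> str:
    """Call the chat endpoint, backing off exponentially when rate-limited."""
    for attempt in range(retries):
        try:
            response = client.chat.completions.create(
                model="gpt-3.5-turbo",  # illustrative model choice
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except openai.RateLimitError:
            # Wait 1s, 2s, 4s, ... before retrying, as the rate-limit guidance suggests.
            time.sleep(base_delay * (2 ** attempt))
    raise RuntimeError("rate limit persisted after all retries")
```

Patterns like this do not answer the ethical questions, but they show the level at which OpenAI’s current controls operate: on individual applications, not on the technology as a whole.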

However, when the ethical threat of AI materialises, do existing controls remain effective? The answer is ambiguous. “Maybe someone will use AI to invent superbugs; maybe someone will use AI to launch a nuclear weapon; maybe AI itself will turn against humans — the solution to each scenario is unclear”. So says Heidy Khlaaf, an engineer specialising in evaluating and validating safety protocols for drones and large nuclear power plants.

Rayid Ghani, Distinguished Career Professor of Artificial Intelligence and Public Policy at Carnegie Mellon University, said, “I think it’s complete nonsense to say, ‘Please regulate us because if we don’t, we’re going to destroy the world and humanity.’ — I think it’s a complete distraction from the real risks that are happening right now around job loss, discrimination, transparency and accountability”.

On top of that, OpenAI’s board of directors had questioned the ChatGPT development team led by Altman, worrying that he was paying too little attention to the dangers posed by AI, reportedly one of the key reasons the board decided to fire him. Dramatically, within just five days, that same board decided to welcome Altman back. We may never know the full reasons, but what is clear is that the balance of power has tilted towards those, led by Altman, who wish to profit from innovation, and that these people may steer the future development of ChatGPT.

In conclusion, the regulation of AI is, from an ethical perspective, a complex dilemma. It should not be reduced to a vague slogan such as “We need to be regulated!”, nor should the rules be written by the winners of a struggle over capital. We still have a long way to go before we work out how to make AI safe for humans.

Reference List

Guest P. and Meaker M. (2023) Sam Altman’s Second Coming Sparks New Fears of the AI Apocalypse [online] available from https://www.wired.com/story/sam-altman-second-coming-sparks-new-fears-ai-apocalypse/

Metz R., Chang E. and Savov V. (2023) Altman Returns as OpenAI CEO in Chaotic Win for Microsoft [online] available from https://www.bloomberg.com/news/articles/2023-11-22/sam-altman-to-return-as-openai-ceo-with-a-new-board#xj4y7vzkg

Titcomb J. and Field M. (2023) How the Oppenheimer of AI defeated the doomsayers [online] available from https://www.telegraph.co.uk/business/2023/11/23/sam-altman-openai-chatgpt-oppenheimer-doomsayers/



Sunbears

Driven by our passion for sports, we have made it our goal to contribute to the development of the sports world.