Call for urgent regulations on artificial intelligence
By Pavlo O. Dral and Arif Ullah (the views and opinions expressed in this article are those of the authors and do not necessarily reflect the official policy or position of the authors’ affiliated institutions)
Preprints are available on Zenodo https://doi.org/10.5281/zenodo.7827174 and SSRN https://ssrn.com/abstract=4418449
AI is an extraordinarily useful technology, but it is becoming increasingly powerful, with rapidly growing capabilities to disrupt and harm human society. We call on international and national organizations and individuals to join forces in banning the development of superintelligent AI and introducing regulations to prevent and mitigate AI-caused harm.
AI is clearly established as a very powerful technology and, like any other technology such as nuclear power, chemical technology, or biotechnology, it can be used both for peaceful and ethical purposes and for unethical applications, including the creation of weapons. While we have treaties and regulations on mature technologies, such as those prohibiting human cloning and the development and proliferation of nuclear, chemical, and biological weapons of mass destruction, we have no regulations on the development of AI. This is deeply disturbing given the destructive and disruptive capabilities of AI and the rapid pace of its development.
We humans can do many things, but that does not mean we should do them. Our task is to set the boundaries of what we can do and, more importantly, to outline what we should not do. In the field of AI, no boundaries currently exist, which is a recipe for unmitigated disaster. Only non-binding calls by researchers and tech entrepreneurs exist to limit the use of AI for developing weapons [1] or to halt the development of AI models more powerful than GPT-4 [2].
To prevent, or at least mitigate, harm from AI, we call for immediate action:
- Introduce an international ban on the development of superintelligent AI, i.e., AI that is vastly more intelligent than the brightest human minds and than human society in its entirety.
- Form an international committee to regulate the development and use of AI.
Below we elaborate on how AI can continue to be used for the betterment of our lives, what risks the unregulated development and use of AI pose, and what steps are required.
AI can be good…
In the modern era, artificial intelligence (AI) has become an essential part of our lives. AI powers smart-home devices (thermostats, lighting systems, etc.), assists us through personal assistants such as Alexa and Siri, drives autonomous cars, and improves the user experience on web platforms. In addition, AI can improve efficiency and productivity by optimizing methods, supporting better decisions, forecasting future trends, improving accuracy and precision, simplifying maintenance, reducing costs, and improving traffic flow in cities, the health-care system, and customer services. The use of AI in scientific research is enhancing our understanding of the natural world, accelerating scientific discoveries, and ultimately improving the quality of human lives. In our own research, we also benefit greatly from using AI to develop faster and more accurate physically motivated models that enable researchers to solve problems in chemistry and physics [3–6].
…and harmful
Despite all these appealing aspects, the shift to an AI-dependent society can have many negative impacts:
Unemployment: AI has replaced, and may continue to replace, workers in manufacturing, customer service, laboratory work, the medical professions, and other industries. According to a recent global economic analysis, approximately 300 million jobs may be affected by AI [7].
Over-dependence: With the increased use of AI, humans risk becoming over-dependent on it, leading to the loss of critical thinking, creativity, and innovation.
Dehumanization: AI-empowered machines may treat humans as mere data points, without human empathy and compassion.
Social seclusion: In one study of the negative impact of social media, the authors reported less in-person social interaction with peers among U.S. adolescents in the 21st century [8]. Over-dependence on AI can worsen this phenomenon and lead to a loss of social cohesion and community.
Biased outcomes: AI models can be biased, for example by associating Muslims with violence [9]. If used in decision-making, such models can lead to discrimination in hiring, lending, renting, and customer service.
Security risks and access to sensitive user information: The use of AI in self-driving (autonomous) cars and smart homes raises security concerns, as these systems can be hacked, leading to leaks of sensitive information and risks to human lives. A recent study reported leakage of potentially sensitive user information from some smart-home devices available on the market [10]. The cybersecurity vulnerabilities of autonomous vehicles can enable new forms of car theft and deliberately caused accidents; such vehicles can also be used for robbery and assault [11].
Privacy: As AI systems are data-dependent, they often need to gather a lot of data on individuals for training; however, the same data could be used for unsolicited commercial and other unethical purposes. Recently, Italy became the first country in Europe to ban ChatGPT over privacy concerns [12].
Use in weapons: The implementation of artificially intelligent systems in modern warfare has drawn significant attention from the military, with the United States Army’s Future Combat Systems project as one of the most striking examples; its objective was to develop a “robot army” that could be deployed in wars [13]. In addition, the current Russian–Ukrainian war has given a glimpse of the possible use of AI-empowered weapons [1]. The use of AI in weapons has raised numerous concerns around the world, as it will enhance the speed and lethality of weapons and start a new arms race. Moreover, AI-empowered weapons cannot replicate human empathy, judgment, and intuition in complex situations, which raises ethical questions about their autonomous decision-making over life and death. An AI-empowered weapon can misunderstand human culture and values and thus end up harming innocent people. Finally, a malfunctioning AI weapon can cause atrocities, and it will then be difficult to determine whom to hold accountable.
Other concerns: Should AI be allowed to make decisions about health, life, and death in civil settings, e.g., in medicine (think of robotic surgeons, diagnosis, medicine prescription, etc.) or in autonomous vehicles (when a crash is unavoidable, a decision must be made about whose life to prioritize: that of the passengers or of the pedestrians)? In education, should AI eventually replace teachers, and to what extent should we allow AI use by students and teachers? How do we solve intellectual property issues when AI is trained on human-created and copyrighted work? How much should AI know about us, and how do we protect our data and personal privacy? How many resources should we allot to AI, given that even training and using large language models leaves a huge carbon footprint [14]? How human-like should we make AI, given that it may become manipulative [15]? Should researchers use AI to write papers [16,17]?
Can AI get out of control?
We argue that recent developments such as ChatGPT give us only a glimpse of what AI will be capable of in the not-so-distant future. The stakes are too high: if AI becomes vastly more intelligent than humans, we will have very little control over this technology and our own destiny. There is no guarantee that such superintelligent AI will serve to benefit humanity.
Some doubters may say that machines will never reach such a stage and that we will always be able to control them. One article even declared that the current ChatGPT is just a blurry JPEG of the Web with many limitations [18]. This is wishful thinking, reminiscent of plentiful examples in history when even very bright humans failed to recognize that something being impossible today does not mean the technology will not improve in the future: in 1895, the president of the Royal Society, Lord Kelvin, famously said that ‘heavier-than-air flying machines are impossible’, and, more recently, people doubted whether AI could play chess or Go better than humans [19]. Similarly, in a decade or two, we may look back at current AI models such as ChatGPT or GPT-4 the way we now look at the early computers of the 1970s. Thus, it is important to become aware of AI’s positive and negative impacts on human society and to realize our responsibilities before AI gets out of hand. That is why, in this opinion paper, we call for a ban on developing superintelligent AI, which puts the societal order on the line. This ban is absolutely essential, at least until we better understand the dangers of the new technology and find ways to control it.
AI raises existential questions
As mentioned above, AI can potentially replace hundreds of millions of human workers [7], which raises important societal and even existential questions. AI creates impressive artworks and can be used to write books over a weekend. Even in science, AI can already do highly complex work, e.g., help prove mathematical theorems that challenge the best mathematicians [20], and one study [21] claims ‘emergent autonomous scientific research capabilities’ of AI for chemistry. We can imagine that AI may become better than the best human scientists, engineers, programmers, and artists. No occupation is safe. We pride ourselves on currently being the most intelligent beings, but this may soon be over. Do we as humans really want machines to take away our jobs, livelihoods, and vocations? What would be the role of humans in such a ‘brave new world’, where we cannot do any work better than machines, and why, then, would machines need us humans?
What should we do?
As AI continues to advance and becomes more integrated into our society, it is extremely important to take steps to mitigate its negative impact and prevent its misuse. International organizations, national governments, companies, policymakers, and other stakeholders, as well as individuals, should collaborate on establishing ethical guidelines, promoting accountability and transparency, and nurturing a culture of critical thinking and responsible use of AI.
It is of paramount importance to establish an international committee to create binding regulations on the development and use of AI. This committee should include specialists from different areas, ranging from AI researchers to political scientists to international law experts. Governments should work together on committing to and enforcing the regulations based on the committee’s suggestions. Such a committee could be formed by the United Nations.
This committee will have to work through a multitude of open questions raised by AI technologies. Here are some suggestions.
Regulations: Governments and international organizations should come together to establish regulations and ethical guidelines for the development and use of AI. These regulations should address issues such as the lethal use of AI, bias, privacy, security, and transparency.
Mitigating the impact on the job market: In a recent study, legal services, securities and commodities trading, investment, customer service, telemarketing, and teaching jobs were found to be those most exposed to advances in AI [22]. To mitigate the negative impact of AI on these sectors, governments, companies, and educational institutions should launch programs to retrain workers and equip them with the skills needed to work alongside AI. In addition, retraining programs should help workers transition to other industries where jobs are abundant.
Accountability and legal considerations: As AI itself cannot be held responsible for the harm it may cause, mechanisms should be established to hold accountable any organization or individual responsible for its misuse or harmful deployment. All organizations and individuals involved in the development of AI should abide by the regulations and laws preventing its misuse.
Ethical considerations: AI developers should consider the implications of their research for human society and prioritize developments aligned with societal benefit.
Awareness: It is necessary to educate the public, policymakers, AI developers, and users about the potential risks of AI and to inform them about best practices. This will help prevent harmful developments and the misuse of AI.
Watchdogs: Regulatory bodies should be established to oversee the development and use of AI and to make sure that they meet legal and ethical standards.
Preventing AI use in weapons: All tech companies and AI developers should consider the potential misuse of their developments in weapons and should continually convey their findings and concerns to policymakers and other regulatory bodies.
Robust testing: It should be ensured that the data used for AI training are diverse and free of bias and error, and that AI does not learn anything harmful.
Preventing over-regulation of AI: Having said all this about the required regulations, we want to emphasize that all stakeholders, including governments, organizations, AI developers, manufacturers, teachers, and researchers, should work together to ensure that AI is not over-regulated. Regulations should prevent the misuse of AI without hindering advances in the field of AI for peaceful purposes.
Overall, preventing the misuse of AI requires a comprehensive framework of regulations, ethical responsibilities, and technical and social considerations. In many areas of life, AI can be of benefit if used with care; however, we need to realize our responsibilities.
Summary
AI technology is extremely useful and can bring many benefits to humanity. We ourselves extensively use AI to advance our areas of research. Thus, we are not calling for a complete ban on AI; rather, we want AI to remain at the level of a useful tool. This requires creating regulations that hinder the use of AI for unethical and harmful applications, prevent AI from becoming superintelligent and too autonomous, and keep it from replacing vast numbers of humans in their jobs.
AI offers enormous potential for technological and scientific advancement; however, its use comes with responsibilities. It is important to ensure that the use of AI meets ethical standards and is safe for both humans and the surrounding environment. Researchers should consider the negative impacts of their research and developments on society and should prioritize research that is in line with ethical and moral standards. In addition, regulations and ethical guidelines should be introduced to help prevent the misuse of AI. Regulatory bodies should play the role of watchdogs and hold accountable tech companies and individuals that deploy or use AI for harmful purposes.
While we call for establishing international regulatory bodies, we realize that the unprecedented speed of AI development is at odds with the slow pace of international efforts. Thus, we also call on national governments, companies, and individuals to reflect on the dangers of AI, form internal ethics and regulation committees, and refrain from developing unethical and dangerous technology. Commercial organizations and entrepreneurs working on AI may need to voluntarily put their ambitions and desire for profit or power on hold for the sake of humanity. Recent examples are Italy’s decision to ban ChatGPT [12] and an open letter calling for a moratorium on developing models more powerful than GPT-4 [2].
References
1. Russell, S., AI weapons: Russia’s war in Ukraine shows why the world must enact a ban. Nature 2023, 614, 620–623.
2. Pause Giant AI Experiments: An Open Letter. https://futureoflife.org/open-letter/pause-giant-ai-experiments/.
3. Dral, P. O.; Barbatti, M., Molecular excited states through a machine learning lens. Nature Reviews Chemistry 2021, 5, 388–405.
4. Zheng, P.; Zubatyuk, R.; Wu, W.; Isayev, O.; Dral, P. O., Artificial intelligence-enhanced quantum chemical method with broad applicability. Nature Communications 2021, 12, 7022.
5. Ullah, A.; Dral, P. O., Predicting the future of excitation energy transfer in light-harvesting complex with artificial intelligence-based quantum dynamics. Nature Communications 2022, 13, 1930.
6. Dral, P. O., Quantum Chemistry in the Age of Machine Learning. Elsevier: Amsterdam, Netherlands, 2023.
7. The Potentially Large Effects of Artificial Intelligence on Economic Growth (Briggs/Kodnani). https://www.key4biz.it/wp-content/uploads/2023/03/Global-Economics-Analyst_-The-Potentially-Large-Effects-of-Artificial-Intelligence-on-Economic-Growth-Briggs_Kodnani.pdf.
8. Twenge, J. M.; Spitzberg, B. H.; Campbell, W. K., Less in-person social interaction with peers among U.S. adolescents in the 21st century and links to loneliness. Journal of Social and Personal Relationships 2019, 36, 1892–1913.
9. Abid, A.; Farooqi, M.; Zou, J., Large language models associate Muslims with violence. Nature Machine Intelligence 2021, 3, 461–463.
10. Apthorpe, N.; Reisman, D.; Feamster, N., A smart home is no castle: privacy vulnerabilities of encrypted IoT traffic. arXiv preprint arXiv:1705.06805 2017.
11. Algarni, A.; Thayananthan, V., Autonomous vehicles: The cybersecurity vulnerabilities and countermeasures for big data communication. Symmetry 2022, 14, 2494.
12. Artificial intelligence: stop to ChatGPT by the Italian SA. Personal data is collected unlawfully, no age verification system is in place for children. Garante per la protezione dei dati personali. https://www.gpdp.it/web/guest/home/docweb/-/docweb-display/docweb/9870847#english. 2023.
13. Feickert, A.; Lucas, N. J., Army Future Combat System (FCS) “spin-outs” and Ground Combat Vehicle (GCV): background and issues for Congress. Congressional Research Service, Library of Congress, Washington, DC, 2009.
14. An, J.; Ding, W.; Lin, C., ChatGPT: tackle the growing carbon footprint of generative AI. Nature 2023, 615, 586.
15. Veliz, C., Chatbots shouldn’t use emojis. Nature 2023, 615, 375.
16. Stokel-Walker, C.; Van Noorden, R., What ChatGPT and generative AI mean for science. Nature 2023, 614, 214–216.
17. van Dis, E. A. M.; Bollen, J.; Zuidema, W.; van Rooij, R.; Bockting, C. L., ChatGPT: five priorities for research. Nature 2023, 614, 224–226.
18. Chiang, T., ChatGPT is a blurry JPEG of the Web. The New Yorker, 2023. https://www.newyorker.com/tech/annals-of-technology/chatgpt-is-a-blurry-jpeg-of-the-web.
19. Silver, D.; Huang, A.; Maddison, C. J.; Guez, A.; Sifre, L.; van den Driessche, G.; Schrittwieser, J.; Antonoglou, I.; Panneershelvam, V.; Lanctot, M.; Dieleman, S.; Grewe, D.; Nham, J.; Kalchbrenner, N.; Sutskever, I.; Lillicrap, T.; Leach, M.; Kavukcuoglu, K.; Graepel, T.; Hassabis, D., Mastering the game of Go with deep neural networks and tree search. Nature 2016, 529, 484–489.
20. Castelvecchi, D., How will AI change mathematics? Rise of chatbots highlights discussion. Nature 2023, 615, 15–16.
21. Boiko, D. A.; MacKnight, R.; Gomes, G., Emergent autonomous scientific research capabilities of large language models. arXiv:2304.05332v1 [physics.chem-ph] 2023.
22. Felten, E.; Raj, M.; Seamans, R., How will language modelers like ChatGPT affect occupations and industries? arXiv preprint arXiv:2303.01157 2023.