Dr. Yeasted, As AI grows more complex, who would most appropriately decide what limits and rights are appropriate? Should that be left in the hands of government leaders or the citizens of a society, and should these decisions apply worldwide or nation by nation? Is there a point at which AI should be self-regulating, and if so, what qualities would it need to possess to make this transition?
–Rachel, New York
It’s a great question, and the answer is actually quite beautiful.
The first influences on our thinking about appropriate limits for new technology came from popular literature and other media, some science fiction, some history and philosophy. Then came the modern-day philosophers and ethicists who spotted the potential perils of the technology and banded together to create guidelines for safe use. Such was the case at the Asilomar Conference on Beneficial AI held in January 2017. Documentaries soon followed to make the concerns about AI more tangible, carrying the message directly to everyone on their couch watching Netflix. Public awareness increased just enough for people to raise concerns with government officials. In May of 2018, the EU's GDPR, a sweeping set of data protection rules, took effect, and in October 2023 President Biden signed the US executive order on AI. That order requires AI developers to share safety test results with the government before releasing systems to the public, and it directs the development of standards for ethical AI, for detecting and labeling AI-generated content, for limiting the replacement of humans in the workforce, and for preventing data theft and the spread of misinformation.
The US has reached out to other nations to remind them that this is not a US or even a European problem, but a worldwide concern. Still, it is probably a good thing to let each nation decide its own guardrails for the use of AI within its borders, since each country has a distinct culture and history that shapes how its people regard the technology. Japan, for example, seems to be very accepting of AI. That being said, certain worldwide ethical standards, particularly those regarding the rights of humans, should still be followed and, when necessary, enforced (just in case a country like Turkey decides to use fully autonomous weapons on civilians).
As encouraging as that progression has been as a sign of humanity's ability to adapt to a new existential threat, AI has still been allowed to advance faster than we can regulate it. Many would argue that state and federal governments are simply too slow to keep up with the pace of technological development. So "soft laws" produced by companies, organizations, or local governments are often the best we can do when new technology first appears. Even before soft law, we have our instant intuitive framework called our conscience, built on years of interaction with other humans and, yes, popular literature and media. As a wise man once told me, "An itemized list of robot rights won't be as influential as public opinion." In other words, if enough of us FEEL like we're interacting with a sentient lifeform worthy of respect, that may be the deciding factor in what rights we grant it. Only after the fact will those rights be put into law.
As for self-regulation, humans should always stay in the loop and maintain meaningful control.
Any time we imbue technology with autonomy, at least three criteria should be met: it should align with our human values, be transparent, and be turn-off-able (it's a word now)!
Thanks so much for your question, Rachel!
Thank you all so much for the love and support for my new book!
Christian