Whether it is “1984” by George Orwell or a Black Mirror episode on Netflix, predictions about the future of artificial intelligence have long been common. But did anyone ever truly expect those predictions to become today’s reality? Probably not. Whether artificial intelligence is a threat or a source of progress has ignited ongoing debate among politicians, CEOs, and ordinary people, creating significant divisions.
Reactions to AI and its regulation thus far have largely reflected what each state values. Because AI will significantly shape everyday digital life, Foreign Affairs describes how different countries have responded differently to digital regulation for AI. The United States, a market-driven state, approaches AI legislation in exactly that context: it has been concerned more with AI’s progress, favoring freedom of speech. China, by contrast, has adopted a state-driven approach, which is no surprise given its history of internet censorship. The European Union, meanwhile, is rights-driven, focusing on protecting individuals’ fundamental liberties. These differing approaches leave the major powers in disagreement over how to regulate AI.
Since the U.S. is more in favor of AI freedom, The Associated Press reports that strategies for shaping AI’s trajectory through philanthropy are emerging. Tech industry billionaires are increasingly inclined to back initiatives and organizations that promote the beneficial aspects of AI, whereas foundations lacking substantial wealth tend to prioritize addressing the risks associated with AI. Some are concerned more with the benefits, some more with the disadvantages, and others are equally concerned about both.
Even though billionaires in the U.S. are optimistic about the future of AI, many individuals worry that AI may worsen systemic racism. The history of past and present systemic racism in the U.S. raises questions about whether AI regulation will address concerns that these tools could deepen such problems. As the ACLU states, “There is ample evidence of the discriminatory harm AI tools can cause to already marginalized groups. After all, AI is built by humans and deployed in systems and institutions that have been marked by entrenched discrimination — from the criminal legal system to housing, to the workplace, to our financial systems.” Because of human influence, the ACLU continues, AI software is often biased, both in the situations AI is asked to predict and in the data used to train AI tools. These situations are often unrepresentative of marginalized groups, and discriminatory at worst, and can affect the “design, development, implementation, and use” of AI.
These concerns emerge and intertwine from questions about how AI should be regulated. Foreign Policy paints a picture of what Geoffrey Hinton, the “godfather of AI,” has warned: the major root of these concerns is that no one is quite sure how to control AI, since it generates itself. Technologists and researchers have published an open letter calling for a pause in the development of AI. Foreign Policy notes that Hinton eventually quit working with Google because of the persistent uncertainty surrounding AI’s future.
Image courtesy of Medium