
Singularity: The Future

By Ethan James
Trending Writer

(Photo courtesy of Air & Space Magazine)

Is the world ready for a computer that is smarter than humans? What seemed like the imagination of science-fiction writers just a couple of decades ago has become the prediction of futurists and engineers alike. The event is no longer discussed as a distant inevitability but as something expected within the next two decades. And we are not talking about just another new technology; we are talking about the next level of intelligence.

The Singularity refers to a technological intelligence that surpasses human intelligence. In the article “The Coming Technological Singularity: How to Survive in the Post-Human Era,” Vernor Vinge, citing the mathematician I. J. Good, describes the path to the Singularity as the creation of “an ultraintelligent machine [that can] design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind.” This intelligence explosion has been compared to the entire span of human evolution and intelligence, except that what took humanity thousands of years would unfold in mere days and hours. And just as human development did, the rise of machines with such capabilities will bring both positive and negative consequences.

Engineers like Ray Kurzweil, Google’s Director of Engineering, welcome this kind of future, while Elon Musk and Bill Gates have warned of a world where machines become smarter than human beings. Kurzweil sees it as progress, stating in an interview at SXSW, “What’s actually happening is [machines] are powering all of us. They’re making us smarter. We’re really going to exemplify all the things that we value in humans to a greater degree.” He, along with Vinge and others, believes that advancing beyond human intelligence will bring the next wave of human progress. That progress would come in many directions, especially in areas that already receive major research funding, such as limb prosthetics, treatments for cerebral palsy, national security, and the financial markets. Could the cure for cancer or aging be on this horizon?

Why are Bill Gates and Elon Musk warning us about this future? In an interview at the Code Conference, Elon Musk expressed a major concern about a computer that is smarter than humans, but his worry is not the occurrence of the Singularity itself, which even he sees as inevitable. His concern is the danger he foresees if such intelligence ends up in the hands of “some small set of people,” something his OpenAI Inc. is trying to mitigate through the power of collective will. Simply put, “You can’t [prevent that] if there is one AI that’s a million times better than anything else.” His answer, then, was to create a nonprofit company that conducts research in the field of artificial intelligence. Any new technology raises concerns, but what drives the urgency Elon Musk speaks of, or the worries of CEOs at large technology companies?

The biggest realistic fear is not the swift extinction of humanity but the inability to regulate artificial intelligence. How does one govern the behavior of a system that operates on a higher level of understanding and computation? The United States Supreme Court is already challenged by cases involving telecommunications, cyberspace, and computer technology. Meanwhile, engineers around the world see the inevitability of a new level of intelligence and capability that lies beyond the scope of current understanding and regulation. The challenge we will face is creating a force capable of governing an intelligence we cannot ourselves fully comprehend.


Contact Ethan at ethan.james@student.shu.edu
