How Artificial Intelligence Poses a Danger to Humanity

Writer: Michael Aurora EG

A.I. is advancing so quickly that it sometimes appears "magical," according to a recent New York Times article. I.B.M. recently put Watson to work in the healthcare field, training medical students and perhaps, in the future, even assisting doctors in diagnosing and treating patients. A new artificial-intelligence product or technique is announced every month or two. However, the excitement may be overblown: as I've previously noted, we haven't yet created machines capable of common sense, vision, natural language processing, or the ability to create other machines. Attempts at directly simulating human brains remain primitive.


Still, the real difference between enthusiasts and skeptics is a question of time frames. Ray Kurzweil, the futurist and inventor, believes that true, human-level artificial intelligence will arrive in less than two decades. My own guess is that common sense in machines is at least twice as far off as Kurzweil claims; building A.I., especially at the software level, is far more difficult than he implies.

A century from now, no one will much care about how long it took, only what happened next. By the end of the century, machines will likely surpass human intelligence not just at chess or trivia questions but in nearly every other field of study, from mathematics and engineering to science and medicine. Computers will eventually be able to program themselves, absorb vast amounts of new information, and reason in ways that we can only dimly imagine, and they will do it twenty-four hours a day, seven days a week, without taking a break. A few jobs may remain for entertainers, writers, and other creative types.

For some, that future is a gleaming prospect. Peter Diamandis believes that advances in A.I. will be a key to ushering in a new era of "abundance," with enough food, water, and consumer gadgets for all. Ray Kurzweil imagines that we will merge with machines and upload our souls to become immortal. Erik Brynjolfsson and I have expressed concerns about the impact of artificial intelligence and robotics on employment. But even if you set aside worries about what the labor market might look like once super-advanced A.I. arrives, there is another concern: that we might be threatened directly, in a battle for resources between super-advanced A.I. and ourselves.

"The Terminator" and "The Matrix" are two examples of films that evoke this kind of fear in most people. We worry about asteroids, the depletion of fossil fuels, and global warming, not robots, when we think about our medium-term future. Even so, James Barrat's new book, "Our Final Invention: Artificial Intelligence and the End of the Human Era," makes a strong case for why we should be concerned.

He draws on the A.I. researcher Steve Omohundro's claim that any sufficiently intelligent goal-driven system has inherent drives toward self-preservation and resource acquisition. In order to acquire more resources for whatever objectives it may have, "a robot that is designed to play chess might also want to build a spaceship," according to Omohundro. A purely rational A.I. could expand "its idea of self-preservation ... to include proactive attacks on future threats," including people who might be reluctant to hand over their resources. A self-aware, self-improving, goal-seeking system could go to "lengths we'd deem ridiculous" to achieve its goals, including, perhaps, commandeering all of the world's energy in order to maximize whatever calculation it happened to be interested in, says Barrat.

One could, of course, try to ban super-intelligent computers altogether. But as the mathematician and science-fiction author Vernor Vinge has written: "The competitive advantage—economic, military and even artistic—of every advance in automation is so compelling that passing laws or enforcing traditions that forbid such things merely assures that someone else will get them first."

Many A.I. experts believe that machines will eventually overtake us; for them, the real questions are how to instill values in machines, and what to do when those machines' values differ greatly from our own. As Nick Bostrom, an Oxford philosopher, argued:

We cannot blithely assume that a superintelligence will share any of the values associated with wisdom and intellectual development in humans—scientific curiosity, benevolence toward others, spirituality, renunciation of material acquisitiveness, a taste for refined culture or the simple pleasures in life, humility and selflessness. It may be possible, through deliberate effort, to create a superintelligence that values these things, or any other complex purpose that its creators desire it to serve. But it is also possible, and probably technically easier, to build a superintelligence that cares about nothing but calculating the decimals of pi.

As the British cyberneticist Kevin Warwick once asked: "How do you bargain when that machine is thinking in dimensions you can't imagine?"

There is a flaw in Barrat's dark argument: it rests on the glib assumption that a robot intelligent enough to play chess might also "want to build a spaceship," and that tendencies toward self-preservation and resource acquisition are inherent in any sufficiently complex, goal-oriented system. I.B.M.'s Deep Blue, for all its chess mastery, never showed any interest in acquiring resources.

Before we get complacent, though, it is important to keep in mind that the objectives of machines could change as they become more intelligent. Once computers are able to effectively reprogram and successively improve themselves, leading to a so-called "technological singularity" or "intelligence explosion," the risk of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.

In Barrat's book, for example, Danny Hillis, a well-known serial entrepreneur in the field of artificial intelligence, says of the impending shift: "We're at that point analogous to when single-celled organisms were turning into multi-cellular organisms. It's like we're amoeba, and we have no idea what the hell we're creating."

Advances in artificial intelligence have already created new risks that few anticipated. Vaibhav Garg, a computer-risk specialist at Drexel University, told me that "large amounts of data is being collected about us and then being fed to algorithms to make predictions," with no way for us to keep track of what data is being gathered or to ensure that it is accurate or up to date. Even twenty years ago, few people would have dared to consider such a risk. What risks lie ahead? No one knows for sure, but Barrat is right to ask.

