Artificial Intelligence: Benefits and Dangers

Writer: Michael Aurora EG

WHAT IS AI?

Artificial intelligence (AI) is advancing at a rapid pace, from Siri to self-driving cars. Robots with human-like characteristics are common in science fiction, but AI can be anything from Google's search algorithms to IBM's Watson to autonomous weapons systems.

Today's artificial intelligence is properly called "narrow AI" (or weak AI), because it is designed to perform a single task (e.g. only facial recognition, only internet searches, or only driving a car). In the long run, however, many researchers aim for general AI (AGI, or strong AI). While narrow AI may outperform humans at its specific task, such as playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.

WHY RESEARCH AI SAFETY?

In the near term, the goal of keeping AI's impact on society beneficial is what motivates safety research. As the sections below argue, the danger from advanced AI is not malevolence but competence: a highly capable system pursuing goals that are misaligned with ours. And because the required safety research could itself take decades to complete, the prudent time to start it is now, not once powerful AI has already arrived.

HOW CAN AI BE DANGEROUS?

A superintelligent AI is unlikely to exhibit human emotions like love or hate, and there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, experts consider two scenarios most likely when asking how AI might become a risk:

  1. The AI is programmed to do something devastating. Autonomous weapons are artificial intelligence systems that are programmed to kill. In the wrong hands, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war, which would also produce mass casualties on the human side. To avoid being thwarted by an enemy, such weapons would be designed to be extremely difficult to simply "turn off," so humans could plausibly lose control of the situation. This risk exists even at today's levels of AI intelligence and autonomy, and it grows as those levels rise.
  2. The AI is programmed to do something beneficial, but it develops a destructive method for achieving its goal. This can happen whenever the AI's goals and ours fail to fully coincide. Ask an obedient intelligent car to take you to the airport as fast as possible, and it may do literally what you asked rather than what you wanted. Give a superintelligent system an ambitious geoengineering project, and it may view human attempts to stop it as threats to be dealt with.

As these examples illustrate, it's not malevolence but competence that worries people about advanced AI. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren't aligned with ours, we have a problem. If you're in charge of a hydroelectric green-energy project and there's an anthill in the region to be flooded, you're probably not an evil ant-hater who steps on ants out of malice, but it's still too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.
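To make the alignment problem above concrete, here is a minimal sketch in Python of a misspecified objective. The routes, cost numbers, and weights are all hypothetical; the point is only that an optimizer faithfully minimizes whatever objective it is given, so anything left out of that objective (comfort, legality, safety) is simply ignored.

```python
# Toy illustration of objective misspecification: an optimizer does exactly
# what its objective says, not what its designer meant. All routes, costs,
# and weights below are hypothetical.

# Candidate routes to the airport: (name, minutes, discomfort/danger penalty)
ROUTES = [
    ("highway",               35, 0),   # the route a human driver would pick
    ("shoulder_at_top_speed", 12, 90),  # fastest, but dangerous and illegal
    ("back_streets",          50, 5),
]

def pick_route(routes, discomfort_weight):
    """Return the route minimizing time plus weighted discomfort."""
    return min(routes, key=lambda r: r[1] + discomfort_weight * r[2])

# Objective as literally stated ("as fast as possible"): discomfort ignored.
print(pick_route(ROUTES, discomfort_weight=0.0))  # ('shoulder_at_top_speed', 12, 90)

# Objective as actually intended: safety and comfort matter too.
print(pick_route(ROUTES, discomfort_weight=1.0))  # ('highway', 35, 0)
```

The optimizer is equally "obedient" in both cases; what changes its behavior is whether the stated objective captures everything we actually care about.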

WHY IS THERE SO MUCH INTEREST IN AI SAFETY?

Many prominent figures in science and technology, including Stephen Hawking, Elon Musk, Steve Wozniak, and Bill Gates, have recently expressed concern in the media and via open letters about the risks posed by AI. Why is this topic suddenly making headlines?

For a long time, the prospect of a powerful artificial intelligence was dismissed as science fiction, centuries or more away. Thanks to recent breakthroughs, however, AI has reached milestones that experts only five years ago thought were decades off, leading many to take seriously the possibility of superintelligence within our lifetime. While some experts still guess that human-level AI is centuries away, most AI researchers at the 2015 Puerto Rico Conference guessed that it would arrive before 2060. Since the required safety research could itself take decades to complete, it is prudent to begin it now.

Because AI has the potential to become more intelligent than any human, we have no surefire way of predicting how it will behave. Nor can we use past technological developments as much of a basis, because we've never created anything that could outsmart us, wittingly or unwittingly. The best example of what we might face could be our own evolution: people now control the planet not because we're the strongest, fastest, or biggest, but because we're the smartest. If we're no longer the smartest, how long will we remain in control?

FLI's position is that our civilization will flourish as long as we win the race between the growing power of technology and the wisdom with which we manage it. In the case of AI, FLI believes the best way to win that race is not to impede the technology but to accelerate that wisdom, by supporting AI safety research.

THE MOST COMMON MISCONCEPTIONS ABOUT ADVANCED ARTIFICIAL INTELLIGENCE

The future of artificial intelligence, and what it will or should mean for humanity, is the subject of a fascinating debate. Hot-button issues include whether human-level AI is possible at all, whether it could lead to an intelligence explosion, and whether we should welcome or fear it. But people can also slip into stale arguments by misunderstanding each other or talking past each other. Let's dispel a few common misconceptions so that we can keep our attention on the important debates and unanswered questions.

TIMELINE MYTHS

The first myth concerns the timeline: how long will it be before machines greatly surpass human-level intelligence? A common misconception is that we already know the answer with great certainty.

Some people believe we'll have superhuman AI by the end of this century. But technological hype has a long history: where are the fusion power plants and flying cars we were promised we'd have by now? AI has been repeatedly over-hyped in the past, even by some of the field's founders. AI pioneers John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon wrote this overly optimistic forecast about what could be accomplished in two months with stone-age computers: "We propose that a 2 month, 10 man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College [...] An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer."

On the other hand, a popular counter-myth is that we know we won't get superhuman AI this century. Researchers have made a wide range of estimates for how far we are from superhuman AI, but given the dismal track record of such techno-skeptic predictions, we certainly can't say with great confidence that the probability is zero this century. Ernest Rutherford, arguably the greatest nuclear physicist of his time, dismissed atomic energy as "moonshine" less than 24 hours before Leo Szilard conceived of the nuclear chain reaction, and Astronomer Royal Richard Woolley dismissed interplanetary travel as "utter bilge" in 1956. The most extreme version of this myth holds that superhuman AI is physically impossible and therefore will never arrive. Physicists, however, know that a brain consists of quarks and electrons arranged to act as a powerful computer, and that no law of physics prevents us from building even more intelligent quark blobs.

In a number of surveys, AI researchers have been asked how many years from now they think we'll have human-level AI with at least 50% probability. All of these surveys share the same conclusion: the world's leading experts disagree, so we simply don't know. For example, in a poll of AI researchers at the 2015 Puerto Rico AI conference, the average (median) answer was the year 2045, but some researchers guessed hundreds of years or more.
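The choice of the median here matters: a few "hundreds of years" answers would drag an ordinary mean far into the future while barely moving the median. A quick sketch with made-up numbers (these are hypothetical guesses, not the actual Puerto Rico survey data):

```python
from statistics import mean, median

# Hypothetical timeline guesses (year when human-level AI arrives, at 50%
# probability). These are invented numbers, NOT the actual survey data.
guesses = [2035, 2040, 2045, 2050, 2060, 2075, 2300, 3000]

print(f"mean:   {mean(guesses):.0f}")    # 2201, dragged out by two outliers
print(f"median: {median(guesses):.0f}")  # 2055, a robust 'typical' answer
```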

A related myth is that the people who worry about AI think superhuman AI is only a few years away. In fact, most people on record worrying about superhuman AI guess it is still at least decades off. But they argue that as long as we can't be sure it won't happen this century, it is smart to begin safety research now in order to be prepared for the eventuality. Many of the safety problems associated with human-level AI are so hard that they may take decades to solve, so it is prudent to start researching them now rather than the night before some programmers drinking Red Bull decide to switch one on.

CONTROVERSY MYTHS

Another common misconception is that the only people who harbor concerns about AI and advocate AI safety research are luddites who don't know much about AI. When Stuart Russell, author of the standard AI textbook, mentioned this during his Puerto Rico talk, the audience laughed loudly. A closely related misconception is that supporting AI safety research is hugely controversial. In fact, to justify a modest investment in AI safety research, people don't need to be convinced that the risks are high, merely that they are non-negligible, just as a modest investment in home insurance is justified by a non-negligible probability of the house burning down.
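The house-insurance analogy is ordinary expected-value arithmetic: a small probability of a large loss can justify a modest up-front cost. A minimal sketch with hypothetical figures:

```python
# Expected-value arithmetic behind the insurance analogy.
# All figures are invented for illustration, not actuarial data.

p_fire  = 0.003    # assumed annual probability of the house burning down
loss    = 300_000  # assumed replacement cost of the house, in dollars
premium = 600      # assumed annual insurance premium, in dollars

expected_loss = p_fire * loss  # 0.3% of $300,000 = $900 per year

# The premium is worth paying even though a fire is unlikely, because the
# expected annual loss exceeds it. The same logic justifies a modest
# investment in safety research against a non-negligible, high-stakes risk.
print(f"expected annual loss ${expected_loss:.0f} vs premium ${premium}")
```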

It may be that the media have made the AI safety debate seem more polarized than it really is. After all, fear sells, and fear-mongering articles generate more clicks than nuanced and balanced ones. As a result, two people who know about each other's positions only from media quotes are likely to think they disagree more than they really do. For example, a techno-skeptic who read about Bill Gates's position only in a British tabloid may mistakenly think Gates believes superintelligence to be imminent. Similarly, someone in the beneficial-AI movement who knows nothing about Andrew Ng's position except his quote about overpopulation on Mars may mistakenly think he doesn't care about AI safety, whereas in fact he does. The crux is simply that because Ng's timeline estimates are longer, he naturally tends to prioritize short-term AI challenges over long-term ones.

MYTHS ABOUT THE DANGERS OF ARTIFICIAL INTELLIGENCE

Many AI researchers roll their eyes when they see the headline "Stephen Hawking warns that the rise of robots may be disastrous for mankind," and many have lost count of how many similar articles they've seen. Typically, these articles are accompanied by an evil-looking robot carrying a weapon, and they suggest we should worry about robots rising up and killing us because they've become conscious or evil. On a lighter note, such articles are actually rather impressive, because they succinctly summarize the scenario that AI researchers don't worry about, a scenario combining as many as three separate misconceptions: concern about consciousness, concern about evil, and concern about robots.

As you drive down the road, you have a subjective experience of colors, sounds, and other sensations. But does a self-driving car have a subjective experience? Does it feel like anything at all to be a self-driving car? Although the mystery of consciousness is interesting in its own right, it is irrelevant to AI risk. If you are struck by a driverless car, it makes no difference to you whether it subjectively feels conscious. In the same way, what will affect us humans is what superintelligent AI does, not how it subjectively feels.

The fear of machines turning evil is another red herring. The real worry isn't malevolence but competence. A superintelligent AI is by definition very good at attaining its goals, whatever they may be, so we need to ensure that its goals are aligned with ours. Humans don't generally hate ants, but we're more intelligent than they are, so if we want to build a hydroelectric dam and there's an anthill there, too bad for the ants. The beneficial-AI movement wants to avoid placing humanity in the position of those ants.

The consciousness misconception is related to the myth that machines can't have goals. Machines obviously can have goals in the narrow sense of exhibiting goal-oriented behavior: the behavior of a heat-seeking missile is most economically explained as pursuit of a target. If you feel threatened by a machine whose goals are misaligned with yours, it is precisely its goals in this narrow sense that trouble you, not whether the machine is conscious or experiences a sense of purpose. If that heat-seeking missile were chasing you, you probably wouldn't exclaim: "I'm not worried, because machines can't have goals!"
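Goal-directed behavior in this narrow sense takes only a few lines of code and requires nothing resembling consciousness. Here is a toy pursuer that "aims for" a target by repeatedly stepping so as to reduce its distance to it; a simplified stand-in for a guidance loop, not real missile guidance:

```python
import math

def pursue(pos, target, speed=1.0, max_steps=100):
    """Toy goal-directed agent: step toward `target` until it is reached.

    The 'goal' is nothing more than a distance this loop keeps reducing.
    No consciousness or sense of purpose is involved.
    """
    x, y = pos
    tx, ty = target
    for _ in range(max_steps):
        dx, dy = tx - x, ty - y
        dist = math.hypot(dx, dy)
        if dist <= speed:            # within one step: goal reached
            return (tx, ty)
        x += speed * dx / dist       # move one step along the bearing
        y += speed * dy / dist
    return (x, y)                    # ran out of steps

print(pursue(pos=(0.0, 0.0), target=(3.0, 4.0)))  # -> (3.0, 4.0)
```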

My heart goes out to Rodney Brooks and other robotics pioneers who feel unfairly demonized by scaremongering tabloids, because some journalists seem obsessively fixated on robots and adorn many of their articles with evil-looking metal monsters with red, shiny eyes. In fact, the main concern of the beneficial-AI movement isn't robots but intelligence itself: specifically, intelligence whose goals are misaligned with ours. To cause us trouble, such misaligned superhuman intelligence needs no robotic body, merely an internet connection; this might enable it to outsmart financial markets, out-invent human researchers, out-manipulate human leaders, and develop weapons we cannot even understand. Even if building robots were physically impossible, a super-intelligent and super-wealthy AI could easily pay or manipulate many humans to unwittingly do its bidding.

The robot misconception is related to the myth that machines can't control humans. Intelligence enables control: humans control tigers not because we are stronger, but because we are smarter. This means that if we cede our position as the smartest beings on our planet, we might also cede control.

CONTROVERSIES OF INTEREST

If we don't waste time on the misconceptions above, we can focus on the genuine controversies where even the experts disagree. What sort of future do you want? Should we develop lethal autonomous weapons? What would you like to happen with job automation? What career advice would you give today's kids? Do you prefer new jobs replacing the old ones, or a jobless society where everyone enjoys a life of leisure and machine-produced wealth? Further down the road, would you like us to create superintelligent life and spread it through our cosmos? Will we control intelligent machines, or will they control us? Will intelligent machines replace us, coexist with us, or merge with us? What will it mean to be human in the age of artificial intelligence? What would you like it to mean, and how can we make the future be that way? Please add your voice to the discussion!

