Dangers of AI
From virtual assistants like Siri to self-driving cars, artificial intelligence is rapidly taking over the tech industry. Many science-fiction movies portray AI as evil robots with human-like characteristics, when in reality AI can be anything from Google's search algorithms to an autonomous weapon.

The artificial intelligence in use today is commonly known as narrow AI (or weak AI) because it is designed to perform a single, narrow task. The term reflects an approach to AI research and development built on the view that AI is, and will always be, a simulation of human cognitive function: computers can only appear to think and are not actually conscious in any sense of the word. Weak AI is one of the most basic forms of AI because it handles only one specific task, such as playing chess. Siri is a good example: it operates within a limited, pre-defined range of functions, bringing several narrow AI techniques to the capabilities of an iPhone, but it has no actual intelligence or self-awareness despite being a sophisticated piece of software.

The goal of many researchers, however, is to create general AI (AGI, or strong AI). While narrow AI outperforms humans at its one specific task, such as playing chess or solving equations, AGI would outperform humans at nearly every task. Strong AI describes a mindset of AI development whose goal is to build machines whose intellectual capability is functionally equal to a human's. Many experts doubt that AGI will ever be possible, and many others question whether it would even be desirable. Stephen Hawking, for example, warned: "It [strong AI] would take off on its own, and re-design itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn't compete, and would be superseded."

Most researchers agree that a superintelligent AI is unlikely to have human emotions such as love or hate, and that there is no reason to expect an AI to become intentionally kind or malevolent. When considering how AI might become a risk, experts see two scenarios as most likely: the AI is programmed to do something devastating, or the AI is programmed to do something beneficial but develops a destructive method for achieving its goal. The second scenario can arise whenever someone fails to align the AI's goals with our own, which is very difficult to do. If you ask an obedient intelligent car to take you to the airport as fast as possible, you might arrive there chased by helicopters and covered in vomit. If a superintelligent system is tasked with an ambitious bioengineering project, it might wreak havoc on our ecosystem as a side effect and view human attempts to stop it as a threat.

Autonomous weapons are artificial intelligence systems programmed to kill. In the wrong hands, these weapons could easily cause mass casualties, and they could spark an AI arms race that escalates into an AI war, again producing mass casualties. This risk exists even with narrow AI, but it grows as levels of AI intelligence and autonomy increase.