The introduction of artificial intelligence (AI), the ability of machines to make decisions without human input, has recently transformed engineering. AI has paved the way for breakthroughs such as autonomous cars, flying drones, and computer-aided diagnosis. On Jan. 4, Mark Zuckerberg publicly challenged himself to take AI a step further, pledging to build an AI system that would control everything at home, from music, lights, and temperature to his work, helping him visualize data and organize information. However, with the rapid growth of AI, concerns about the ethical use of such machines have also been increasing.
“It is an undeniable fact that technology with AI has already deeply penetrated the fabric of human lives,” said John Kim (12), Robotics Club president. “From the ubiquity of palm-sized smartphones and laptops that can interact with users, to other smart devices at home that perform context-dependent chores, AI is already common around us. With further development, I believe that AI can provide solutions to both mental hardships and physical adversities that we could not address before.”
According to Eric Schmidt, Google Executive Chairman, AI could help solve the world’s “hardest problems,” such as disease, by helping scientists pinpoint their causes. For example, the University of Texas is developing a pen-sized device for detecting skin cancer. Schmidt also predicted that the pairing of computers and robots would free humans from dangerous work in manufacturing factories. Despite the scientific progress that AI has made possible, many are also concerned about the effects an increased AI presence will have on society.
“While AI has been developing, society has also been reshaping itself,” said Iris Jeong (10), debater. “The nature of jobs, especially, has been changing. Machines are expected to take over a large percentage of available jobs, from white-collar to agriculture-related work. I also doubt that humanity will only use AI to help others, especially since AI, if used in the military or in war, holds the power to eliminate the human race or, at the very least, kill millions of people at once, since AI is more powerful than nuclear bombs.”
Likewise, prominent figures such as Stephen Hawking and Steve Wozniak have warned against the use of AI in the military. They have asserted that AI is humanity’s biggest existential threat and that it could spell the end of the human race. According to the Guardian, autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms, and could cause more casualties on the battlefield than ever before. Moreover, Nick Bostrom, an Oxford University philosopher, argued that a self-improving AI could enslave or exterminate humans if it wanted to, and that such machines could not be controlled once they started killing.
“Obviously, there is a slight possibility that AI will start dictating to the human race, along with a looming possibility that one day people will start using AI in the military when given the chance,” said Yoon Lim, Model United Nations member. “Such a menacing situation can be prevented if we do not develop such autonomous weapons at all and instead focus on beneficial AI that can help those in need.”