The debate over the use of artificial intelligence in warfare is heating up, with more than 3,000 Google employees recently signing an open letter to CEO Sundar Pichai requesting the termination of Project Maven, a partnership with the Pentagon that uses the firm’s AI to make drone strikes more accurate.
Following Google's corporate restructuring under the conglomerate Alphabet Inc. in October 2015, Alphabet dropped the tech giant's “Don't Be Evil” mantra from its code of conduct.
Project Maven was started last year and focuses on computer vision that autonomously extracts objects of interest from moving or still imagery. It’s an area of AI that technology companies already discuss and implement widely; think of the auto-tagging and object-recognition features applied to pictures on social media sites. Project Maven, however, takes the technology into the world of war.
The letter was circulated internally after Google confirmed that its open-source machine learning software TensorFlow was being used by the Pentagon for a pilot project. The Pentagon first announced Project Maven last May partially as a way to relieve military and civilian intelligence analysts inundated with enormous volumes of video surveillance data.
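For a sense of what this kind of computer vision looks like in practice, here is a minimal, purely illustrative sketch using TensorFlow’s public Keras API and a pretrained ImageNet classifier. It has nothing to do with Maven’s actual codebase, and the image path is a placeholder; it simply shows the “what objects are in this image?” task that the project automates at scale over surveillance footage.

```python
# Illustrative only: off-the-shelf object recognition with TensorFlow's Keras API.
# This is NOT Project Maven code; "example.jpg" is a placeholder image path.
import numpy as np
import tensorflow as tf

# Load a classifier pretrained on ImageNet (1,000 everyday object categories).
model = tf.keras.applications.MobileNetV2(weights="imagenet")

# Load a single image and resize it to the network's expected 224x224 input.
img = tf.keras.preprocessing.image.load_img("example.jpg", target_size=(224, 224))
x = tf.keras.preprocessing.image.img_to_array(img)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x[np.newaxis, ...])

# Predict and print the top three labels with their confidence scores.
preds = model.predict(x)
for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")
```

The same basic pipeline, run continuously over video frames rather than a single photo, is what lets analysts be relieved of manually scanning hours of footage.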
**Signed by 3,100 Google employees, the letter petitions CEO Pichai to officially end the company’s involvement in the project and to draft a company policy stating that neither Google nor its contractors will ever build warfare technology. The message from the employees is clear: we do not want to help the government kill people.**
Google maintains that its participation in the project is "non-offensive," and top brass at the company have come out assuring the public that the technology would not be used to operate drones or launch weapons. That still leaves a lot of hazy definitions: those two use cases have been explicitly ruled out, but there are numerous other ways in which AI can aid destruction. The technology will be used to detect vehicles and other objects, track their motion, and provide results to the Department of Defense, and this could indirectly aid potential aggression and military action. That is not sitting well with Google employees.
“This plan will irreparably damage Google’s brand and its ability to compete for talent,” the letter reads. “Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public’s trust. By entering into this contract, Google will join the ranks of companies like Palantir, Raytheon, and General Dynamics.”
This is a complicated and sensitive issue, compounded by a complete lack of regulation or ethical principles surrounding the development of artificial intelligence. What could go wrong?
AI could be the technology that helps humanity, through innovation and by providing more abundance for the planet. But an "artificially intelligent" military or state actor is exactly the kind of idea that makes people nervous.
An internal revolt is a terrible scenario for Google. As one of the world’s largest and most powerful tech companies, it owes much of its position to one of the best workforces in the industry, and top-level software engineers are hard to find. If Google's new leadership manages to p*ss these engineers off, the company whose former motto was "don't be evil" will no longer be at the forefront of the industry.
Isaac Asimov’s “Three Laws of Robotics”:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.