Project Maven was risky for Google from the beginning. The involvement of one of the world's most powerful technology companies in a military project to improve the accuracy of US Department of Defense strikes carried too many ethical implications.
But last Friday Diane Greene, a senior executive at the company, said the firm would not renew the contract, which ends in March 2019.
The controversy had grown in recent months. In early April, more than 3,000 of the company's employees asked it in an open letter to cancel the project.
In it, they expressed their fears about what they considered the first major step in the use of artificial intelligence for lethal purposes:
"We believe that Google should not be in the business of war," they wrote, adding that the initiative ignored the company's "ethical and moral responsibility." And in mid-May, about a dozen employees resigned in protest at the Maven Project.
Google appears to have taken note: according to Greene, the company will not extend the contract because of the strong negative reaction it provoked.
An image problem
Despite the statements from Greene, who heads the company's cloud business, some remain doubtful.
Kate Conger, a journalist at the technology news site Gizmodo, told the BBC she believes Google will continue to work with the military despite the controversy.
Greene said the contract with the Pentagon will bring Google about $9 million, a relatively small sum, but many think it could lead to much greater cooperation in the future.
The Maven Project involves the use of machine learning and engineering talent to distinguish people and objects in drone footage.
It also involves the creation of open-source software (software released under a license that permits its use, modification and redistribution) and machine learning algorithms.
The ultimate goal is to create a sophisticated system capable of monitoring entire cities.
According to a report published by Gizmodo, citing three internal Google sources, the main problem is that the company's leading role in developing these technologies came into deep conflict with military interests.
A series of internal emails, to which the American newspaper The New York Times also had access, suggests that some executives saw the contract as a huge opportunity, while others were concerned about how involvement in such activities would be perceived if it were made public.
But that was not all.
"In order to carry out the Maven Project, Google Cloud (the company's cloud services division) was facing a challenge," the Gizmodo report reads.
"The company would need to use images collected by military drones to build its machine learning models, but it lacked the official government authorization required to store sensitive data of that type in its cloud," the report adds.
That authorization, known as FedRAMP, establishes the security standards for cloud services contracted by the government. Google did not have it, so it had to rely on other geospatial imagery for the initial phases of the project.
However, the report continues, at the end of March of this year, Google's director of Security, Trust and Privacy, Suzanne Frey, announced that the company had obtained a "provisional" authorization.
That authorization was essential not only for the Maven Project but also for obtaining new government contracts.
Google secured the contract in September and to date has dedicated more than ten positions to the project. Among the companies that competed for it were IBM, Amazon and Microsoft.
One clause of the contract stipulated that the name of the collaborating company would not be mentioned publicly without its permission.
The legal document says that "Maven is a large government program that will result in greater security for citizens and nations through the rapid identification of evils such as violent extremist activity or human rights abuses."
It also asserts that the project "will bring improvements in safety and security throughout the world."
But Google has recently faced problems related to data privacy that could damage its image, and this collaboration could aggravate them.
The company plans to publish a document next week on its ethical principles for the use of artificial intelligence, in which it is expected to clarify its position on the matter.