Google’s AI commandments: Google has seven rules for AI – and there will never be a ‘Google Gun’
The potential threat of artificial intelligence crops up from time to time, but how seriously it should be taken depends on who you ask.
Elon Musk, the CEO of Tesla and SpaceX, believes AI to be more dangerous than North Korea, while others dismiss the threat as overblown. DeepMind’s own co-founder once called such fears “unsubstantiated hype from people who are smart in their own domains, but don’t work in AI,” but at the very least there seem to be questions to consider about how we implement it in society.
Now Google – which itself owns the aforementioned DeepMind – has provided some clarity with its own rules for AI research. A blog post from the desk of CEO Sundar Pichai seeks to reassure those who fear that Google is falling into the Jurassic Park trap: the company is considering not just what can be done, but what should be done.
“How AI is developed and used will have a significant impact on society for many years to come,” Pichai writes. “As a leader in AI, we feel a deep responsibility to get this right. So today, we’re announcing seven principles to guide our work going forward. These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions.”
For brevity, these are the seven headlines – though Pichai goes into more detail on all of them on the blog.
- Be socially beneficial
- Avoid creating or reinforcing unfair bias
- Be built and tested for safety
- Be accountable to people
- Incorporate privacy design principles
- Uphold high standards of scientific excellence
- Be made available for uses that accord with these principles
At a glance, you may note these sound a little platitudinous – although, as I say, Pichai does go into more detail on the blog itself. All the same, these rules still leave plenty of room for individual interpretation: what counts as a “high standard” of scientific excellence will vary from person to person, for example. It brings to mind Google’s original motto, “Don’t be evil,” which was ultimately dropped – most likely not because it was holding back world domination plans, but because it was just too subjective a principle to govern a multi-billion dollar company.
However, there are a few concrete commitments after the initial list is completed, and this is where things get interesting. Pichai assures readers that the company will not pursue AI technology that causes overall harm – specifically weapons; tech that violates international law and human rights; and “technologies that gather or use information for surveillance violating internationally accepted norms.”
That’s important, but for many it will not be enough, given how deeply unhappy some Google employees were about the company’s controversial Project Maven work with the US military. Google has announced it won’t be renewing the contract, but Pichai was keen to emphasise that the rules for AI would not preclude the company from working with the military on non-combat projects in future. “We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas,” Pichai writes. “These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.”
Will this settle the nerves of those who fear the future of AI? Probably not, but it’s a good conversation starter for the industry at large, coming from a company of Google’s size with plenty of AI involvement already, from Google Assistant to medical applications.
“This approach is consistent with the values laid out in our original Founders’ Letter back in 2004,” Pichai concludes. “There we made clear our intention to take a long-term perspective, even if it means making short-term tradeoffs. We said it then, and we believe it now.”