Google's philosophy has shifted in recent years from a web focus to mobile, and now to Artificial Intelligence (AI). To help guide their future in this space, Google CEO Sundar Pichai has today announced new ethical standards for the company when it comes to using AI.
Google has been moving forward with applying AI across their suite of products, which has led to fun innovations such as Auto Awesome in Google Photos, as well as work in more serious areas such as health and conservation, which they showed off at their recent AI Stories event here in Sydney. They've even had a brush with divisive applications of AI, such as Project Maven for the Defence Department.
In his blog post today, Mr Pichai laid out the principles Google will use to guide their use of AI on projects:
- Be socially beneficial.
- Avoid creating or reinforcing unfair bias.
- Be built and tested for safety.
- Be accountable to people.
- Incorporate privacy design principles.
- Uphold high standards of scientific excellence.
- Be made available for uses that accord with these principles.
When assessing applications against that last principle, Mr Pichai says Google will evaluate likely uses in light of the following factors:

- Primary purpose and use: the primary purpose and likely use of a technology and application, including how closely the solution is related to or adaptable to a harmful use
- Nature and uniqueness: whether we are making available technology that is unique or more generally available
- Scale: whether the use of this technology will have significant impact
- Nature of Google’s involvement: whether we are providing general-purpose tools, integrating tools for customers, or developing custom solutions
These are fairly comprehensive guidelines for the use of AI. Some in the community will no doubt view them as overly broad, much as the 'Don't be Evil' clause in Google's code of conduct was criticised.
While they've addressed what they WILL use AI for, they've also set out what they won't apply their AI to, including:
- Technologies that cause or are likely to cause overall harm. Where there is a material risk of harm, we will proceed only where we believe that the benefits substantially outweigh the risks, and will incorporate appropriate safety constraints.
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- Technologies that gather or use information for surveillance violating internationally accepted norms.
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.
These have obviously been made in response to the controversy surrounding their involvement in Project Maven, a program run out of the Pentagon for the US Department of Defense that was aimed at improving the analysis of drone imagery.
Mr Pichai was also clear that this doesn't mean an end to Google's work with governments:

> We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue. These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.
Google is looking at AI as a long-term route to solving many problems on multiple fronts, but they also recognise they're not operating in this field alone. To that end, they've said they're going to work with stakeholders 'to promote thoughtful leadership in this area, drawing on scientifically rigorous and multidisciplinary approaches'.
AI can be great, but it can also lead to unwanted outcomes, as hundreds of dystopian movies and books have laid out. Google seems to want to set the principles for their use of AI early, while also recognising that some of these may need to change or be added to. We'll just have to wait and see how all this pans out.