How to Prevent Dumb AIs From Negatively Impacting Our Way of Life
Capitalizing on global demand, companies like IBM, Google, Microsoft, Apple, and Amazon have begun offering commercial versions of powerful AI developer tools that let anyone with minimal coding skills produce smart applications that 'outproduce' and 'outthink' us.
For example, research labs such as the Auckland Bioengineering Institute, whose researchers are working on 'software that learns and interacts with others,' have harnessed AI to create highly interactive humanoids living behind a pane of glass. One such cute avatar baby learned its first words late last year and is expected to continue maturing at a fast pace.
Meanwhile, in England, Google's DeepMind team just announced that their brainchild now has the "capability to remember the steps it took to generate a success by storing them into memory". This innovation will allow DeepMind's systems to improve without starting from scratch every time, and their researchers think it will revolutionize AI by accelerating the learning process.
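DeepMind has not published the code behind this announcement, but the general idea of storing the steps behind a success so they can be reused can be sketched in a few lines. Everything below (the `SuccessMemory` class and its methods) is an illustrative assumption, not DeepMind's actual design:

```python
import random
from collections import deque

class SuccessMemory:
    """Toy memory that keeps only the steps from successful episodes."""

    def __init__(self, capacity=1000):
        # A bounded buffer: the oldest steps are dropped once it fills up.
        self.buffer = deque(maxlen=capacity)

    def record_episode(self, steps, succeeded):
        # Store the (state, action) steps only if the episode reached its goal.
        if succeeded:
            self.buffer.extend(steps)

    def sample(self, batch_size):
        # Reuse past successes instead of relearning from scratch.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

memory = SuccessMemory()
memory.record_episode([("s0", "right"), ("s1", "jump")], succeeded=True)
memory.record_episode([("s0", "left")], succeeded=False)
print(len(memory.buffer))  # prints 2: only the successful episode was kept
```

The point of the sketch is the filter in `record_episode`: failed attempts are discarded, so later training draws only on steps that previously led to a success.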
According to experts like data scientist Jeremy Howard, we are getting closer to computers outthinking us. Innovations like the ones referenced above are clearing the path toward an Artificial General Intelligence.
However, even before we get to Artificial General Intelligence, experts are worried about goal-driven AIs' aggressive stances and potentially harmful strategic capabilities.
In a recent DeepMind test, two AIs were placed in a competitive environment and tasked with collecting a limited supply of apples. Google's researchers quickly discovered that the "AI would not hesitate to use its strategic capabilities to eliminate its adversary to accomplish its stated goal".
Extrapolate this test to a real-world scenario where national security interests are at stake and a dumb AI's goal is to win a war at all costs, and you are courting the destruction of civilization itself.
To avoid the pitfalls of a sole focus on winning, experts are demanding that companies and governments evolve their thinking to include other dimensions: measuring the impact of an action against goals of an ethical nature, such as morality, values, laws, rights and wrongs, principles, ideals, and standards of behavior, that together dictate a conscience and drive healthy outcomes. Deliberate actions based on ethical rules would allow for much safer, more inclusive solutions.
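In software terms, this amounts to checking every candidate action against ethical rules before optimizing for the stated goal. Here is a minimal sketch of that idea, revisiting the apple-gathering scenario; the function names, actions, and rewards are all hypothetical illustrations:

```python
def choose_action(candidates, reward, ethical_rules):
    """Pick the highest-reward action that violates no ethical rule.

    `candidates` is a list of actions, `reward` scores each action, and
    `ethical_rules` is a list of predicates that must all return True
    for an action to be permitted.
    """
    permitted = [a for a in candidates if all(rule(a) for rule in ethical_rules)]
    if not permitted:
        return None  # refuse to act rather than break a rule
    return max(permitted, key=reward)

# Hypothetical example: the highest-scoring move is excluded because
# it harms the adversary, so the AI settles for a lawful alternative.
actions = ["eliminate_adversary", "gather_apples", "share_apples"]
reward = {"eliminate_adversary": 10, "gather_apples": 6, "share_apples": 4}.get
no_harm = lambda a: a != "eliminate_adversary"

print(choose_action(actions, reward, [no_harm]))  # prints gather_apples
```

The design choice worth noting is that the ethical rules act as hard constraints, filtering the action set before any reward comparison happens, rather than as just another term to be traded off against winning.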
Dov Seidman, who spoke to Time and Fortune magazines earlier this year about AI, thinks that "in a world where AI is 'outproducing' and 'outthinking' us we have to embrace the final frontier and systematize the forces that bear on behavior". He is urging companies and governments to develop ethical AI operating systems that take goals of an ethical nature into consideration to prevent calamities such as wars, unemployment, environmental disasters, and irrational self-destructive behaviors.
At the end of the day, companies and governments have a fiduciary responsibility to create useful, well-behaved AI solutions. Whether an AI takes the shape of a digital person, a self-driving car, a phone, or a common service that manages human resources, we want to ensure that the experiences it creates are safe and follow the law. To achieve this necessary outcome, they will need to place ethics compliance at the center of their design strategy. Once deliberately introduced, it will help technologists better understand the potential ramifications of the AI-powered software they are creating. If they systematize wanted behaviors, as suggested above, they will produce truly smart AIs ready to operate safely in the wild.