Debates about Artificial Intelligence (AI) raise a wide range of operational and ethical concerns. However, the urgency to produce results from AI sometimes means that ethical reviews arrive after the technology is already in production. If we are looking for controls, how do they get introduced?
Building a framework while the work is in progress
In March of this year, the European Group on Ethics in Science and New Technologies (EGE) published a statement entitled “Artificial Intelligence, Robotics and ‘Autonomous’ Systems.”
The EGE took a broad look, with the understanding that development and deployment were already underway. The EGE arrived at three “important and hard moral questions”:
- How can we make a world with interconnected AI and ‘autonomous’ devices safe and secure and how can we gauge the risks?
- Will humans be part of ecosystems of ‘autonomous’ devices as moral ‘crumple zones’, inserted just to absorb liability, or will they be well placed to take responsibility for what they do?
- How should our institutions and laws be redesigned to make them serve the welfare of individuals and society and to make society safe for this technology?
The authors add two governance concerns:
- There are questions regarding democratic decision making, including decision making about institutions, policies and values that underpin all of the questions above.
- There are questions about the explainability and transparency of AI and ‘autonomous’ systems.
Noting that ‘autonomous’ systems are critical elements already in production, the EGE suggests three categories of specific interest:
- Self-driving cars,
- Lethal Autonomous Weapons Systems (LAWS), and
- Bots (e.g., Siri, Alexa, Cortana)
The EGE is calling for “a wide-ranging and systematic public engagement and deliberation on the ethics of AI, robotics and ‘autonomous’ technology and on the set of values that societies choose to embed in the development and governance of these technologies.”
There’s no place like home…
Teja agrees that AI is moving at warp speed. He writes: “Many of the AI entrepreneurs I work with believe there will be more advances in AI in the next five years than there have been in the last 30.” Teja notes that the availability of quantum computers will provide the platform required for complex AI systems.
This is good news for Canada. Teja notes:
As a place with a long history of working with artificial neural networks, Canada has the home-team advantage and is pressing it to the fullest, pouring hundreds of millions of dollars into the sector and relaxing immigration rules to attract the best engineers.
Canada has talent, data, and a five-time-zone world
Teja notes that Canadian universities – specifically those in Toronto, Montreal, Waterloo, and Edmonton – “have been investing heavily in AI research for years, and Canada now has one of the most significant concentrations of AI talent anywhere.”
That talent also retains ties to academia; Teja points to companies “creating partnerships with universities that enable technology-focused researchers to operate with one foot in industry and one in academia.”
Beyond talent, Teja notes that Canada has access to a large, rich trove of data, driven by a diverse population. Since Toronto has a significant proportion of foreign-born residents, multicultural data sets are readily available to train AI platforms for wider use.
Put these elements together and we have a microcosm of the world, spread across five time zones. The result? Teja concludes: “Ultimately, we need to ensure that the future of AI is both ethical and multicultural. To me, that sounds an awful lot like Canada.”
Can we walk and chew gum at the same time?
We have seen rushed implementations in the past. However, the application of AI to control other sophisticated systems raises the bar. It is unlikely we will get to vet all new developments, but the principles from the EGE and the conscious direction Teja argues for seem to be good starts.