Artificial intelligence (AI) has its strengths but also has its weaknesses
For Justice Dr Jamal Al Sumaiti, director-general of the Dubai Judicial Institute (DJI), uncertainty and unsolved questions still surround the work of artificial intelligence (AI) even on the global scale.
“Even if an ideal and perfect AI could be invented, would it be an entity with legal liability? Would we be able to treat it as if it were a persona with legal responsibility and hold it legally accountable when its work goes wrong?” Dr Al Sumaiti raised these questions in an exclusive interview with Khaleej Times on the sidelines of the ‘Shaping the Future of Judicial Knowledge’ workshop. The first edition of the workshop was organised by the DJI in cooperation with the United Nations Interregional Crime and Justice Research Institute under the theme ‘Artificial Intelligence Today and Beyond’.
Justice Al Sumaiti noted that it may take quite some time before a robot could replace a judge, a prosecutor or even an investigator, but said “we have already begun exploring the possibilities and risks involved”.
The two-day workshop was inaugurated by Dr Al Sumaiti in the presence of Taresh Eid Al Mansouri, director of the Dubai Courts and vice-chairman of the DJI.
It drew the participation of leading speakers, legal personnel and experts in the field of AI.
‘AI has strengths but also weaknesses’
Artificial intelligence (AI) has its strengths but also its weaknesses. Hence the importance of the law in setting up a framework with the right mechanisms for attributing responsibility for actions controlled by AI, said an expert.
At the first edition of a workshop organised by the Dubai Judicial Institute (DJI), Minesh Tanna of UK law firm Simmons & Simmons explained why it is important to attribute legal liability when work carried out by artificial intelligence goes wrong.
“Since AI acts autonomously, it is difficult to attribute responsibility to individuals or entities. Hence, there is a need for foreseeability of responsibility if the AI’s work goes wrong. An example is a self-driving vehicle that hits a woman crossing the street while the vehicle is travelling below the speed limit and in the presence of a safety driver, who was not looking at the road at the time of the accident. The question is whom to hold responsible – the AI or the human? Ultimately, whether we should hold humans liable for AI’s actions is a legal and social choice,” Tanna explained.
He added that any solution in law should be consistent with the aims of the rule of law.