Engaging with the tech community is not “a nice to have” sideline for defence policymakers – it is “absolutely indispensable to have this community engaged from the outset in the design, development and use of the frameworks that will guide the safety and security of AI systems and capabilities”, said Gosia Loy, co-deputy head of the UN Institute for Disarmament Research (UNIDIR).

Speaking at the recent Global Conference on AI Security and Ethics hosted by UNIDIR in Geneva, she stressed the importance of erecting effective guardrails as the world navigates what is frequently called AI’s “Oppenheimer moment” – in reference to Robert Oppenheimer, the US nuclear physicist best known for his pivotal role in creating the atomic bomb.

Oversight is needed so that AI developments respect human rights, international law and ethics – particularly in the field of AI-guided weapons – to guarantee that these powerful technologies develop in a controlled, responsible manner, the UNIDIR official insisted.

Flawed tech

AI has already created a security dilemma for governments and militaries around the world.

The dual-use nature of AI technologies – where they can be used in civilian and military settings alike – means that developers could lose touch with the realities of battlefield conditions, where their programming could cost lives, warned Arnaud Valli, Head of Public Affairs at Comand AI.

The tools are still in their infancy but have long fuelled fears that they could be used to make life-or-death decisions in a war setting, removing the need for human decision-making and responsibility. Hence the growing calls for regulation, to avoid mistakes that could lead to disastrous consequences.

“We see these systems fail all the time,” said David Sully, CEO of the London-based company Advai, adding that the technologies remain “very unrobust”.

“So, making them go wrong is not as difficult as people sometimes think,” he noted.

A shared responsibility

At Microsoft, teams are focusing on the core principles of safety, security, inclusiveness, fairness and accountability, said Michael Karimian, Director of Digital Diplomacy.

The US tech giant founded by Bill Gates places limitations on real-time facial recognition technology used by law enforcement that could cause mental or physical harm, Mr. Karimian explained.

Clear safeguards must be put in place and firms must collaborate to break down silos, he told the event at UN Geneva.

“Innovation isn’t something that just happens within one organization. There is a responsibility to share,” said Mr. Karimian, whose company partners with UNIDIR to ensure AI compliance with international human rights.

Oversight paradox

Part of the equation is that technologies are evolving so fast that countries are struggling to keep up.

“AI development is outpacing our ability to manage its many risks,” said Sulyna Nur Abdullah, who is strategic planning chief and Special Advisor to the Secretary-General at the International Telecommunication Union (ITU).

“We need to address the AI governance paradox, recognizing that regulations sometimes lag behind technology makes it a must for ongoing dialogue between policy and technical experts to develop tools for effective governance,” Ms. Abdullah said, adding that developing countries must also get a seat at the table.

Accountability gaps

In 2013, renowned human rights expert Christof Heyns warned in a report on Lethal Autonomous Robotics (LARs) that "taking humans out of the loop also risks taking humanity out of the loop".

Today it is no less difficult to translate context-dependent legal judgments into a software programme and it is still crucial that “life and death” decisions are taken by humans and not robots, insisted Peggy Hicks, Director of the Right to Development Division of the UN Human Rights Office (OHCHR).

Mirroring society

While big tech and governance leaders largely see eye to eye on the guiding principles of AI defence systems, the ideals may be at odds with the companies’ bottom line.

“We are a private company – we look for profitability as well,” said Comand AI’s Mr. Valli.

“Reliability of the system is sometimes very hard to find,” he added. “But when you work in this sector, the responsibility could be enormous, absolutely enormous.”

Unanswered challenges

While many developers are committed to designing algorithms that are "fair, secure, robust", according to Mr. Sully, there is no road map for implementing these standards – and companies may not even know what exactly they are trying to achieve.

These principles “all dictate how adoption should take place, but they don’t really explain how that should happen,” said Mr. Sully, reminding policymakers that “AI is still in the early stages”.

Big tech and policymakers need to step back and consider the bigger picture.

“What is robustness for a system is an incredibly technical, really challenging objective to determine and it’s currently unanswered,” he continued.

No AI ‘fingerprint’

Mr. Sully, who described himself as a “big supporter of regulation” of AI systems, used to work for the UN-mandated Comprehensive Nuclear-Test-Ban Treaty Organization in Vienna, which monitors whether nuclear testing takes place.  

But identifying AI-guided weapons, he says, poses a whole new challenge which nuclear arms – bearing forensic signatures – do not.

“There is a practical problem in terms of how you police any sort of regulation at an international level,” the CEO said. “It’s the bit nobody wants to address. But until that’s addressed… I think that’s going to be a huge, huge obstacle.”

Future safeguarding

The UNIDIR conference delegates insisted on the need for strategic foresight to understand the risks posed by the cutting-edge technologies now emerging.

Future developers "should be aware of what they are doing with this powerful technology and what they are building", insisted Mozilla's Mr. Elias, whose firm trains the next generation of technologists.

Academics like Moses B. Khanyile of Stellenbosch University in South Africa believe universities also bear a “supreme responsibility” to safeguard core ethical values.

The interests of the military – the intended users of these technologies – and governments as regulators must be “harmonised”, said Dr. Khanyile, Director of the Defence Artificial Intelligence Research Unit at Stellenbosch University.

“They must see AI tech as a tool for good, and therefore they must become a force for good.”

Countries engaged

Asked what single action they would take to build trust between countries, diplomats from China, the Netherlands, Pakistan, France, Italy and South Korea also weighed in.

“We need to define a line of national security in terms of export control of hi-tech technologies”, said Shen Jian, Ambassador Extraordinary and Plenipotentiary (Disarmament) and Deputy Permanent Representative of the People’s Republic of China.

Pathways for future AI research and development must also draw on other emerging fields such as physics and neuroscience.

“AI is complicated, but the real world is even more complicated,” said Robert in den Bosch, Disarmament Ambassador and Permanent Representative of the Netherlands to the Conference on Disarmament. “For that reason, I would say that it is also important to look at AI in convergence with other technologies and in particular cyber, quantum and space.”

 

Source of original article: United Nations (news.un.org). Photo credit: UN. The content of this article does not necessarily reflect the views or opinion of Global Diaspora News (www.globaldiasporanews.com).
