Artificial Intelligence: AI in Defence


Yasmin Underwood

AI in Defence:

In what specific areas within the defence industry is Artificial Intelligence currently being utilized?

A wide range of potential applications for AI is being investigated for defence use, most of which are outlined in the UK MoD’s 2022 Defence Artificial Intelligence Strategy. Understandably, there is a particular interest in protecting people by using AI to automate tasks that we often consider to be too “dull, dirty or dangerous” for humans. This includes low-interaction, high-repetition jobs, missions that might involve potentially harmful material, and explosive ordnance disposal.

From your perspective, what is the most transformative potential of AI in defence?

So far, I’ve only worked on a small selection of the AI applications that could potentially be utilised for defence, most of which are at the research and development stage. That said, personally (and perhaps somewhat controversially), I think it will be the potential to dramatically reduce the loss of human life in war.

Understanding AI and Autonomy:
For the benefit of our readers, could you elucidate the difference between AI and autonomy?

The definition of AI used in defence comes from the UK MoD Defence Artificial Intelligence Strategy, which classifies AI as “a family of general-purpose technologies, which may enable machines to perform tasks normally requiring human or biological intelligence”.
Autonomy, when referring to technology, describes a platform or system, usually containing AI-based components, that can operate independently, making decisions and taking actions without direct human intervention.

Are there any common misconceptions about AI’s role in defence that you’d like to address?

My personal pet peeve is the misconception that AI is already super advanced and will inevitably lead to the development of killer robots.
Most computer scientists will tell you it’s challenging enough to train AI to do a single task efficiently, let alone attempt to program it to have conscious thought. In the defence industry, the emphasis is on leveraging AI for tasks like reconnaissance, logistics and decision support, to make military operations more precise and reduce the risks faced by human soldiers, not on working towards some Terminator-style army to wipe everyone out.

Legal Challenges:
You mentioned the lack of clear legal definitions regarding AI. Could you elaborate on the current ambiguities in the field?

Aside from the issues over responsibility, accountability and bias, I think the most significant ambiguity is the lack of a unified definition of AI. With various countries interpreting AI-related legal concepts differently, there is a risk of inconsistencies in how AI is regulated and treated globally. These inconsistencies could open loopholes that individuals may take advantage of.

Where does the UK currently stand regarding autonomous weapons?

The UK’s current position is that it does not possess fully autonomous weapons and has no plans to develop them. However, it is my understanding that the International Relations and Defence Committee is currently concluding its inquiry into the Government’s ambitions for defence in the Defence Command Paper and the Integrated Review, with a special inquiry committee due to submit a report considering the use of artificial intelligence in weapon systems by the end of November 2023.

Accountability and Transparency:
What is the primary concern that most professionals in the defence industry share about AI?

The concerns I hear raised most often at the research and development stage tend to centre on ethical dilemmas and the potential for unintended consequences. A large percentage of the work I participate in examines risk management and mitigation, which is generally a hot topic at industry events.

Where should the blame lie if an AI system malfunctions or leads to an unintended outcome?

This is a very complex question to answer, and it is really difficult to pinpoint exactly where the blame should lie. In my experience, responsibility typically falls on multiple stakeholders, including the AI developers who created the system, the operator or user who deployed it, and the policymakers who set the guidelines for its use. Just like many other aspects of law, we must assign legal accountability for AI on a case-by-case basis.

Can you shed light on the difficulty of assigning legal responsibility in AI-related mishaps?

The ‘black box’ nature of most modern AI systems makes it almost impossible to trace the exact reasoning behind a system’s predictions or decisions. Determining whether responsibility lies with the developer, operator, or the AI itself depends on factors like design, training data, operational learning, and the specific circumstances of the issue. Existing legal frameworks tend to be based on principles which do not take into account the rapidly evolving complexities of AI.

The ‘Ethics by Design’ Approach:
How does the ‘PlayStation mentality’ impact AI’s use in defence?

The general consensus, as I understand it, is that having a significant geographical and psychological distance between the operator of an autonomous system and the target creates a level of mental and emotional detachment. There is a risk that this desensitisation may ultimately undermine the overall goal, which is to develop responsible and ethical Artificial Intelligence for defence.

You mentioned the ‘ethics by design’ approach. Could you explain what this entails and its potential challenges?

This is a relatively new concept that different organisations are still defining and exploring. It essentially involves embedding ethical principles and considerations into every stage of a system’s design, from conception to deployment. It is also a key aspect of my PhD research, which examines whether there is an operational benefit to maintaining the ‘black box’; whether it is possible to move legal responsibility up the chain to the developer; and, if so, whether there should be rules of development (ROD) as well as rules of engagement (ROE), and what those ROD might be.

How does the responsibility gap relate to issues of bias and discrimination in AI?

I don’t think that the responsibility gap specifically relates to issues of bias and discrimination as such. Rather, it exacerbates these issues by making it challenging to assign responsibility to individuals or organisations for biased outcomes, which results in a lack of accountability and a risk that AI systems will perpetuate discrimination or unequal treatment.

Proposed EU Framework and Other Policies:
The EU has proposed categorizing AI into various risk levels. How might this framework impact the defence industry?

In my opinion, classifying Artificial Intelligence applications into specific risk levels (the proposed framework distinguishes unacceptable-, high-, limited- and minimal-risk systems) is likely to produce a very mixed bag of results for the defence industry. On one hand, the framework has the potential to stimulate innovation in limited- and minimal-risk AI applications, which should lead to the development of safer and more reliable AI solutions. Additionally, high-risk AI systems may be subject to deeper levels of scrutiny, necessitating advanced safety measures and ethical considerations that sit nicely with the Ambitious, Safe, Responsible guidelines set out by the MoD AI ethics panel. There is, of course, a risk that the new framework could slow down the adoption of certain AI technologies in defence, but with the caveat that it would also ensure a more responsible and secure integration of AI into military operations. Ultimately, I think the pros far outweigh the cons!

What are the two primary safeguards introduced by the EU in relation to AI?

Presumption of Causality: if a victim can show that someone was at fault for not complying with a relevant obligation, and that there is a likely causal link with the AI’s performance, the court can presume that this non-compliance caused the damage.
Disclosure of Evidence: victims of AI-related damage can request that the court disclose information about high-risk AI systems. This should help to identify the individuals who may be held liable and provide insight into what went wrong.

Are there any other significant policy papers or global standards shaping the future of AI in defence?

Yes, to give two examples, there is the UK MoD’s ‘Ambitious, safe, responsible: our approach to the delivery of AI-enabled capability’ paper, which looks at the ethical development of AI in conjunction with the Defence AI Strategy. NATO also released an AI strategy paper in 2021.

Conclusion:
As we look ahead, what steps must the defence industry, policymakers, and legal professionals take to address the challenges posed by AI?

The defence industry must continue investing in research and development to remain at the forefront of AI technology for military applications. It must do so in close collaboration with policymakers and legal professionals, to ensure that robust ethical guidelines are established which effectively address the challenges posed by AI.

Lastly, for those interested in learning more or getting involved in this intersection of law and AI in defence, what resources or steps would you recommend?

There are many interesting books on the topic of AI and law in defence. Army of None by Paul Scharre and One Nation Under Drones by Capt John E. Jackson are two personal favourites. Online courses are also fantastic for a little knowledge boost.
If you’re considering a career in this area, there are universities that offer defence-focused degrees, such as King’s College London and Cranfield, as well as internships and graduate schemes at defence companies.

ABOUT THE AUTHOR

Yasmin Underwood is a defence consultant at Araby Consulting and a member of the National Association of Licensed Paralegals (NALP).

The National Association of Licensed Paralegals (NALP) is a non-profit membership body and the only paralegal body recognised as an awarding organisation by Ofqual, the regulator of qualifications in England. Through its Centres around the country, it offers accredited and recognised professional paralegal qualifications for those looking for a career as a paralegal professional.

Web: http://www.nationalparalegals.co.uk

Twitter: @NALP_UK

Facebook: https://www.facebook.com/NationalAssocationsofLicensedParalegals/

LinkedIn: https://www.linkedin.com/company/national-association-of-licensed-paralegals/
