The Implications of AI for Defence and Security

  29 February 2024

Abstract

This paper explores the integration of artificial intelligence (AI) into military operations, focusing on its applications, challenges, and ethical considerations. AI, with its predictive capabilities and autonomous systems, offers significant strategic advantages in areas such as reconnaissance, surveillance, and cybersecurity. However, the deployment of autonomous weapon systems (AWS) raises profound ethical concerns regarding decision-making autonomy, biases, transparency, and accountability. Moreover, ensuring compliance with international humanitarian law poses challenges, as AWS blur the lines of responsibility and adherence to established principles. The paper aims to analyze these efficiency-enhancing technological advancements while outlining the issues that arise with the further development of such tools.

Authors

Jaohara Hatabi - Senior Researcher, G.E.O. - Politica

Alessandro Moretti - Junior Researcher, G.E.O. - Politica

Introduction

Artificial intelligence (AI) undoubtedly represents one of the most revolutionary frontiers of contemporary technology. Designed to emulate human intelligence through sophisticated algorithms and computational models, AI has permeated multiple sectors, from healthcare to industrial production, from finance to entertainment. Through machine learning, artificial neural networks, and natural language processing, AI demonstrates an extraordinary ability to analyze data, recognize patterns, and make autonomous decisions. Naturally, a tool with such great potential has also been studied and evaluated for possible applications in the military field. AI could become one of the pillars on which the armies of the future will be based, given its predictive capabilities and its greater accuracy compared to more 'traditional' systems.

AI in Defence and Security: An Overview

Artificial intelligence is now increasingly integrated into military operations, offering considerable capabilities and strategic advantages. One of the main applications of AI in the military domain to date is Autonomous Weapon Systems (AWS), which can perform various tasks without the need for direct human intervention.

A RAND study by Andrew Lohn and Timothy R. Gulden highlights how AI can improve military operations by performing tasks such as reconnaissance, surveillance, and target identification. For example, unmanned aerial vehicles (UAVs) can use algorithms to autonomously gather information, analyze data, and identify potential threats on the battlefield. This capability not only reduces the risk to human personnel but also increases the efficiency and effectiveness of military operations.

Furthermore, AI-powered predictive analysis plays a crucial role in military decision-making processes. By analyzing vast amounts of data in a very short time, AI algorithms can provide commanders with valuable information and predictions. For instance, predictive analysis can help anticipate enemy movements, identify emerging threats, and optimize resource allocation.
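As a deliberately simple illustration of the predictive idea described above, the sketch below extrapolates a linear trend from a series of past observations. It is a toy stand-in for the far richer forecasting models the text refers to; the function name and all data are invented for illustration.

```python
# Minimal sketch: ordinary-least-squares trend extrapolation as a toy
# predictive model. All figures below are invented for illustration.

def linear_trend_forecast(series: list[float], steps_ahead: int = 1) -> float:
    """Fit y = a + b*x by least squares over the series, then extrapolate."""
    n = len(series)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(series) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, series))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return intercept + slope * (n - 1 + steps_ahead)

# Hypothetical daily counts of detected incursions along a monitored sector.
observed = [2, 3, 5, 4, 6, 7]
print(round(linear_trend_forecast(observed), 2))  # → 7.8
```

A real system would of course combine many heterogeneous data sources and far more sophisticated models; the point here is only the basic shape of "past observations in, forecast out".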

From a strictly operational perspective, AI enables the development of autonomous weapon systems that can independently select and engage targets based on criteria predefined by the programmer. While these systems offer potential advantages in terms of speed, accuracy, and response time, they also raise serious ethical and legal issues.

A further important field of application of artificial intelligence is cybersecurity. AI technologies enhance cyber-defence capabilities, helping to protect networks and systems from cyber threats and attacks. Machine learning algorithms can detect anomalous patterns, identify potential vulnerabilities, and respond to cyber incidents in real time. Analysts and cybersecurity experts are already discussing how AI can strengthen cyber-defence strategies by enabling proactive threat hunting, automated incident response, and adaptive security measures. As cyber threats continue to evolve and proliferate, AI-powered cybersecurity solutions play a critical role in protecting military assets and ensuring their robustness and resilience.

In conclusion, the application of AI in the military domain encompasses a wide range of capabilities, from autonomous systems and predictive analysis to autonomous weapon systems and cybersecurity. While it offers significant benefits in terms of efficiency, effectiveness, and strategic advantage, the integration of AI inevitably raises important ethical and legal considerations.
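The anomaly-detection idea mentioned in this section can be sketched very simply: learn a baseline from normal activity and flag observations that deviate strongly from it. The z-score test below is a deliberately minimal stand-in for production machine-learning detectors, and all traffic figures are invented for illustration.

```python
# Minimal sketch of baseline anomaly detection: flag values that lie far
# from the mean of known-normal observations. Invented data throughout.
import statistics

def flag_anomalies(baseline: list[float], new_values: list[float],
                   threshold: float = 3.0) -> list[bool]:
    """Return True for each new value lying more than `threshold`
    standard deviations from the baseline mean."""
    mean = statistics.fmean(baseline)
    stdev = statistics.stdev(baseline)
    return [abs(v - mean) / stdev > threshold for v in new_values]

# Hypothetical requests-per-minute observed on a monitored host.
baseline = [100, 104, 98, 102, 101, 99, 103, 97]
print(flag_anomalies(baseline, [105, 260]))  # → [False, True]
```

Real intrusion-detection systems model many features at once and adapt over time, but the underlying contrast between a learned baseline and a suspicious deviation is the same.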

Ethical and Legal Challenges

Autonomous Weapon Systems pose significant challenges within the framework of international humanitarian law (IHL), as well as raise ethical concerns. The ethical implications of deploying AWS in armed conflict demand careful consideration by the international community.

Some of the most pressing ethical concerns include:

  • The delegation of life-and-death decisions to machines, which can select targets, engage in combat, and potentially cause fatalities.
  • Potential biases and the resulting discrimination – based on factors such as race or gender – stemming from the training data of AI algorithms.
  • Transparency: some AI models operate as “black boxes”, making it difficult to understand their decision-making processes. Ethical guidelines demanding transparency and explainability could help clarify how and why an AWS arrived at a particular decision.
  • Human agency and accountability, which are essential for holding human operators responsible for their actions. When machines operate independently, making decisions without direct human intervention, ensuring accountability becomes far more complex.
  • The humanitarian impact, which can be particularly severe; for this reason, it is essential to strike a balance between military objectives and civilian protection.

Ethical concerns and compliance with international law go hand in hand: any deployment of AWS must align with IHL principles, uphold human rights, and minimize harm.

Compliance with IHL serves as a critical benchmark for assessing the acceptability of AWS. However, the extent to which existing IHL rules limit the development and use of AWS remains a subject of debate and exploration. The International Committee of the Red Cross (ICRC) defines an autonomous weapon system as follows:

Any weapon system with autonomy in its critical functions—that is, a weapon system that can select (search for, detect, identify, track or select) and attack (use force against, neutralize, damage or destroy) targets without human intervention.

In this regard, the ICRC has suggested that States determine the limits on human control needed for AWS to carry out attacks in compliance with IHL and with ethical considerations in mind. While AWS are not specifically regulated by IHL treaties, their use by a commander or operator must respect the core principles governing the conduct of hostilities: distinction, proportionality, and precautions in attack. The principle of distinction requires recognizing and differentiating between military objectives and civilian objects, between combatants and civilians, and between active combatants and those hors de combat. The rule of proportionality requires determining whether an attack may cause incidental civilian casualties or damage to civilian objects that would be excessive in relation to the anticipated military advantage.
Lastly, the principle of precaution entails canceling or suspending an attack in case it becomes apparent that the target is not a military objective or that the attack itself might violate the rule of proportionality.

The set of rules outlined above creates obligations for the human combatants who use AWS to carry out attacks; they are responsible for respecting those rules and will be held accountable for any violations. Legal limits and debates revolve around human–machine interaction, existing IHL rules, state responsibility, and individual criminal responsibility. The United Nations and the ICRC emphasize that machines autonomously targeting humans cross a moral line, and advocate that such systems be prohibited under international law. In 2019, United Nations Secretary-General António Guterres stated that “machines with the power and discretion to take lives without human involvement are politically unacceptable, morally repugnant and should be prohibited by international law.” He subsequently called on Member States to agree, by 2026, on a legally binding instrument prohibiting lethal autonomous weapon systems that function without any human control or oversight. In summary, the development and use of AWS require a delicate balance between technological advancement and adherence to established legal norms.

Conclusions

In conclusion, the integration of artificial intelligence into military operations presents a paradigm shift in warfare, offering unparalleled capabilities in terms of efficiency, effectiveness, and strategic advantage. However, this advancement is not without its challenges, particularly in the realms of ethics and legality. The deployment of autonomous weapon systems (AWS) raises profound ethical concerns regarding the delegation of life-and-death decisions to machines, potential biases in decision-making processes, transparency issues, and questions of human agency and accountability. Moreover, ensuring compliance with international humanitarian law (IHL) presents a complex task, as AWS blur the lines of responsibility and adherence to established principles of distinction, proportionality, and precaution. While the development and use of AWS hold promise for military operations, there is a pressing need for international consensus on legal frameworks governing their deployment. The proliferation of machines with autonomous lethal capabilities without human oversight is morally challenging and warrants urgent action to establish binding international regulations. Therefore, as AI continues to evolve, it is imperative to navigate the delicate balance between technological advancement and the preservation of ethical and legal norms to ensure a safer and more secure future for all.



Source classification:

  1. Confirmed: confirmed by other independent sources; logical in itself; coherent with other information on the topic.
  2. Presumably true: not confirmed; logical in itself; coherent with other information on the topic.
  3. Possibly true: not confirmed; reasonably logical in itself; coherent with some other information on the topic.
  4. Uncertain: not confirmed; possible but not logical in itself; no other information on the topic.
  5. Improbable: not confirmed; not logical in itself; contradicts other information on the topic.
  6. Not able to be evaluated: no basis to evaluate the validity of the information.

Trustworthiness of the source:

  A. Trustworthy: no doubt about authenticity, reliability, or competence; has a history of complete trustworthiness.
  B. Normally trustworthy: minor doubts about authenticity, reliability, or competence; nevertheless has a history of valid information in the majority of cases.
  C. Sufficiently trustworthy: doubts about authenticity, reliability, or competence; however, has supplied valid information in the past.
  D. Normally not trustworthy: significant doubts about authenticity, reliability, or competence; however, has supplied valid information in the past.
  E. Not trustworthy: lack of authenticity, reliability, or competence; history of invalid information.
  F. Not able to be evaluated: no basis to evaluate the validity of the information.


All rights reserved Ⓡ


