THE ALGORITHMIC BATTLEFIELD: THE CASE OF ISRAEL

upa-admin | 20 June 2025

Introduction

The Middle East has historically been one of the most complex and volatile regions in terms of inter-state power struggles. The escalating tension between two major regional powers, Israel and Iran, continues to undermine the region's security architecture. Their ongoing hostility, manifested in reciprocal airstrikes and diplomatic accusations, risks escalating into total war, a prospect that commands attention globally and within Turkish public opinion alike.

After the Israeli military began its air campaign against Iran, a series of targeted killings was carried out inside Iranian territory, with high-ranking military officials and nuclear scientists as the primary targets. Compared with Iran's relatively indiscriminate missile and drone strikes on Israeli cities such as Tel Aviv, Israel's operations appear to hit their specific objectives at a markedly higher rate. This effectiveness can largely be attributed to Israel's advanced military technologies, notably the AI-supported Iron Dome air defense system and programmes such as Gospel and Lavender, which play a critical role in real-time decision-making and tactical implementation.

The AI-enhanced Iron Dome contributes to the security of Israel's airspace, while the Gospel and Lavender programmes, likewise reinforced by artificial intelligence, shape decision-making and targeting during conflicts. Nonetheless, in the absence of a legal framework governing the use of new technologies in armed conflict, the deployment of such systems in military operations remains uncertain and contested. The international community should therefore establish shared principles and binding regulations for the use of new technologies in warfare.

The Gospel and Lavender

The AI-based system known as “The Gospel” (Habsora) enhances the efficiency of the Israel Defense Forces (IDF) by accelerating target identification. The system analyses data that other systems have already classified and takes over targeting tasks that were traditionally human-led, drastically increasing operational speed. By drawing on large datasets, including satellite imagery, drone footage, geographical maps, floor plans, and seismic data, the IDF is said to identify targets at double the rate of previous years. Gospel processes intelligence gathered from various sources and proposes target lists to decision-makers. While the successful assassinations of Iranian officials illustrate the system’s potential, it reportedly struggles to detect subterranean infrastructure and equipment. At the same time, the final decision to strike remains with human operators, as Gospel is not authorised to approve lethal actions autonomously. Although the system is claimed to minimise civilian casualties, its automated nature and the potential for incorrect targeting raise ethical and operational concerns. The system reportedly predicts civilian presence with high accuracy, allowing Israeli forces to act more cautiously, yet operators may lack both the time and the information to critically evaluate Gospel’s recommendations. In Gaza in particular, its use has allegedly resulted in high numbers of civilian deaths.

While Gospel is employed primarily for geolocation and infrastructure targeting, another AI-based system, Lavender, functions as a comprehensive database of individuals flagged as potential threats. Lavender stores and cross-references biometric data, photographs, digital footprints, and phone records, employing an individualised tagging system with an estimated margin of error of 10%.
These two systems operate in a complementary fashion: Lavender compiles personal data, while Gospel pinpoints the individual’s location and evaluates the target based on purpose and context. The responsibility for deciding whether to proceed with a strike then rests with the military operator.
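The two-stage pattern described above, automated flagging followed by a mandatory human decision, can be illustrated with a toy sketch. Everything here is hypothetical: the field names, the corroboration rule, and the 0.9 confidence threshold are invented for illustration, and none of it reflects the actual internals of Gospel, Lavender, or any real military system. Only the roughly 10% margin of error comes from the text above.

```python
from dataclasses import dataclass

# Purely illustrative human-in-the-loop decision-support sketch.
# All names, fields, and thresholds are hypothetical assumptions;
# only the ~10% error margin is taken from the article.

FLAG_THRESHOLD = 0.9      # hypothetical model-confidence cut-off
ESTIMATED_ERROR = 0.10    # the ~10% margin of error cited above

@dataclass
class Candidate:
    subject_id: str
    data_sources: int      # independent sources corroborating the record
    confidence: float      # model-assigned score in [0, 1]

def machine_flag(c: Candidate) -> bool:
    """Stage 1 (automated): flag only well-corroborated, high-confidence records."""
    return c.data_sources >= 3 and c.confidence >= FLAG_THRESHOLD

def human_review(c: Candidate, analyst_approves: bool) -> bool:
    """Stage 2 (human): no flagged record proceeds without explicit sign-off."""
    return machine_flag(c) and analyst_approves

candidates = [
    Candidate("A", data_sources=4, confidence=0.95),
    Candidate("B", data_sources=1, confidence=0.97),  # weakly corroborated
    Candidate("C", data_sources=5, confidence=0.70),  # low confidence
]

flagged = [c.subject_id for c in candidates if machine_flag(c)]
# With a 10% error margin, roughly one flag in ten may be wrong,
# which is why the human gate cannot be a rubber stamp.
```

The sketch makes the accountability problem discussed below concrete: even when the human gate is formally required, the quality of the decision depends entirely on whether the operator has the time and information to do more than approve the machine's output.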

Measures taken by Iranian officials, such as Supreme Leader Ayatollah Khamenei’s avoidance of video and audio recordings and reports that he resides in an underground bunker, point to possible countermeasures against such AI-based systems. It can also be presumed that Khamenei maintains extremely limited contact with the outside world, restricting his interactions to a very small circle in order to avoid detection.

Artificial Intelligence and the Just War Doctrine

The application of AI in military operations does not entirely prevent civilian casualties. The International Criminal Court (ICC), the international judicial body responsible for prosecuting war crimes, requires evidence of intent on the part of the accused in order to pursue legal action. The use of AI introduces a complex legal dimension here, since errors stemming from system algorithms can occur irrespective of user intent. The absence of a robust international legal framework regulating AI in warfare adds to the ambiguity surrounding accountability and operational boundaries. In practice, a state actor with access to systems such as Gospel or Lavender could authorise them to decide on strikes without human oversight, and the extent to which operators scrutinise AI-generated target lists remains unclear.

Consider, for instance, the timing of strikes: rather than targeting a subject in a public space during the day, Israel may opt to carry out assassinations at night, when targets are asleep, vulnerable, and surrounded by fewer civilians. This approach has been observed in the assassinations in Iran. Even under such conditions, however, civilian casualties remain possible. These scenarios raise pressing ethical and legal questions about the role of AI in warfare.

Nor is the deployment of AI-driven systems confined to Israel. The armed forces of countries such as the United States and China are developing comparable technologies, and the integration of complementary tools, such as cloud computing, blockchain, and big data, into military infrastructures is becoming increasingly common. While an outright prohibition of such technologies may be impractical, establishing international standards could help ensure that military operations adhere to the principles of just war.

Conclusion

The integration of AI-based systems into military operations is expanding rapidly, with Israel’s Gospel and Lavender representing striking examples of this transformation. These systems, used predominantly for target identification and geolocation in Gaza and Iran, significantly enhance the operational capabilities of the IDF. However, their potential to marginalise human decision-making in targeting raises serious ethical and legal dilemmas. Given the complications that AI introduces for accountability in cases of civilian harm, there is a clear need for an international legal framework addressing such technologies in warfare. The concurrent use of AI with blockchain, cloud computing, and big data, together with the increasing prevalence of cyber warfare, underscores the urgency for the international community to develop binding norms governing the use of emerging technologies in conflict zones.

Dr. Polat ÜRÜNDÜL

 
