The Israeli military has been leveraging artificial intelligence (AI) to enhance the precision of its strikes against militants in its five-month-long conflict with Hamas. Despite these technological advancements, the rising death toll in Gaza, with a significant number of civilians among the roughly 30,000 reported dead, has sparked debate over the effectiveness and ethical implications of AI in warfare.
Toby Walsh, a leading scientist at the University of New South Wales AI Institute in Australia, told AFP that the high civilian casualties raise questions about either the Israel Defense Forces' (IDF) indifference to collateral damage or the actual effectiveness of the AI technologies they claim to use.
While the health ministry does not disclose how many of the Gaza dead were militants, Israel asserts that its military operations have neutralized 10,000 fighters since hostilities began in early October, following a major attack by Hamas.
The integration of AI-driven technologies, such as drones and advanced targeting systems, into military operations has raised ethical concerns. Although the Israeli military claims it targets only combatants and works to minimize civilian harm, its reliance on AI for targeting decisions has not been publicly detailed.
Israel has been promoting its use of AI for military targeting since the May 2021 conflict, which it described as the "first AI war." Aviv Kochavi, the military chief at the time, highlighted AI's role in identifying daily targets at a scale previously unachievable, marking a significant shift in operational capabilities.
The escalation in Gaza began with a Hamas attack on October 7 that caused substantial Israeli casualties. Subsequent reports from the Israeli military credited an AI system named Gospel with identifying more than 12,000 targets in a brief period, with the stated aim of striking Hamas infrastructure precisely while minimizing collateral damage. Critics, however, suggest the system, described as a "mass assassination factory" by a former Israeli intelligence officer, may not be as infallible as portrayed, citing concerns about the quality of the data it processes and the assumptions built into it.
Experts caution against overreliance on data-driven targeting, arguing that the complexity and ethical implications of automated warfare necessitate a critical assessment of the technology’s current capabilities and limitations. The debate underscores ongoing tensions between the potential for AI to reduce civilian casualties and the risk of exacerbating conflicts through depersonalized, data-driven decision-making processes.
As military forces worldwide explore AI’s potential, the balance between technological innovation and ethical responsibility remains a pivotal challenge, highlighting the importance of human oversight in automated warfare systems.