Israel’s use of AI in Gaza is coming under closer scrutiny

Do the humans in Israel’s army have sufficient control over its technology?


FOR OVER a decade military experts, lawyers and ethicists have grappled with the question of how to control lethal autonomous weapon systems, sometimes pejoratively called killer robots. One answer was to keep a “man in the loop”—to ensure that a human always approved each decision to use lethal force. But in 2016 Heather Roff and Richard Moyes, then writing for Article 36, a non-profit focused on the issue, cautioned that a person “simply pressing a ‘fire’ button in response to indications from a computer, without cognitive clarity or awareness”, does not meaningfully qualify as “human control”.

That nightmarish vision of war, with humans ostensibly in control but shorn of real understanding of their actions, killing in rote fashion, seems to have come to pass in Gaza. This is the message of two reports published by +972 Magazine, a left-wing Israeli news outlet, the most recent one on April 3rd. The Israel Defence Forces (IDF) have reportedly developed artificial-intelligence (AI) tools known as “The Gospel” and “Lavender” to “mark” suspected operatives of Hamas and Palestinian Islamic Jihad, two militant groups, as targets for bombing, according to Israeli officers familiar with the systems.
