If you want to see some truly horrifying consequences of relying on AI to make decisions that humans should be forced to make for themselves, +972 Magazine has some remarkable reporting about "Lavender", the system being used by the IDF to decide which Palestinians should be targeted for assassination:
https://www.972mag.com/lavender-ai-israeli-army-gaza/
It's a long read and a very unsettling one. Some excerpts:
The Lavender software analyzes information collected on most of the 2.3 million residents of the Gaza Strip through a system of mass surveillance, then assesses and ranks the likelihood that each particular person is active in the military wing of Hamas or PIJ. According to sources, the machine gives almost every single person in Gaza a rating from 1 to 100, expressing how likely it is that they are a militant.
“At its peak, the system managed to generate 37,000 people as potential human targets,” said B. “But the numbers changed all the time, because it depends on where you set the bar of what a Hamas operative is. There were times when a Hamas operative was defined more broadly, and then the machine started bringing us all kinds of civil defense personnel, police officers, on whom it would be a shame to waste bombs. They help the Hamas government, but they don’t really endanger soldiers.” One source who worked with the military data science team that trained Lavender said that data collected from employees of the Hamas-run Internal Security Ministry, whom he does not consider to be militants, was also fed into the machine. “I was bothered by the fact that when Lavender was trained, they used the term ‘Hamas operative’ loosely, and included people who were civil defense workers in the training dataset,” he said.
The sources said that the approval to automatically adopt Lavender’s kill lists, which had previously been used only as an auxiliary tool, was granted about two weeks into the war, after intelligence personnel “manually” checked the accuracy of a random sample of several hundred targets selected by the AI system. When that sample found that Lavender’s results had reached 90 percent accuracy in identifying an individual’s affiliation with Hamas, the army authorized the sweeping use of the system. From that moment, sources said that if Lavender decided an individual was a militant in Hamas, they were essentially asked to treat that as an order, with no requirement to independently check why the machine made that choice or to examine the raw intelligence data on which it is based.
There's a lot more horrifying detail in the article, not all of it directly related to the use of AI: the "acceptable" threshold of civilian casualties when targeting an operative, for example, and the minimal reconnaissance performed to confirm that targets were even present in the residential buildings that were bombed (at night, when their entire families were home). Of course, the IDF has refused to officially confirm much of the article's content, but it certainly does more to explain their kill counts and wanton destruction of infrastructure than any official explanation ever has.