The machine bears no responsibility.
AI that deploys weapons cannot be held accountable for its actions, which creates a serious ethical and legal problem. Unlike a human, who can be prosecuted for war crimes or held liable for errors, a machine remains merely a tool, and responsibility falls to its creators, operators, or commanders. In complex, highly autonomous systems, however, it is extremely difficult to determine who is at fault for an error: the programmer, the developer of the algorithm, or the commander who issued a general order. This blurs accountability, can lead to impunity when tragedies occur, and undermines trust in justice.
This lack of accountability also affects the moral dimension of warfare. Human soldiers, aware of the consequences of their actions, may feel remorse or fear punishment, which sometimes restrains them from excessive cruelty. AI, lacking emotions and a moral compass, acts solely on predefined parameters, with no regard for consequences. This can produce situations in which a machine carries out orders that a human would judge immoral or illegal. For example, an AI might strike a densely populated area because the target satisfies its algorithm, whereas a human operator in the same situation might refrain from attacking.