What Is AI-Triggering?
AI-triggering is a technology that enables artificial intelligence to autonomously make decisions about the use of lethal weapons. It contradicts the fundamental principles of humanism and the customs of war, and it threatens to drastically increase the scale of violence and human suffering. AI-triggering must be banned through an international convention.
What are the problems with AI-triggering?
  • The machine can get out of control.
    Complex algorithms, even when thoroughly tested, can encounter unforeseen situations in which their behavior becomes unpredictable. A coding error, a sensor failure, or a misinterpretation of the environment could lead the AI to attack at the wrong moment or against the wrong target. Similar failures have already occurred in less critical systems, such as autonomous vehicles, where malfunctions have caused accidents. With weapons, the consequences could be catastrophic, including mass casualties or the escalation of a conflict.

    There is also the risk of external interference, such as cyberattacks. Hackers could seize control of an AI system, altering its objectives or disabling safety mechanisms. Even without malicious intent, AI could spiral out of control due to excessive autonomy if its creators fail to account for all possible scenarios. For instance, a system programmed to neutralize threats might mistakenly classify civilians as enemies due to flaws in data or recognition algorithms. The absence of human oversight in such situations makes AI a dangerous tool that could act contrary to the intentions of its developers.
  • The machine bears no responsibility.
    AI that deploys weapons cannot be held accountable for its actions, creating a serious ethical and legal problem. Unlike a human, who can be prosecuted for war crimes or errors, a machine remains merely a tool, with responsibility falling to its creators, operators, or commanders. However, in complex systems with high autonomy, determining who is at fault for an error—whether the programmer, algorithm developer, or commander issuing a general order—is extremely difficult. This blurs accountability and can lead to impunity in the event of tragedies, undermining trust in justice.

    The machine's lack of accountability also affects the moral dimension of warfare. Human soldiers, aware of the consequences of their actions, may feel remorse or fear punishment, which sometimes restrains them from excessive cruelty. AI, devoid of emotions and a moral compass, operates solely on predefined parameters, without weighing the consequences. This can lead to situations in which a machine executes orders that a human would deem immoral or illegal. For example, AI might strike a densely populated area if doing so fits its algorithm, even though a human operator in the same situation might refrain from attacking.
  • The machine is incapable of mercy.
    AI lacks the capacity for empathy and humanity, making it unsuitable for decisions requiring mercy or compassion. In military conflicts, situations often arise where a human might choose not to fire, considering the context—such as an enemy's surrender, the presence of civilians, or the possibility of negotiations. A machine, guided solely by algorithms, cannot account for such nuances unless explicitly programmed to do so. This can lead to excessive cruelty, with AI attacking indiscriminately, ignoring opportunities for peaceful resolution.

    AI's inability to show mercy can also escalate conflicts. Human soldiers, guided by morality or fear of retribution, may exercise restraint to avoid unnecessary casualties or preserve diplomatic possibilities. In contrast, AI will act strictly within its programming, potentially leading to excessive aggression and devastating consequences. For example, in a situation where an enemy shows signs of surrender, AI might continue attacking if its algorithm fails to recognize such signals. This not only increases casualties but also undermines trust between conflicting parties, hindering peace efforts.

Timofey V showcases a downed combat drone at the UN Internet Governance Forum, Riyadh, Saudi Arabia, December 15, 2024.
On December 15, 2024, at the UN Internet Governance Forum, Timofey V presented a downed kamikaze combat drone equipped with a camera and a telemetry system. Such a drone can easily serve as the strike element of an autonomous battle-management system that incorporates AI-targeting and AI-triggering. Satellite links on relay drones and at launch points enable two-way telemetry with data-processing centers, where AI can identify potential targets, make the decision to attack, and even guide the drone on its final approach to destroy the target.

In the complete absence of international regulatory mechanisms, existing technologies and components are already sufficient to create extremely dangerous AI-triggering systems.
Solution?
International Convention on the Prohibition of AI-Triggering Technologies. Help make it happen and sign the petition.