⦁ The perfect storm is coming. And there is no forecast model for it.
⦁ Artificial Intelligence (AI)-driven and -directed weapon systems are revolutionising warfare and battlefields. It’s the next arms race.
The search is now for the “safety lever” before a finger goes on the trigger.
⦁ There are far too many unanswered questions about AI in the military domain.
⦁ The first global summit on Responsible Artificial Intelligence in the Military Domain (REAIM), organised by the Netherlands government, was held at The Hague on February 15-16, 2023.
⦁ It’s a platform for all stakeholders to discuss key opportunities, challenges and risks associated with military applications of AI.
⦁ It’s the first global attempt to prevent the proliferation of lethal autonomous weapon systems (LAWS) and to build ethics, responsibility, accountability and moral judgement into a rapidly developing weaponisation technology that has the potential for cataclysmic damage.
⦁ The aim is for nations to sign up to an agreement akin to the Nuclear Non-Proliferation Treaty.
⦁ REAIM 2023 concluded with a Call to Action to the world.
⦁ Delegations from 80 countries participated in the summit.
⦁ India hasn’t signed the Call to Action yet, though China and the US have.
⦁ AI has the potential to revolutionise the way wars are fought and won.
⦁ But it also poses significant risks.
⦁ To prevent abuses, we need to establish international guidelines.
⦁ AI is widely regarded as being as ground-breaking as nuclear technology.
⦁ It is crucial we take action now.
⦁ “Together, we must seek common ground, starting with two basic questions: what is AI and who is responsible for its actions,” a summit speaker pointed out.
⦁ “In Ukraine we are unfortunately already seeing the influence of new technology, including drone and cyber attacks.
⦁ We are also witnessing how Russia is violating international humanitarian law in the most gruesome way.”
⦁ AI is a double-edged sword, especially in weapon systems.
⦁ Can such a system be left to take its own decisions on pulling the trigger?
⦁ A human must be in the loop in the use of force, specifically in the offensive part.
⦁ We must also know when the algorithm can take a decision when we are on the defensive side and the enemy is moving fast and using AI.
⦁ The summit flagged the risk of focusing too much on the “reliability” of a system and made a strong case for conformity with international humanitarian law.
⦁ We need to ensure meaningful human control in the use of force.
⦁ Fully autonomous weapon systems should be prohibited.
⦁ There should be a strict regulation of all autonomous weapon systems that have the potential of mass destruction. We need to keep human control over AI.
SOURCE: THE HINDU, THE ECONOMIC TIMES, PIB