The United Nations Convention on Certain Conventional Weapons (CCW), also known as the Inhumane Weapons Convention, is about to take the first small step toward an international ban on killer robots.

Activists have been agitating for years for a preliminary ban on fully autonomous lethal weapons, which the US Defense Department defines as weapons that "once activated, can select and engage targets without further intervention by a human operator." That includes, say, drones that drop bombs automatically when they reach their targets.

Experts will discuss the issue for four days

That example is hypothetical, of course — no known military lets drones decide when to kill. The US military has even declared that robotic weapons must always have a human "in the loop," at least for now. But the technology for killer robots arguably already exists. The Israeli military employs missile-defense systems that fire automatically; the South Korean military owns sentry robots that can be set to fire if a person fails to provide the right password; and drones could theoretically be programmed to drop bombs based on geolocation.

The CCW has convened a four-day informal meeting of experts in Geneva to discuss how much autonomy is conscionable in robots that have the capacity to kill humans. The meeting agenda includes presentations on the definition of autonomy, existing robotic weapons, and the challenge of accountability when a robot exercises lethal force, among other topics. There will also be breakout discussions on international humanitarian law, the ethical implications of killer robots, and more.

After the meeting, the CCW chair, Ambassador Jean-Hugues Simon-Michel of France, will prepare a summary for the convention’s next gathering in November. At that time, the convention may choose to continue exploring the proposal of a ban or escalate to the next phase of an international arms agreement.

It's the first small step in the process toward an international ban

Military tools like lethal drones and underwater robotic weapons are proliferating, and they require less and less human supervision. The UN is taking this threat seriously; last year the Human Rights Council recommended that countries suspend development of killer robots until an "internationally agreed upon framework" is established.

The meeting will also include a debate between Ronald Arkin, a roboticist and ethicist at the Georgia Institute of Technology who has collaborated with the Pentagon on various robotics systems, and Noel Sharkey, a professor at the University of Sheffield and a co-founder of the International Committee for Robot Arms Control (ICRAC).

"Autonomous weapons present a new danger to humanity that needs to be stopped before states become too invested in the technology," Sharkey writes in an email to The Verge. "We are hoping that this week at the CCW will begin a process towards a legally binding agreement to ban weapons that once activated, select their own targets and attack them without human intervention."

On the other side, Arkin believes using robots in war can reduce human casualties by keeping humans out of battle and making lethal force more precise. He hopes to raise awareness of how technology can reduce civilian suffering in modern warfare.

"This is an informal meeting so I don’t expect anything concrete to result," Arkin tells The Verge in an email. "I have also often said that the discussion my research engenders is as important, if not more important, than the research itself. That goal seems already accomplished based on the fact that this meeting is even being held."