Showing posts with label arms control. Show all posts

Saturday, November 15, 2014

Fearing Bombs That Can Pick Whom to Kill

On a bright fall day last year off the coast of Southern California, an Air Force B-1 bomber launched an experimental missile that may herald the future of warfare.

LRASM (Long Range Anti-Ship Missile) launched from a B-1 bomber

Initially, pilots aboard the plane directed the missile, but halfway to its destination, it severed communication with its operators. Alone, without human oversight, the missile decided which of three ships to attack, dropping to just above the sea surface and striking a 260-foot unmanned freighter.

Warfare is increasingly guided by software. Today, armed drones can be operated by remote pilots peering into video screens thousands of miles from the battlefield. But now, some scientists say, arms makers have crossed into troubling territory: They are developing weapons that rely on artificial intelligence, not human instruction, to decide what to target and whom to kill.

As these weapons become smarter and nimbler, critics fear they will become increasingly difficult for humans to control — or to defend against. And while pinpoint accuracy could save civilian lives, critics fear weapons without human oversight could make war more likely, as easy as flipping a switch.

Britain, Israel and Norway are already deploying missiles and drones that carry out attacks against enemy radar, tanks or ships without direct human control. After launch, so-called autonomous weapons rely on artificial intelligence and sensors to select targets and to initiate an attack.

Britain’s "fire and forget" Brimstone missiles, for example, can distinguish among tanks and cars and buses without human assistance, and can hunt targets in a predesignated region without oversight. The Brimstones also communicate with one another, sharing their targets.

Armaments with even more advanced self-governance are on the drawing board, although the details usually are kept secret. "An autonomous weapons arms race is already taking place," said Steve Omohundro, a physicist and artificial intelligence specialist at Self-Aware Systems, a research center in Palo Alto, Calif. "They can respond faster, more efficiently and less predictably."

Concerned by the prospect of a robotics arms race, representatives from dozens of nations will meet on Thursday in Geneva to consider whether development of these weapons should be restricted by the Convention on Certain Conventional Weapons. Christof Heyns, the United Nations special rapporteur on extrajudicial, summary or arbitrary executions, last year called for a moratorium on the development of these weapons.

The Pentagon has issued a directive requiring high-level authorization for the development of weapons capable of killing without human oversight. But fast-moving technology has already made the directive obsolete, some scientists say.

"Our concern is with how the targets are determined, and more importantly, who determines them," said Peter Asaro, a co-founder and vice chairman of the International Committee for Robot Arms Control, a group of scientists that advocates restrictions on the use of military robots. "Are these human-designated targets? Or are these systems automatically deciding what is a target?"

Weapons manufacturers in the United States were the first to develop advanced autonomous weapons. An early version of the Tomahawk cruise missile had the ability to hunt for Soviet ships over the horizon without direct human control. It was withdrawn in the early 1990s after a nuclear arms treaty with Russia.

Back in 1988, the Navy test-fired a Harpoon antiship missile that employed an early form of self-guidance. The missile mistook an Indian freighter that had strayed onto the test range for its target. The Harpoon, which did not have a warhead, hit the bridge of the freighter, killing a crew member.

Despite the accident, the Harpoon became a mainstay of naval armaments and remains in wide use.

In recent years, artificial intelligence has begun to supplant human decision-making in a variety of fields, such as high-speed stock trading and medical diagnostics, and even in self-driving cars. But technological advances in three particular areas have made self-governing weapons a real possibility.

New types of radar, laser and infrared sensors are helping missiles and drones better calculate their position and orientation. "Machine vision," resembling that of humans, identifies patterns in images and helps weapons distinguish important targets. This nuanced sensory information can be quickly interpreted by sophisticated artificial intelligence systems, enabling a missile or drone to carry out its own analysis in flight. And computer hardware hosting it all has become relatively inexpensive — and expendable.

The missile tested off the coast of California, the Long Range Anti-Ship Missile, is under development by Lockheed Martin for the Air Force and Navy. It is intended to fly for hundreds of miles, maneuvering on its own to avoid radar, and out of radio contact with human controllers.

In a directive published in 2012, the Pentagon drew a line between semiautonomous weapons, whose targets are chosen by a human operator, and fully autonomous weapons that can hunt and engage targets without intervention.

Weapons of the future, the directive said, must be "designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."

The Pentagon nonetheless argues that the new antiship missile is only semiautonomous and that humans are sufficiently represented in its targeting and killing decisions. But officials at the Defense Advanced Research Projects Agency, which initially developed the missile, and Lockheed declined to comment on how the weapon decides on targets, saying the information is classified.

"It will be operating autonomously when it searches for the enemy fleet," said Mark A. Gubrud, a physicist and a member of the International Committee for Robot Arms Control, and an early critic of so-called smart weapons. "This is pretty sophisticated stuff that I would call artificial intelligence outside human control."

Paul Scharre, a weapons specialist now at the Center for a New American Security who led the working group that wrote the Pentagon directive, said, "It’s valid to ask if this crosses the line."

Some arms-control specialists say that requiring only "appropriate" human control of these weapons is too vague, speeding the development of new targeting systems that automate killing.

Mr. Heyns, of the United Nations, said that nations with advanced weapons should agree to limit their weapons systems to those with "meaningful" human control over the selection and attack of targets. "It must be similar to the role a commander has over his troops," Mr. Heyns said.

Systems that permit humans to override the computer’s decisions may not meet that criterion, he added. Weapons that make their own decisions move so quickly that human overseers soon may not be able to keep up. Yet many of them are explicitly designed to permit human operators to step away from controls. Israel’s antiradar missile, the Harpy, loiters in the sky until an enemy radar is turned on. It then attacks and destroys the radar installation on its own.

Norway plans to equip its fleet of advanced jet fighters with the Joint Strike Missile, which can hunt, recognize and detect a target without human intervention. Opponents have called it a "killer robot."

Military analysts like Mr. Scharre argue that automated weapons like these should be embraced because they may result in fewer mass killings and civilian casualties. Autonomous weapons, they say, do not commit war crimes.

On Sept. 16, 2011, for example, British warplanes fired two dozen Brimstone missiles at a group of Libyan tanks that were shelling civilians. Eight or more of the tanks were destroyed simultaneously, according to a military spokesman, saving the lives of many civilians.

It would have been difficult for human operators to coordinate the swarm of missiles with similar precision.

"Better, smarter weapons are good if they reduce civilian casualties or indiscriminate killing," Mr. Scharre said. More

Editorial

Professor Samdhong Rinpoche, a leading Tibetan academic, recently stated: "Today the challenges of modernity pose an existential threat to mankind and to the earth itself, if not tackled adequately and immediately. The first major challenge is VIOLENCE. Its most visible forms are war and terrorism. Then there is systematic, or system-generated, violence. We are able neither to see it nor to understand it, but its scope and spread are frightening. The present situation is such that we have no will to resist violence unless it directly affects us. This kind of violence is market-driven, which necessitates the perpetuation of war or its possibility. In brief, the entire world today is being governed by market forces, which can be described as a consumerist system." Violence, war and terrorism, along with poverty and disease, are governance issues, indeed global governance issues.

As Kofi Annan, then secretary-general of the United Nations (UN), told world leaders in 1998: "Good governance is perhaps the single most important factor in eradicating poverty and promoting development." Governance is the exercise of economic, political, and administrative authority to manage a country's affairs at all levels. Different definitions of good governance have been proposed by development organizations. The definition offered by the UN Development Programme highlights participation, accountability, transparency, consensus, sustainability, the rule of law, and the inclusion of the poorest and most vulnerable people in making decisions about allocating development resources.

All of the above are issues that we have the technology and resources to alleviate. Doing so would remove the necessity to produce weapons such as those described above; it could even do away with the need for the military as we know it today. The world could follow the example of Costa Rica, whose military was abolished on December 1, 1948, by President José Figueres Ferrer. Our world could literally become a paradise, a Garden of Eden, where peace reigned because everyone's needs were fulfilled. Editor.

 

 

Saturday, July 19, 2014

Sifting through the wreckage of MH17, searching for sense amid the horror

Any journalist should hesitate before saying this, but news can be bad for you. You don’t have to agree with the analyst who reckons “news is to the mind what sugar is to the body” to see that reading of horror and foreboding hour by hour, day after day, can sap the soul.

This week ended with a double dose, administered within the space of a few hours: Israel’s ground incursion into Gaza and, more shocking because entirely unexpected, the downing of Malaysia Airlines flight MH17 over Ukraine, killing all 298 on board.

So in Gaza we look at the wildly lopsided death tolls – nearly 300 Palestinians and two Israelis killed these past nine days.

The different responses these events stir in those of us who are distant, and the strategies we devise to cope with them, say much about our behaviour as consumers of news. But they also go some way to determining our reaction as citizens, as constituent members of the amorphous body we call public, or even world, opinion.

As I write, 18 of the 20 most-read articles on the Guardian website are about MH17. The entry into Gaza by Israeli forces stands at number 21. It’s not hard to fathom why the Malaysian jet strikes the louder chord. As the preacher might put it, “There but for the grace of God go I.” Stated baldly, most of us will never live in Gaza, but we know it could have been us boarding that plane in Amsterdam.

Which is why there is a morbid fascination with tales of the passenger who changed flights at the last minute, thereby cheating death, or with the crew member who made the opposite move, hastily switching to MH17 at the final moment, taking a decision that would have seemed so trivial at the time but which cost him his life. When we read about the debris – the holiday guidebooks strewn over the Ukrainian countryside, the man found next to an iPhone, the boy with his seatbelt still on – our imaginations put us on that flight. Of course we have sympathy for the victims and their families. But our fear is for ourselves.


The reports from Gaza stir a different feeling. When we read the Guardian’s Peter Beaumont describe the sights he saw driving around the strip on Friday morning – three Palestinian siblings killed by an Israeli artillery shell that crashed into their bedroom, a father putting the remains of his two-year-old son into a plastic shopping bag – we are shaken by a different kind of horror. It is compassion for another human being, someone in a situation utterly different to ours. We don’t worry that this might happen to us, as we now might when we contemplate an international flight over a war zone. Our reaction is directed not inward, but outward. More

There is an interesting article by Chris Hedges entitled It's NOT Going to Be OK on the current economic disparity, which, he believes, could lead to a drastic decline in democracy as states respond to social protests. The question I ask is: what can be done to slow or eradicate this process? Editor

 

Tuesday, June 10, 2014

Years of Living Dangerously - Premiere Full Episode

Years of Living Dangerously Premiere Full Episode


Published on Apr 6, 2014 • Hollywood celebrities and respected journalists span the globe to explore the issues of climate change and cover intimate stories of human triumph and tragedy. Watch new episodes Mondays at 8PM ET/ PT, only on SHOWTIME.

Official site: http://www.sho.com/yearsoflivingdangerously

The Years Project: http://yearsoflivingdangerously.com/

It's the biggest story of our time. Hollywood's brightest stars and today's most respected journalists explore the issues of climate change and bring you intimate accounts of triumph and tragedy. YEARS OF LIVING DANGEROUSLY takes you directly to the heart of the matter in this awe-inspiring and cinematic documentary series event from Executive Producers James Cameron, Jerry Weintraub and Arnold Schwarzenegger.

Friday, May 9, 2014

'Killer robots' to be debated at UN

Killer robots will be debated during an informal meeting of experts at the United Nations in Geneva.

Two robotics experts, Prof Ronald Arkin and Prof Noel Sharkey, will debate the efficacy and necessity of killer robots.

The meeting will be held during the UN Convention on Certain Conventional Weapons (CCW).

A report on the discussion will be presented to the CCW meeting in November.

This will be the first time that the issue of killer robots, or lethal autonomous weapons systems, will be addressed within the CCW.

Autonomous kill function

A killer robot is a fully autonomous weapon that can select and engage targets without any human intervention. Such weapons do not currently exist, but advances in technology are bringing them closer to reality.

Those in favour of killer robots believe the current laws of war may be sufficient to address any problems that might emerge if they are ever deployed, arguing that a moratorium, not an outright ban, should be called if this is not the case.

However, those who oppose their use believe they are a threat to humanity and any autonomous "kill functions" should be banned.

"Autonomous weapons systems cannot be guaranteed to predictably comply with international law," Prof Sharkey told the BBC. "Nations aren't talking to each other about this, which poses a big risk to humanity."

Prof Sharkey is a member and co-founder of the Campaign Against Killer Robots and chairman of the International Committee for Robot Arms Control.

Side events at the CCW will be hosted by the Campaign to Stop Killer Robots.

Automation of warfare

Prof Arkin from the Georgia Institute of Technology told the BBC he hoped killer robots would be able to significantly reduce non-combatant casualties but feared they would be rushed into battle before this was accomplished.

"I support a moratorium until that end is achieved, but I do not support a ban at this time," said Prof Arkin.

He went on to state that killer robots may be better able to determine when not to engage a target than humans, "and could potentially exercise greater care in so doing".

Prof Sharkey is less optimistic. "I'm concerned about the full automation of warfare," he says.

Drones

The discussion of drones is not on the agenda as they are yet to operate completely autonomously, although there are signs this may change in the near future.

The UK successfully tested the Taranis, an unmanned combat aircraft, in Australia this year, and America's Defense Advanced Research Projects Agency (Darpa) has made advances with the Crusher, an unmanned ground combat vehicle, since 2006.

The MoD has claimed in the past that it currently has no intention of developing systems that operate without human intervention.

On 21 November 2012 the United States Defense Department issued a directive that, "requires a human being to be 'in-the-loop' when decisions are made about using lethal force," according to Human Rights Watch.

The meeting of experts will be chaired by French ambassador Jean-Hugues Simon-Michel from 13 to 16 May 2014. More

 

Monday, April 21, 2014

DARPA producing sea-floor pods that can release attack drones on command

The Pentagon’s research arm, DARPA, is developing robot pods that can sit at the bottom of the ocean for long stretches of time, waiting to release airborne and water-based drones to the surface upon an attack command.

The Defense Advanced Research Projects Agency (DARPA) recently called for bids to complete the final two phases of its Upward Falling Payloads (UFP) program. The UFP operation is an effort to position unmanned systems around far-flung regions of the sea floor. The housing pods would be left in place for years in anticipation of the US Navy’s need for non-lethal assistance.

The UFPs would come equipped with electronic and low-power laser attack capabilities, surveillance sensors, and airborne and aquatic drones that would have the ability to act as decoys or offer intelligence and targeting data, Ars Technica reported.

DARPA recently solicited proposals for the UFP. It wrote, “To succeed, the UFP program must be able to demonstrate a system that can: (a) survive for years under extreme pressure, (b) reliably be triggered from standoff commands, and (c) rapidly rise through the water column and deploy a non-lethal payload.”

Autonomous, non-lethal systems are the priority for DARPA, given the remoteness of the UFPs’ stationing on the ocean floor. Recovery in the deep ocean would be difficult, and the pods with weaponry or hazardous materials could cause harm to ships upon expiration.

The UFP program’s first phase, launched in 2013, focused on designs for the robot pods and the capsules that will live inside, as well as communication logistics for UFPs to communicate with other modules. The next phase aims to develop prototype testing and demonstrations at sea in the next couple of years. The third and final stage will include “full depth” testing of various scattered modules working as one system by spring 2017.

Much of the UFP testing will likely occur in the Western Pacific, given the United States’ ongoing “pivot” to the region – not coincidentally near China’s realm. Other tests will occur near US shores to reduce costs.

DARPA is seeking a 59 percent increase for the Upward Falling Payloads budget, from $11.9 million to $19 million, it was reported in March.

In addition, DARPA has asked for a boost to its budget for underwater drone fleets. The agency has asked for its current spending to double, from $14.9 million to $29.9 million, for its Hydra program. Hydra was conceived to be a large, mothership-like craft capable of moving through the water and deploying a number of smaller surveillance drones.

The research agency also announced recently that it is launching a program to unite existing and future drones into hives, where individual autonomous aircraft will share data and operate together against targets on a battlefield under the oversight of human operators. More

 

Friday, March 28, 2014

UN backs resolution presented by Pakistan on drones

GENEVA: The United Nations called on all states on Friday to ensure that the use of armed drones complies with international law, backing a proposal from Pakistan seen as taking aim at the United States.

A resolution presented by Pakistan on behalf of co-sponsors including Yemen and Switzerland did not single out any state. The United States is the biggest drone user in conflicts including those in Pakistan, Yemen, Afghanistan and Somalia.

“The purpose of this resolution is not to shame or name anyone, as we are against this approach,” Pakistan's ambassador Zamir Akram told the UN Human Rights Council.

“It is about supporting a principle.”

The United States prizes drones for their accuracy against al Qaeda and Taliban militants. Pakistan says they kill civilians and infringe its sovereignty.

“The United States is committed to ensuring that our actions, including those involving remotely piloted aircraft, are undertaken in accordance with all applicable domestic and international laws and with the greatest possible transparency, consistent with our national security needs,” Paula Schriefer, US deputy assistant secretary of state, told the talks.

The resolution was adopted by a vote of 27 states in favour to six against, with 14 abstentions at the 47-member Geneva forum. The United States, Britain and France voted against.

The Council “urges all states to ensure that any measures employed to counter terrorism, including the use of remotely piloted aircraft or armed drones, comply with their obligations under international law ... in particular the principles of precaution, distinction and proportionality.”

The text voiced concern at civilian casualties resulting from the use of remotely-piloted aircraft or armed drones, as highlighted by the UN special investigator on counter-terrorism Ben Emmerson in a recent report.

It called on UN human rights boss Navi Pillay to organise expert discussions on armed drones and report back in September.

The United States, Britain and France said it was not appropriate for the forum to put weapons systems on its agenda.

The Obama administration preferred to discuss drones under an initiative of Switzerland and the International Committee of the Red Cross, which it hoped would provide a “non-politicised forum” where military experts can discuss law of war issues, Schriefer said.

Akram, speaking before the vote, said opposition “can only lead to the conclusion that these states are guilty of violating applicable international law and demonstrate that they are afraid of being exposed in the expert panel.”

A separate UN human rights watchdog called on the Obama administration on Thursday to limit its use of drones and to curb US surveillance activities.

 

Friday, March 14, 2014

Campaign to stop killer robots: Chatham House conference


The first Chatham House conference on autonomous military technologies in London on 24-25 February brought together individuals from different constituencies to contemplate autonomous weapons and the prospect of delegating human control over targeting and attack decisions to machines. The Campaign to Stop Killer Robots was pleased to be able to attend this well-organized and timely conference held under the Chatham House rule, which permits participants to use information received but not to reveal the identity or affiliation of the speaker or participants. The conference was a useful opportunity to discuss our concerns with fully autonomous weapons, provide clarifications, and answer questions about our coalition’s focus and objectives.

Some participants have since publicly provided their views on the conference, including Charles Blanchard on Opinio Juris (4 March) and Paul Scharre on the Lawfare blog (3 March).

Several of the Campaign to Stop Killer Robots representatives who attended the Chatham House conference have provided input for this web post, including on the reflections published by Blanchard and Scharre. The campaign’s principal spokespersons, Nobel Peace laureate Jody Williams, roboticist Professor Noel Sharkey, and Human Rights Watch arms director Steve Goose, addressed the conference, while campaigners were present from the non-governmental organizations Action on Armed Violence, Amnesty International, Article 36, Human Rights Watch, International Committee for Robot Arms Control, and PAX (formerly IKV Pax Christi).

The perspective of the Campaign to Stop Killer Robots and its call for a ban on fully autonomous weapons were heard throughout the conference, but to ensure that key concerns are not downplayed and in the spirit of furthering common understanding on this emerging issue of international concern, we have the following comments on the reflections by Blanchard and Scharre.

Blanchard, a former US Air Force general counsel, gave a public talk on the topic “Autonomous Technologies: A Force for Good?” at Chatham House together with our campaign spokesperson Jody Williams, who received the 1997 Nobel Peace Prize together with the International Campaign to Ban Landmines (ICBL). Blanchard is now a partner at Arnold & Porter LLP, a Washington DC law firm that actively supported the negotiation of the 2006 Disability Rights Treaty as well as efforts to include victim assistance provisions in the 1997 Mine Ban Treaty.

Blanchard considers “deep philosophical viewpoints” in his piece, which looks at some of the “disputes” at the Chatham House conference over the call for a ban on fully autonomous weapons to enshrine the principle that only humans should decide to kill other humans. Blanchard is concerned that “more death” may result from a ban because autonomous weapons might be “more capable than humans” of complying with the laws of war.

While we do not agree with Blanchard’s skeptical position as to the benefits that a ban on fully autonomous weapons could provide, we welcome his acknowledgement of the counter-argument that letting a machine decide whom to kill would violate notions of human dignity. Blanchard’s assessment of the viability of a ban illustrates how the debate has advanced far in recent months to the point that a ban is being seriously contemplated.

Paul Scharre heads the 20YY Warfare Initiative at the Center for a New American Security in Washington DC and previously worked for the US Department of Defense, where he led a working group that drafted the 2012 policy directive 3000.09 on autonomy in weapon systems. His comprehensive presentations at the Chatham House conference were well-received, and his rational, measured and well-written reflections on the conference contain many useful observations.

Yet Scharre’s “key takeaways” oversimplify the “areas of agreement” and make it sound as if participants agreed more often than they actually did. His commentary attempts to reflect the conference speakers’ views and areas of convergence, but the same cannot be done for the audience—comprising approximately 150 participants from government, military, industry, think tanks, academia, civil society, and media.

With respect to the scope of what was discussed at Chatham House, Scharre’s depiction of the conference as focused only on “anti-materiel” autonomous weapons systems is confusing, as the conference addressed all types of autonomous weapons systems, including “anti-personnel.” The conference was also not specifically limited to “lethal” autonomous weapons as opposed to “non-lethal” or “less-than-lethal.” That said, we welcome Scharre’s comments indicating that he is not in favor of fully autonomous anti-personnel weapon systems.

There was indeed convergence among the technologists who spoke on the capabilities of current autonomous technologies and on the notion that today's precursors indicate something more dangerous to come.

Throughout the conference there did appear to be “universal agreement that humans should remain in control of decisions over the use of lethal force.” Consensus on this point was, however, qualified by a number of speakers who suggested that systems with no meaningful human control could be legal and have military utility. Such views illustrate why policy-level restraints will not suffice in addressing the challenges posed by fully autonomous weapons and should be supplemented with new law.

Indeed, this debate is happening because many are contemplating a future with no human control. Yet Scharre gave minimal consideration to proliferation concerns—development, production, transfer, stockpiling—in the “objections” section of his reflection. Concerns over an arms race were raised several times in the course of the Chatham House conference, which was sponsored by BAE Systems, manufacturer of the Taranis autonomous aircraft, the prime example of a UK precursor to autonomous weapons technology. As has been learned from experience with nuclear weapons, proliferation concerns cannot be addressed permanently through regulation and existing international humanitarian law.

Scharre claims that “a major factor in whether autonomous weapons are militarily attractive or even necessary may be simply whether other nations develop them,” but he seems to misunderstand the point of stigmatization in the “endgame” section of his reflections. By proposing that the answer to concerns about “cheating” is an “even playing field” where everyone can have them (and presumably all can be “cheaters”), Scharre dismisses the power of an international, legally binding ban to stigmatize a weapon and ensure respect for the law. A global ban could succeed in stigmatizing autonomous weapons to the extent that no major military power uses them, as has been the case for the Mine Ban Treaty, under which major powers have not used antipersonnel landmines in years.

Scharre views the commercial sector as driving the “underlying technology behind autonomy,” but that ignores the fact that industry is regulated by the state. Governments won’t prevent industry from developing the underlying technology, nor, as Blanchard notes, is the campaign seeking to do that, because the same technology that will be used in autonomous robotics and AI systems has many non-weapons and non-military purposes. But research and development activities should be banned if they are directed at technology that can only be used for fully autonomous weapons or that is explicitly intended for use in such systems.

Scharre downplays legal concerns in several sections of his reflections. This is in part because the conference panel on international law was dominated by legal advocates of autonomous weapons. Several of the law panelists may have agreed with each other that autonomous weapons are “not illegal weapons prohibited under the laws of armed conflict” but this was not a view shared by all participants at the conference. In particular, serious concern was expressed about the nature of fully autonomous weapons and their likely inability, in making attack decisions, to distinguish noncombatants and judge the proportionality of expected civilian harms to expected military gains. Although no one can know for sure what future technology will look like, the possibility that fully autonomous weapons would be unable to comply with the laws of war cannot be dismissed at this point.

One speaker argued that if fully autonomous weapons could lawfully be used in any circumstance, they could not be considered per se unlawful. This point may be correct legally, but the case can be made that any weapon can be used legally in some carefully crafted scenario. The possibility of such limited use should not be used to legitimize fully autonomous weapons. History has well demonstrated that once a weapon is developed and fielded, it will not only be used in limited, pre-determined ways. The potential for harm is so great as to nullify the argument for legality.

Scharre claims agreement about “lawful limited uses,” citing three examples of his own. We certainly don’t agree.

Accountability is another area where there was less agreement than depicted in Scharre’s reflections. As he states, machines, as currently envisioned, can’t be held responsible under laws of war, and it makes sense that programmers or operators not be held liable for war crimes unless they intended the robot to commit one.

The notion of accountability for operators was touched on during the Chatham House conference, but it was not considered in depth and it is important to note the lingering concerns of some audience members. For example, the “fixes” that Scharre cites from the US Department of Defense directive fall far short. Under the directive, human decision makers are charged with responsibility for ensuring compliance with laws of war when the machines they set in motion are unable to ensure this. However, it is unlikely that commanders will be held liable for war crimes if unintended technical failures can be blamed, while programmers, engineers and manufacturers are unlikely to be held liable if they have acted in good faith.

Scharre’s apparent answer to the issue of accountability is a “completely predictable and reliable system,” but how is that possible? Even with rigorous test and evaluation procedures, autonomy will make it significantly harder to ensure predictability and reliability. In fact, one definition of autonomy is that the system, even when functioning correctly, is not fully predictable (due to its complexity and that of the environment with which it is interacting).

In addition, some question whether operators should be held directly responsible for the consequences of fully autonomous weapons’ actions. Can these operators be treated in the same way as operators of a “normal” weapon when fully autonomous weapons are able to make choices on their own?

Scharre seems to dismiss the Martens Clause as only an ethical issue, but it’s a legal one as well. Although its precise meaning is debated, the clause is a fixture of international humanitarian law that appears in several treaties. It implies that when there is no existing law specifically on point, weapons that “shock the human conscience” can be regarded as unlawful in anticipation of an explicit ban. It also supports adoption of an explicit ban of weapons that violate the “principles of humanity and dictates of public conscience.”

Scharre’s post raises a “practical” objection to fully autonomous weapons that was not considered by the conference: “A weapon that is uncontrollable or vulnerable to hacking is not very valuable to military commanders. In fact, such a weapon could be quite dangerous if it led to systemic fratricide.” This concern about “large-scale,” accidental killing is valid, but the same practical argument applies to civilian casualties and not just military ones.

As Scharre notes, there are many concerns with fully autonomous weapons that exist on several fundamentally different levels. We agree that discussions about where the technology is headed are critical, but finding a permanent solution is even more urgent.

The Chatham House event was the first of several important meetings due to be held on killer robots in 2014. The International Committee of the Red Cross (ICRC) will convene its first experts meeting on autonomous weapons systems on 26-28 March. The first Convention on Conventional Weapons (CCW) meeting on lethal autonomous weapons systems will be held at the UN in Geneva on 13-16 May. UN Special Rapporteur Christof Heyns is due to report on lethal autonomous robots and other matters to the Human Rights Council in Geneva during the week of 10 June.

The fact that conferences like the one held by Chatham House are happening shows that the challenge of killer robots has vaulted into the top rank of issues in multilateral arms control and humanitarian disarmament. This validates the importance and urgency of the issue and undercuts arguments that fully autonomous weapons are “inevitable” and “nothing to worry about.” The strong and diverse turn-out means this is unlikely to be the last Chatham House conference on the topic.

Immediately after the Chatham House conference, the Campaign to Stop Killer Robots held a strategy meeting attended by 50 NGO representatives. The meeting focused on planning the campaign’s strategy for the year ahead at the CCW and the Human Rights Council, as well as on how to initiate national campaigning to influence policy development and secure support for a ban.

For more information see:

Photo: Patricia Lewis, research director for international security at Chatham House (center) introduced the first panel of the Chatham House conference on autonomous military technologies. (c) Campaign to Stop Killer Robots, 24 February 2014


Friday, March 7, 2014

7th Syrian chemical weapons destruction update

Here is the 7th Syrian chemical weapons destruction update: http://www.gcint.org/green-cross-blog/syrian-chemical-weapons-destruction-update-7.


This update, and all previous updates, are stored on the CWCC site (CWCCoalition.org) under Documentation.

--

Charlotte Baskin-Gerwitz

Green Cross International

Global Green USA

Environmental Security and Sustainability


Tuesday, February 11, 2014

Updates on Syrian chemical weapon destruction process

As part of our push to maintain more regular contact with CWC Coalition members, we have started a weekly update on the Syrian chemical weapons destruction process. Please find links to updates from the first three weeks below:


http://www.gcint.org/green-cross-blog/syrian-chemical-weapons-destruction-first-stage

http://www.gcint.org/green-cross-blog/syrian-chemical-weapons-destruction-update-2

http://www.gcint.org/green-cross-blog/syrian-chemical-weapons-destruction-update-3


We are making every effort to stay as up to date as possible on this process. If you have additional information, or have found well-informed articles, please share them with the group, as we all benefit from transparency in this situation. We are working on broadening the scope of these updates to include further information for civil society and will keep you informed accordingly.


Monday, December 30, 2013

Why Saudi Arabia and the U.S. don’t see eye to eye in the Middle East

Give credit to Vladimir Putin and his New York Times op-ed on Syria for sparking a new tactic for foreign leaders hoping to influence American public opinion. In recent weeks, Saudi Arabian political elites have followed Putin’s lead, using American outlets to express their distaste for the West’s foreign policy, particularly with regard to Syria and Iran.

In comments to the Wall Street Journal, prominent Saudi Prince Turki al-Faisal decried the United States for cutting a preliminary deal with Iran on its nuclear program without giving the Saudis a seat at the table, and for Washington’s unwillingness to oppose Assad in the wake of the atrocities he has committed. Saudi Arabia’s ambassador to Britain followed with an op-ed in the New York Times entitled “Saudi Arabia Will Go It Alone.” The Saudis are clearly upholding the vow made by intelligence chief Bandar bin Sultan back in October to make a “major shift” away from the United States.

In light of the recent actions of the Obama administration, many allies are also frustrated and confused, and even hedging their bets in reaction to the United States’ increasingly unpredictable foreign policy. But of all the disappointed countries, none is more so than Saudi Arabia — and with good reason. That’s because the two countries have shared interests historically — but not core values — and those interests have recently diverged.

First, America’s track record in the Middle East in recent years has sowed distrust. The relationship began to deteriorate with the United States’ initial response to the Arab Spring, where its perceived pro-democratic stance stood at odds with the Saudi ruling elite. After Washington stood behind the elections that installed a Muslim Brotherhood government in Egypt and then spoke out against the Egyptian army’s attempt to remove President Mohammad Morsi, the Saudi royals were left to wonder where Washington would stand if similar unrest broke out on their soil.

Ian Bremmer

Second, while the oil trade has historically aligned U.S.-Saudi interests, the unconventional energy breakthrough in North America is calling this into question. The United States and Canada are using hydraulic fracturing and horizontal drilling techniques, leading to a surge in domestic energy production. That development leaves America significantly less dependent on oil from the Middle East, and contributes to the United States’ shifting interests and increasing disengagement in the region. Not only does Saudi Arabia lose influence in Washington (many of the top American executives in the oil industry were its best conduits), but the trend also puts the Saudis on the wrong end of a long-term move toward increasing global energy supply.

To say that oil is an integral part of Saudi Arabia’s economy is a gross understatement. Oil still accounts for 45 percent of Saudi GDP, 80 percent of budget revenue, and 90 percent of exports. In the months ahead, new oil supply is expected to outstrip new demand, largely on the back of improvements in output in Iraq and Libya. By the end of the first quarter of 2014, Saudi Arabia will likely have to reduce production to keep prices stable. And the trend toward more supply doesn’t take into account the potential for a comprehensive Iranian nuclear deal that would begin to ease sanctions and allow more Iranian crude to reach global markets.

It is this ongoing nuclear negotiation with Iran that poses the principal threat to an aligned United States and Saudi Arabia. An Iranian deal would undercut Saudi Arabia’s leadership over fellow Gulf States, as other Gulf Cooperation Council (GCC) members like Kuwait and the UAE would welcome resurgent trade with Iran. At the same time, Iran would emerge over the longer term as the chief competitor for influence across the broader region, serving as the nexus of Shi’ite power. The Saudis would find themselves most directly threatened by this Shi’ite resurgence within neighboring Bahrain, a majority Shi’ite state ruled by a Sunni regime that is backstopped by the Saudi royals.

The bottom line: the Saudis are actively competing with Iran for influence throughout the Middle East. That’s why the Saudis have the most at stake from any easing of sanctions on Iran, any normalization of relations with the West, or any nuclear breakthrough that gives Iran the ultimate security bargaining chip. The Saudis have reaped the benefits of an economically weak Iran — and they are not prepared to relinquish that advantage. Ultimately, any deal that exchanges Iranian economic security for delays in Iran’s nuclear program is a fundamental problem for Saudi Arabia — as is any failed deal that allows sanctions to unravel.

For all of these reasons, even though the United States will be buying Saudi oil for years to come and will still sell the Saudis weapons, American policy in the Middle East has now made the United States more hostile to Saudi interests than any other major country outside the region. That’s why the Saudis have been so vocal about the United States’ perceived policy failures.

But to say Obama has messed up the Middle East is a serious overstatement. What he has tried to do is avoid getting too involved in a messed up Middle East. Obama ended the war in Iraq. In Libya, he did everything possible to remain on the sidelines, not engaging until the GCC and Arab League beseeched him to — and even then, only in a role of “leading from behind” the French and the British.

Call the Obama policy “engaging to disengage.” In Syria, Obama did everything possible to stay out despite the damage to his international credibility. When the prospect for a chemical weapons agreement arose, he leapt at the chance to point to a tangible achievement that could justify the U.S. remaining a spectator to the broader civil war. In Iran, a key goal of Obama’s diplomatic engagement will be to avoid the use of military force down the road. It hasn’t always been pretty, but Obama has at least been trying to act in the best interests of the United States — interests that are diverging from Saudi Arabia’s.