At the turn of the century, I devoted significant time to examining the privatization of force by the US government, particularly the role of private military companies in Afghanistan and Iraq. This study also led me to investigate the potential future impact of Unmanned Ground Vehicles (UGVs) on public perceptions across different audiences, including local combat zones, the broader region, the US, and potential adversaries.1
Recently, I was reminded of several articles and papers I wrote on this topic when a colleague, working on his PhD research on unmanned systems, asked about them. After digging through my archives, I shared some of my work with him and thought others might find it interesting as well.
I uploaded one of those papers, written around April 2008, to Academia.edu (and ResearchGate and SSRN), available here. This paper includes the results of an unscientific survey on de-escalation—which I framed as “compassion” at the time—and on vulnerability to adverse information warfare.2 It examined whether the level of autonomy in lethal UGVs influences perceptions of incidents involving insurgents and bystanders. Additionally, it explored respondents’ expectations of a negative impact on the perceived commitment of the US if UGVs replaced, rather than merely augmented, troops providing security in villages or towns.
Below is a version of that paper dated July 2008. Like the one linked above, it is titled “The Unintended Consequences of Unmanned Warfare.” I can’t recall where it was published or presented, but it likely appeared somewhere. It has been lightly edited for clarity, and most of the footnotes have been removed.
The Unintended Consequences of Unmanned Warfare (July 2008)
Success in modern conflict relies more on the ability to influence than on the ability to annihilate. Whether generated through conventional or unconventional means, the influence created by an action is, plain and simple, more enduring than the bullets or bombs themselves. Precision-guided munitions must be secondary to precision-guided influence that targets the strategic, operational, and tactical information environments.
Today’s unconventional enemy does not need to match America’s warfighting capability if it can effectively shape the perceptions of its target audiences. Insurgents and terrorists increasingly leverage the New Media environment, with its 24/7 coverage, blogs, YouTube, and text messaging, to shape perceptions around the globe, attracting some audiences and intimidating others. New Media collapses traditional concepts of time and space as information moves around the world in an instant. Unlike traditional media, search engines and the web in general enable information, factual or not, to be quickly and easily accessed long after it was created. The increased velocity of information means the careful deliberation once exercised by both the media and the consumers of media is gone. Today, perception too often trumps fact. By the time the truth comes out, both the media and the audience have moved on.
The unconventional warrior increasingly shifts the purpose of physical engagement to incorporate the information effect of words and deeds. The enemy does this through unconventional means, including improvised explosive devices (IEDs) whose information effects reach far beyond the point of detonation. The purpose of an IED is not simply to kill or maim Americans but to create replayable images of David standing up to Goliath.
The result is a shift in the fungibility and utility of the tools of violent coercion, from bullets and bombs to the warfighters themselves. During the Cold War, the United States relied on technical advantages to counter or surpass the numerical superiority of the Soviets. In the aftermath of the Cold War, lacking a defined enemy, the Defense Department moved from a threat-based posture to one centered on capabilities. This capabilities-based approach can have negative outcomes, as what we can do does not necessarily align with what we should do.
The intoxicating allure of technology poses risks of unintended consequences in the psychological struggle for minds and wills in modern conflict. Unmanned ground vehicles will likely play a prominent role in future conflicts as they offer perimeter security, logistics, surveillance, intelligence, explosive ordnance disposal, and more. While many autonomous and remote-controlled robotic systems are already deployed, ranging from Patriot missile batteries and Phalanx ship-defense systems to aerial vehicles like the Predator, the impact of self-propelled unmanned ground combat vehicles operating within the “sea of the people” is underexamined.
To date, reports on unmanned systems overlook the impact that “warbots” will have on shaping the perceptions of key audiences. A decade ago, General Charles Krulak introduced the now widely accepted concept that every soldier and Marine is a “strategic corporal” who can significantly influence larger operational and even strategic elements through tactical and, by definition, local actions.3 The United States military and civilian leadership must discuss, anticipate, and plan for each robot to act as a “strategic corporal” in the global information environment.
This paper investigates the influence robots will have in three interconnected information domains. The first domain pertains to the psychological challenges faced by indigenous populations in conflict and post-conflict zones. The second domain involves the impact on the global information environment and the wider struggle, while the third domain addresses the shift in the calculus of foreign engagement as the American public, Congress, and the Executive Branch perceive a diminished human cost of war (on our side).
This examination has its limits. This paper focuses on armed mobile ground robots, often referred to as Unmanned Ground Vehicles (UGVs), that will interact with host populations—the potential for negative consequences from a water-delivering robot is significantly less than from one that engages in violence. The aspect of mobility is crucial because static robots, typically sentries, will not represent the United States in the “last three feet,” as will robots patrolling among the people, working alongside or as replacements for American warfighters. Lastly, this paper adopts the general notion that a robot is a machine capable of performing tasks with or without explicit human control. In other words, the robot may be a machine endowed with varying degrees of “intelligence” and autonomy, or it may simply serve as a mechanical proxy for a human around the corner, behind a hill, or on another continent.
Domain I: The Local Struggle for Minds and Wills
Military force alone offers only temporary solutions to complex strategic problems. Regardless of doctrine—terrorist, insurgent, or counterinsurgent—success hinges on influencing the population. In today's interconnected world, where information spreads rapidly through diverse media, everyone involved participates in the battle for public opinion. As Clausewitz observed, war is an extension of politics. However, modern conflict transcends traditional political engagement, evolving into a global campaign for strategic influence over all parties. Al-Qaeda's Ayman al-Zawahiri recognized this, emphasizing in his 2005 letter to Abu Musab al-Zarqawi that “more than half of this battle is taking place in the battlefield of the media.”
Today’s conflict represents a struggle for minds and wills. Merely entering someone’s thoughts is insufficient; it is crucial to influence their will to act. Convincing supporters to act, or refrain from acting, against the adversaries’ interests is vital. The phrase “hearts and minds” has come to represent only part of the struggle, seemingly focusing on enhancing America’s likability or popularity and addressing the question, “Why do they hate us?” This concept has been separated from its counterinsurgency roots, where Teddy Roosevelt’s aphorism better captured the intent: “Speak softly and carry a big stick.” Influencing support for American national interests is as crucial as dissuading support for our adversaries’ agendas.
To this end, the structure of American military operations should be inverted. Instead of starting with an operational view accompanied by an information effects annex, all actions should commence with a targeted information effect and be supported by an operations annex. The center of gravity today is not a single point but an informational ecosystem in which the support systems targeted and relied upon by both insurgents and counterinsurgents exist and propagate. These spheres of influence encompass the physical (sanctuary), the financial (resources for purchasing), the moral (support from religious leaders), the social (networks of friends and family), and recruitment. Today, the effectiveness of information campaigns across these spheres will more often determine victory than the precision of the bullets and bombs aimed at a target.
It is unclear how effectively robots will interface with indigenous populations. The word “robot” originates from Karel Čapek’s 1921 play R.U.R. (Rossum's Universal Robots) and derives from the Czech robota, meaning forced labor or drudgery. In the play, years of attempts to create an artificial human succeed only when “nonessential” human features are removed—feelings, appreciation for truth and beauty, and the fear of death. Removing these cognitive and emotional attributes—traits typically associated with humans—resulted in obedient robots that performed manual labor and, when they wore out, turned themselves in for recycling.4
In some situations, removing human traits from the human-robot interface is helpful. Studies show that both Alzheimer’s patients and children with social deficiencies fare better with robots than with humans in certain engagements, preferring to interact with, and receive instruction from, robots rather than humans. There are likely two reasons for this: robots do not judge, and the interactions are simplified, with fewer options.
Robots, whether remote-controlled or autonomous, further depersonalize engagement by increasing the mechanical distance between the shooter and the target. Similarly, a sniper’s scope or a video camera observing someone planting an IED depersonalizes and diminishes the humanity of the target. This distance makes killing easier by removing the “feel for the street” and other atmospheric elements. The outcome is a greater likelihood of more lethal actions, or “heavy-handed” non-lethal measures when personal interaction may be preferable.
But in a cross-cultural engagement, can a robot be a strategic corporal? Will it be able to transmit and receive the verbal and non-verbal cues that are essential for effective perception management? In Iraq, Afghanistan, and future post-disaster zones, personal interaction with indigenous people can provide substantial long-term benefits. Can a robot bridge cultural divides and, with just a few simple gestures, transform from fierce warrior to peacemaker, as Lieutenant Colonel Chris Hughes did in Iraq?
In 2003, Lt. Col. Hughes faced a rapidly deteriorating situation in Najaf, Iraq. Choosing an option few would have considered, let alone risked, he instructed his men to smile, point their weapons to the ground, and take a knee. This allowed his soldiers to withdraw from a developing Iraqi-on-Iraqi encounter that they were not prepared to handle. The human-to-human interface was enhanced by cross-cultural non-verbal communication, effectively preventing what could have been a major flashpoint.
Another example from the same period involves a captain who was asked by a town leader to don the robes of a sheikh because he was the new chief. It is doubtful that a robot would have received the same treatment. The depersonalization, or dehumanization, of contact, one of the main advantages of robots, can limit the ability to become the strategic corporal on-site and in the moment. Will robots on patrol develop the same “feel for the street” and understanding of the human terrain as a person?
Major General Robert Scales captured the tendency of the United States to emphasize technology and capabilities over requirements grounded in actual threats. He noted that “wars are won as much by creating alliances, leveraging nonmilitary advantages, reading intentions, building trust, converting opinions, and managing perceptions—all tasks that demand an exceptional ability to understand people, their cultures, and their motivations.” These are not machine tasks, and they will not be for a very long time; they are human tasks. The inability of robots to perform them stems from the deliberate simplicity of the robot-to-human interface. Consequently, the advantages of robotic simplicity must be weighed against the ensuing loss of effectiveness in the critical tasks of the final three feet of engagement.
Removing or significantly reducing the human element on our side of the engagement risks negating the lessons learned about the importance of personal contact with local populations, lessons paid for with blood and treasure in Iraq and Afghanistan. In the sterility of a robot-human interface, mapping the human terrain becomes not merely unnecessary, at least implicitly, but impossible.
In 2007, Lieutenant General Raymond Odierno issued guidance emphasizing the importance of engaging with the local population and the necessary “feel” for the streets. This guidance instructed Coalition forces to “get out and walk” and noted that up-armored Humvees limit “situational awareness and insulate us from the Iraqi people we intend to secure.”
Effective counterinsurgency necessitates building trust and legitimacy within the local population, while simultaneously managing perceptions globally. This involves countering enemy propaganda, convincing neutrals of the enemy's destructive ideology, and persuading enemy supporters to abandon their cause—effectively isolating the insurgents. Neglecting either local or global support systems jeopardizes the mission. Focusing solely on external factors leaves locals vulnerable to insurgent violence, which erodes confidence in the government's ability to protect them. The Taliban’s phrase, “The Americans have the wristwatches, but we have the time,” highlights this, sowing doubt about long-term Western commitment.
The Center for Strategic and Budgetary Assessments’ 2007 report on Mine Resistant Ambush Protected (MRAP) vehicles further develops this argument. It emphasizes that the objective of counterinsurgency is not to outfight the enemy but to “outgovern” them, asserting that MRAPs and armored vehicles hinder this effort by limiting engagement with the local population. The separation created between the indigenous people and the warfighter “actually assists the enemy in accomplishing his objectives.”
In his 1964 book Counterinsurgency Warfare, David Galula emphasized the importance of commitment and the consequences of limited action, citing the Chinese Nationalists’ failed July 1953 raid on the mainland. Anticipating an uprising, or at least support, the Nationalists were instead soundly defeated, partly by those they had hoped would remain neutral out of a shared animosity toward the Communists. The locals recognized who was truly committed to watching over them and who would be present the next day: the Communists.
In a web survey conducted by this author, respondents had strong reactions to encountering a robot in a conflict environment. In cases of accidental civilian deaths, they felt that autonomy would have a significantly negative impact on perceptions in the U.S. media and, to a slightly lesser extent, in the Western European press. Respondents indicated that a fully autonomous robot would evoke only slightly more sympathy than one under some degree of human control. Overall, they believed that using robots instead of American troops would signal a diminished commitment to the mission and reflect a typical American tactic of throwing money at a problem. Overwhelmingly, 82% of respondents felt this handoff would be perceived as a decreased commitment by the U.S. While the survey was not scientific, the results and associated discussions indicated a perception that robots could be as much a liability as a benefit in unconventional warfare.

The use of robots must not raise doubts about whether the mission is important enough to sacrifice our own men and women, lest the local populace wonder why they should risk theirs. Such a handoff would quickly be interpreted as a sign that the United States was not entirely behind its mission and was afraid to send its own into harm’s way. Very quickly, the propaganda of our deeds could be spun to show that the United States was unwilling to risk American lives for the mission or the host population. Deploying robots will not only risk the critical connection with the indigenous population, but it could have the blowback effect of increasing the value of targeting Americans: as our warfighters become an increasingly rare commodity, the propaganda value and emotional impact of an attack on them grows. The result may be more than replayed improvised explosive device attacks on YouTube, but an escalation of spectacular attacks designed to influence an American public only partially mobilized, with a correspondingly heightened sensitivity to shocks, and to increase extra-regional sympathy for the insurgents in a modern propaganda contest fought mano a mano.
Domain II: Global Information Environment
The second domain is the discourse in global media, both formal (e.g., television and newspapers) and informal (e.g., blogs, YouTube), which includes foes, allies, and “swing voters,” such as the American public. Here the purpose extends beyond merely justifying actions to containing and managing failures. Regarding the former, there is ongoing work to formulate rules of engagement for robots designed around a Western practice of warfare. The noble pursuit of “Lawfare,” aimed at discovering the truth through careful legal parsing and analysis to justify ends and means, falters in the modern information environment. The collapse of traditional concepts of time and space caused by the global information environment hinders the careful post-facto deliberation typically allowed in the traditional application of the Laws of War.
Justifying actions based on what can be accomplished according to Western notions creates, counterintuitively, a model of engagement that is overly permissive and ultimately harmful to a mission where, as Lieutenant General James Mattis put it, “ideas are more important than rounds.” Certain acts that are justifiable under international law could backfire if the information effects are not anticipated, planned for, and managed effectively. However, since the end of the Cold War, the United States has demonstrated a weakened capacity in this area.
The United States cannot afford technological failures or induced failures (e.g., hacking) that kill civilians. We may attempt to shift the blame for an action onto an agent, whether a computer or a contractor, possibly achieving short-term success. However, the principal will still face the consequences in the modern information environment, especially if the adverse event reinforces previous impressions.
Previous incidents of “technical failure” or even “out of control” contractors in Iraq will reflect on the United States, regardless of whether blame is accepted. Deniable accountability is a myth where influence matters, and perceptions can trump facts.
Below are two examples of technological failures that resulted in significant civilian deaths. Both illustrate the ability to frame perceptions favorably despite information suggesting an alternative reality. The first example began on September 1, 1983, when Soviet air defenses mistakenly shot down Korean Air 007. The United States exploited the error, leveraging its global stature into a significant propaganda coup, with global messages coordinated and disseminated. The shootdown was, according to President Ronald Reagan, an “act of barbarism … [of] inhuman brutality.” The President went to great lengths, including declassifying communications intercepts, to attack the very essence and reputation of the Soviet Union. The language of the administration and the American media was clear: the leadership of the Soviet Union intentionally and knowingly ordered the attack on the passenger jet. Soviet leader Yuri Andropov was even described in one report as having shot the plane down himself. In the United States, Congress even reversed its stance on the MX missile and nerve gas, authorizing the production of both.
These events occurred even as information emerged indicating that the U.S. position was deliberately misleading. However, the facts could not challenge the prevailing narrative put forth by the White House, which capitalized on the public’s perception of the Soviet Union.
The second example is the 1988 shootdown of Iran Air 655 in the Persian Gulf by the Aegis guided missile cruiser USS Vincennes, a ship designed for convoy duty in the open Atlantic. The American response was to humanize the actions and shift the blame onto the Iranians. Accordingly, the incident’s narrative differed from that of the downing of Korean Air 007. Instead of the “loved ones” and “innocent human beings” aboard Korean Air, Iran Air carried only “travelers” and “civilians.” In the United States, blame was quickly and skillfully shifted from the cruiser to the airliner. The civilian aircraft was faulted for failing to respond to requests for identification, despite the USS Vincennes’ inability to transmit on commercial frequencies. When the warship did use the international distress frequency, it provided the incorrect altitude and used ground speed instead of airspeed, both reducing the chances the Iranian aircraft would understand that it was the target of the query.
American strategic communication was effective, at least within the United States. A Washington Post-ABC poll found that seventy-one percent of Americans believed the shootdown was justified, and seventy-four percent felt that Iran was more to blame than the United States. The audience for the information campaign following the shootdown was primarily the U.S. public and its allies, with little regard for Iran or the Middle East.
In a time-compressed, supersonic conflict environment, trusting a machine to make decisions may not be different from trusting it to execute those decisions. The role of the human in the loop can be reduced to a rubber stamp—a situation that will only be exacerbated by the impending shift to a hypersonic conflict environment, where our decision-making loops will be so constrained that they do not allow for human intervention.
Domain III: Cost of Engagement
One of the most touted benefits of robots is their ability to reduce the exposure and vulnerability of America’s warfighters. In certain applications, such as traditional brute-force engagements like clearing a house, or conventional warfare generally, robots provide significant value. The Department of Defense Unmanned Systems Roadmap 2007-2032, approved in December 2007, emphasizes reduced exposure as a key point. Where President Clinton deployed cruise missiles against Al-Qaeda in Afghanistan and Sudan, a future president may use remote-controlled and autonomous robots to achieve the same mission with greater precision. However, the true cost of lowering the threshold for kinetic action in an age of instant communication remains unaddressed, in a world where the American public, and arguably Congress, is increasingly distant from the realities of war. The parallels between outsourcing to machines and outsourcing to private security contractors, which can evade public and Congressional oversight, warrant further exploration.
One of the benefits of outsourcing has been a reduction in the perceived human cost by shifting casualties from our uniformed personnel to contractors. The prospect of a flag-draped coffin has always factored into the political calculus regarding deploying forces overseas. Increasing available firepower compensates for Americans’ smaller commitment and investment in the mission. Unwilling to provide more resources, we aim to maximize the impact of each resource.
Much like arguments suggesting that contracting security and other military personnel diminishes a soldier's commitment to the state, transferring responsibility to robots could similarly affect the value of killing and the warrior ethos. Ownership and accountability for the mission may shift from humans to technology, and, at least in the eyes of policymakers concerned with U.S. domestic issues, this could decrease risk.
Arguments in favor of robots often mirror past arguments for using private military companies: cost-effectiveness, enhanced capabilities, allowing the military to concentrate on core competencies, and political distance. “Dead” robots do not require flag-covered coffins, nor do they need medical care or veterans' benefits.
Insulating the American public from the costs of war is easier with fewer, or even no, boots on the ground. As Thomas Friedman observed, highlighting how the American public was sheltered from the war shortly after 9/11: “You all just go about your business of being Americans, pursuing happiness, spending your tax cuts, enjoying the Super Bowl halftime show, buying a new Hummer, and leaving this war to our volunteer Army.” A whiteboard in Iraq echoed the same: “America is not at war; the Marines are at war; America is at the mall.”
In a democracy, lowering the threshold for mobilization generally indicates that commitment is less profound, relying on limited information and broader generalizations. It also results in less margin for error and a decreased tolerance for marginal activities, while enhancing the influence of “the temperamental atmosphere of a Gallup poll.”
Mobilizing a democracy for war, or justifying a military strike after the fact, is challenging and encourages an inclination to seek emotional support rather than objective reasoning. While limiting sacrifices by the American public has its advantages, when the situation becomes tough, the public often exhibits little emotional depth or personal commitment to the cause. This effect may result in heightened sensitivity to spectacular attacks.
Faced with deploying an increasingly smaller force, the Executive Branch may find the path of least resistance in removing Congress and the media from the debate by utilizing robots to help bypass casualty sensitivity in the media and public discussions. Augmenting or replacing American forces with robots reduces the cost of unilateral action more significantly than observed with private security companies.
The increasingly scarce direct links between warfighting and both political elites and the general public reflect a return to the past. With fewer Americans who know someone currently serving, or anyone directly affected by the conflicts after 9/11, a distinct and professional warrior class is redeveloping in the United States, proficient in the conduct of war and reminiscent of the professional mercenary soldiers of the past. The modern All Volunteer Force (AVF) is significantly detached from the contemporary political and social spheres of power in the United States, leading to suggestions that non-veteran civilians may be more “interventionist” while simultaneously imposing greater constraints on the use of military force.5 At the same time, the American citizen-soldier is becoming an endangered species as service members and their families turn inward to focus on their own support networks. Robots will likely accelerate this trend.
The use of private security companies in Iraq offers lessons here. While the myth of deniable accountability is conjured up to manage perceptions in Congress and the media, the perception that matters—and the one most often ignored—is that of the local population and its regional and global support groups. In the information realms that matter, contractors are America’s public diplomats. It will be the same for robots.
Conclusion
Absent a holistic American effort in irregular warfare, including counterinsurgency, robots offer a solution more beneficial to an American electorate with a short-term memory than to the national security of the United States. Robots can deliver significant advantages by increasing kinetic effects without raising the political liability associated with deploying personnel. However, this sanitizing comes at a cost.
Reliance on robots to meet kinetic requirements without addressing the informational effects overlooks the hard lessons learned in Iraq. While firepower can be an effective deterrent, it must be employed cautiously. The insights captured in the U.S. counterinsurgency manual emphasize the detrimental effects of both excessive defense and excessive offense. Their application must be carefully considered and calibrated, with the appropriate training, techniques, and procedures established before deployment.
Today, America’s warfighters represent the “last three feet” of contact with local populations and the world. The perceptions of America created through direct interaction, and through indirect awareness via media or word of mouth, are critically important. This requires an appropriate interface, especially at a time when even the suitability of riding in an up-armored Humvee is being questioned, as noted in Domain I above.
Robots can provide real value. In 1921, Giulio Douhet wrote, “Victory smiles upon those who anticipate the changes in the character of war, not upon those who wait to adapt themselves after the changes occur.” Success in future conflict will depend on technological superiority and sociological, cultural, and informational adaptability in the struggle for minds and wills. Weapons and tactics must align with this; otherwise, we risk merely fighting the last war, specifically, the initial two years of the Iraq War. The question is, how well are we planning? Are we allowing technology to lead us, or are we directing technological solutions that align with the future of conflict?
While some look to autonomous robots in the future, that future is already here. In many areas, authority has already been ceded to computers, sometimes with severe consequences. From Aegis cruisers to Patriot missiles to navigation systems and Blue Force trackers, computers already dominate the American warfighter, not the other way around.6
Just as strategic communicators, public diplomats, and information operators must engage in the “take-offs” of any policy and not just the “crash landings,” they must also be involved in the evolution of unmanned warfare. The information effects of integrating armed robots into the inventory must be considered from the outset, including the impact on information campaigns and operations in the modern media environment. Augmenting or replacing a warfighter with a robot is not equivalent to swapping the M-16 rifle for the M-4 carbine. The uniformed warfighter, whom the robot will replace in the last three feet, reflects America’s commitment to the mission and helps shape the local and global opinions that garner or undermine support for it. Regardless of any real or perceived autonomy, robots will also represent, reflect, and shape those opinions. The informational impact of robots is substantial, yet little research has been conducted on the subject. Failing to recognize the informational effect of unmanned systems designed to operate within the sea of the people, participating willingly or not in the struggle for the minds and wills of men and women, will have tragic unintended consequences.
In an era where local and global perceptions matter, the United States must anticipate the information effects of the new systems it fields. The future is not like yesterday. Whether or not war will be completely turned over to robots is beyond the scope of this paper, but what is certain is that robots will be involved in some manner in the “last three feet,” and training, techniques, and procedures must be prepared for this.
*** END ***
I also wrote several papers, including a magazine article and academic papers presented at conferences, on the accountability of PMCs. I argued that PMCs were not above the law, contrary to what was asserted at the time. There were several potential restraints on PMC behavior and means of punishment, including criminal courts, but these were exercised at the discretion of the contracting agencies, i.e., governments. It was a choice not to hold them accountable, not a defect in the system. To prove my point, I showed how UN military peacekeepers were actually above the law, a point the UN Secretary-General affirmed. The peacekeepers were, in effect, contractors: they were hired and paid by the UN, by order of and on behalf of the Security Council; combined, the permanent Security Council members contributed less than 5% of overall peacekeeping troops; and a somewhat shifting list of six countries contributed over 50% of peacekeepers in any given year, were paid to do so, and had little national interest in the PKOs they were involved in. UN peacekeepers were not bound by the Laws of War, the Law of Armed Conflict, or any other standards. The market forces available to governments to control PMC behavior were absent in the UN case, as the Security Council was beholden to a limited pool of countries willing to provide soldiers, many of whom had to be provisioned and nearly all of whom had to be transported to the operations, often by private military companies. I’ll post that paper later.
In these papers, I frequently referred to the 2003 “taking the knee” event by Lt. Col. Chris Hughes and his men in Najaf, Iraq.
Charles Krulak, "The Strategic Corporal: Leadership in the Three Block War," Marines Magazine, January 1999.
George A. Bekey, Autonomous Robots: From Biological Inspiration to Implementation and Control, Intelligent Robotics and Autonomous Agents (Cambridge: MIT Press, 2005), 444-46. There is more to the story, however. The leading lady gets the scientists to add more human-like traits to the robots, and as a result they revolt and kill all humans but one.
Peter Feaver and Christopher Gelpi, Choosing Your Battles: American Civil-Military Relations and the Use of Force (Princeton: Princeton University Press, 2004); Charles A. Stevenson, Warriors and Politicians: U.S. Civil-Military Relations under Stress, Cass Military Studies (New York: Routledge, 2006).
This has created a comfort with computers that has proven limiting. Consider Lieutenant General Paul van Riper’s actions in the 2002 Millennium Challenge wargame, when he outsmarted the network-centric Blue Force by using motorcycle messengers, among other tactics. See http://www.slate.com/id/2080814/