Review: Richard Overy’s The Bombing War: Europe 1939-1945

Richard Overy, The Bombing War: Europe 1939-1945 (London: Penguin, 2013).  

Richard Overy is one of the leading historians of World War II alive today, and while he has written on a number of topics associated with that conflict, the fighting in the air is his area of special expertise. While The Bombing War is not as comprehensive as some of his other works, such as The Air War, 1939-1945 (1980), it is one of his most powerful books. For those interested in the topic of strategic bombing during World War II, The Bombing War is indispensable. It combines the meticulous research and broad vision that only an expert of Overy’s caliber can produce.

One of Overy’s purposes in writing The Bombing War is to provide “the first full narrative history of the bombing war in Europe” (xxiv). This narrative, he argues, is more complete than previous efforts because a) it covers all of Europe, b) it integrates bombing into the “broad strategic picture” (xxiv), and c) it links the narratives of those who did the bombing with those who were bombed. Overy’s other main objective consists of “re-examining the established narratives on the bombing war,” which have been shaped, especially in the British and American cases, by official histories (xxv-xxvi). (The United States’ seven-volume official history, The Army Air Forces in World War II, was published between 1948 and 1958, while Britain’s four-volume equivalent, The Strategic Air Offensive against Germany, appeared in 1961.) Overy has conducted this re-examination by studying the “private papers of individuals and institutions” as well as parts of the official record that “were originally closed to public scrutiny because they raised awkward questions” (xxvi). At 642 pages of small, densely printed text, The Bombing War is long (maybe overlong), but it never loses sight of two related theses. First, strategic bombing during the war never lived up to the hype of its proponents; there was a wide discrepancy between promise and achievement. Second, strategic bombing, as practiced during the conflict, was a bludgeon that did not achieve enough to justify the enormous collateral damage that it inflicted on both lives and property.

Overy’s story begins with a discussion of World War I and the interwar period. Here, he focuses on two major developments that helped make strategic bombing possible during World War II. The massive mobilization of World War I and the rhetoric that followed it led everyone to assume that the next war would be “total” and that civilians would naturally be targets in this conflict. This discourse meshed well with assumptions among airmen and statesmen that the conurbations of the modern era were particularly susceptible to dislocation from aerial bombing. Based on little evidence, those who contemplated the course of future air wars believed that industry was vulnerable to destruction and that civilians living in big cities would panic easily. These attitudes, however, did not make strategic bombing during World War II inevitable; Overy argues that it was only events during the war that made such a thing possible.

Among the many limits that prevented airmen from immediately and deliberately dropping bombs indiscriminately on civilians in 1939 was the fact that many air forces believed that their primary mission consisted of supporting the army in a ground-attack role. And indeed, Overy argues that two incidents widely seen as initiating “terror” bombing during the war—the Luftwaffe’s bombardments of Warsaw and Rotterdam—were not that at all. In both cases, he claims that German aircraft sought out enemy ground forces that happened to be ensconced in or near urban areas. These two attacks resulted in large numbers of civilians being killed. The air assault against Rotterdam proved especially tragic since German and Dutch forces were then negotiating the surrender of the city but could not get word to the Luftwaffe fast enough to halt the air attack.

The first real strategic bombing campaign took place in the skies over Britain between 1940 and 1941. Overall German strategy was muddled from the start, constantly shifting from one objective to the next. On the eve of the Battle of Britain, Hitler could not decide whether to encourage the British to enter negotiations, invade southern England and dictate a settlement, or use ships, submarines, and aircraft to impose a blockade on British ports. As Overy puts it, “Hitler opted for all three possibilities, and achieved none of them” (68). Whatever the case, all three required the Luftwaffe to play an important role and demanded a heavy commitment from Hitler’s airmen. Forces, however, were frittered away as “the German offensive hovered between trying to gain air superiority against the RAF, preparation for invasion, contributing to the blockade by sea of British trade, degrading Britain’s industrial war potential and vague expectations of a crisis afflicting the enemy’s morale” (611). The failure to fix on an appropriate target and destroy it (along with the inability to match ends with means) accounted in large part for the frustration of German aims. This frustration occurred in spite of Britain’s weaknesses in civil defense (which were not made good until the latter part of 1941) and huge deficiencies in the RAF’s night-fighting capacity.

Although, as Overy points out, each strategic bombing campaign of the war differed in a number of ways, the German attack on Britain was emblematic in that it was planned and launched on the fly; almost no research or preparation for such an effort had been performed during the pre-war period (which accounts for the strategic confusion). This problem would also plague Allied campaigns throughout the conflict. The German campaign was also important in that it stretched notions of what was considered permissible during the war. The British in particular subjected the German campaign to very close scrutiny. In some cases, RAF’s Bomber Command learned important lessons (e.g. dense concentrations of incendiaries mixed with high explosive bombs were particularly useful in destroying large parts of towns). In others, the British misconstrued what the Luftwaffe had been up to (e.g. they assumed Germans were engaged in mere terror bombing). In still others, the RAF totally missed the boat (e.g. the British ramped up their bombing of German cities in the hope of demoralizing civilians and dislocating the economy without pausing to think that the Germans had failed to do the very same thing in the very same way).

With these observations in mind, it should come as no surprise that Overy is extremely critical of Bomber Command’s own effort against Germany and occupied Europe. Initially, the RAF’s campaign was too piecemeal, light, inaccurate, and scattered to have much effect. Starting in late 1941, however, the British more or less decided on the area bombing of German cities in an attempt to demoralize, dehouse, and decimate German civilians (which is what they thought the Germans had attempted to do to them). Although Britain’s political and military leadership always felt ambivalent about this decision, the appointment of Sir Arthur Harris as the head of Bomber Command in February 1942 gave the force an aggressive and intractable advocate who was fully committed to the air war against German civilians to the exclusion of all else. Nonetheless, progress was stymied by a number of shortcomings. There was a lack of appropriate, heavy four-engined bombers (as late as 1942, the number of Avro Lancasters was limited). The British were also plagued by “the slow development of target-finding and marking, [and] the dilatory development of effective electronic aids, marker bombs and bombsights.” And then there was “the inability to relate means and ends more rationally to maximize effectiveness and cope with enemy defenses”—a problem that had also hampered the Germans (300). Despite its ineffectiveness, Bomber Command was allowed to persist in its campaign, which swallowed a very large proportion of available British resources (about 7% of total British man-hours during the conflict)—no small victory for Harris and his subordinates who sought to safeguard their bailiwick.

The entry of the United States into the war did not change the British situation a great deal. The Americans made clear that they would not divert bombers from their factories to supply the British. Not surprisingly, considering the many demands placed on the United States, it took the Americans some time to organize, equip, and train a large bomber force that could exercise any influence in the European theater. The Allies made much fuss about a “combined offensive” and “round-the-clock” bombing (Americans during the day, British at night), which seemed to suggest that their bombers acted in concert. The truth of the matter was that their campaigns operated merely in parallel and did not reinforce each other at all. The Americans did not think much of bombing cities for the sake of depressing German morale. They were more interested in employing daytime precision attacks and destroying specific targets that would slow down German production (although Overy admits that when visibility was limited, American blind bombing was just as indiscriminate as anything Bomber Command did). Overy intimates that although American forces experienced difficulty in finding the bottlenecks that could bring the German economy to a halt, they displayed a much more thoughtful and sophisticated approach to bombing than Harris ever did. Bomber Command continued its nocturnal attempt to destroy city after city in the hope that the cumulative destruction would eventually end the war somehow.

In the end, Overy argues, Allied strategic bombing did not end the war, but it did influence the manner in which Germany was defeated. In early 1944, American forces finally made a commitment to using the bombing campaign as a means of destroying the Luftwaffe in the skies over Germany. The delay in reaching this decision was not solely a matter of technology; it was also a matter of placing commanders in the European theater who shared that vision. By that date, Carl Spaatz (commander of US strategic air forces), Jimmy Doolittle (Eighth Air Force), and William Kepner (VIII Fighter Command) occupied the key American positions in Europe and agreed that it was necessary to combine “the indirect assault on air force production and supplies through bombing with the calculated attrition of the German fighter force through air-to-air combat and fighter sweeps over German soil” (361). Initially spearheaded by P-47s with drop tanks (the P-51s came later), fighters loosely accompanying American bombers sought out German aircraft, leading to huge air battles with massive casualties on both sides. It was a campaign of attrition for which the Germans were ill-suited. Two major developments occurred as a result. First, the Germans redistributed resources—personnel, fighter aircraft, and anti-aircraft guns—to the homeland on a large scale to counter this threat. These were resources that could not be deployed on other fronts to support German ground forces (including anti-aircraft weapons which could double as anti-tank guns). Second, having forced the Germans to concentrate their aircraft in Germany, the Americans proceeded to destroy the Luftwaffe, shooting down enormous numbers of planes and killing their pilots. By mid-year, the Americans had achieved air supremacy over France and Germany. And then strategic bombing lurched forward on a much larger scale than ever before; three-quarters of the total tonnage of bombs dropped on Germany fell between September 1944 and May 1945.
The Allies persisted in heavy bombing largely because they were worried that the Germans might suddenly produce new weapons that could turn the tide (the V-weapons as well as the Messerschmitt Me 262 jet fighter certainly gave them reason to think this way). They also hoped that more bombing could bring the war to a swifter end—the British thinking that obliterating more cities would tip Germany over the edge while the Americans believed that the destruction of oil and transportation targets would undermine the German war effort. Still, German productivity reached its height in the last three months of 1944, when bombing was extraordinarily heavy. Allied victory eventually came at an extremely high cost to victor and vanquished, but the impact of bombing was only one of several factors that defeated the Axis powers.

Many readers familiar with the topic will have seen parts of this narrative before, but Overy presents a version of the story that is very much his own in which a number of key arguments, great and small, are modified. Overy’s book is particularly interesting when it comes to discussing civil defense and the impact of the war on civilians, something that most histories of strategic bombing do not study in a systematic way. The Bombing War stresses the degree to which different circumstances obtained in different countries. For instance, civil defense in Britain was characterized by friction between the voluntarist tradition of a free society and the centralizing tendency of the state. In Germany and the Soviet Union, however, the party saw civil defense mainly as a means of political and social mobilization. Whatever the case, the experience of civil defense was similar to that of the bomber forces in that its preparations were incomplete upon the war’s outbreak; capacity and sophistication generally grew as the war continued. It is hard to make generalizations about bombing’s impact on the various peoples of Europe, though, as every country was different. Overy points out that a good case could be made that bombing helped topple Mussolini in 1943, but he proceeds to argue that the collapse of the Fascist regime had more to do with its overall inability to cope with the various stresses of modern war. In cases where the state or party was more or less equal to the challenges of fulfilling civilians’ needs (e.g. Britain and Germany), heavy bombing generally did not enhance or undermine the population’s will to resist. If anything, it made civilians more reliant on the authorities, which reduced the potential for dissent. The picture Overy paints of civilian populations under sustained air attacks is one of anxiety, exhaustion, and deprivation.
Moreover, these populations were highly mobile as they left destroyed urban areas in search of shelter, food, and working utilities. It is not surprising that people in such a position would turn to the state for succor.

Conquered territories, particularly in western Europe, found themselves in a unique position. Generally hostile to the German occupation, they initially supported the Allied bombing of military targets. The RAF hoped that a campaign in these regions would damage German military installations (e.g. submarine pens) and slow down production in factories that had worked on German contracts. Later, in preparation for the cross-Channel invasion, the Allies sought to destroy most of northern France’s transportation infrastructure (and once troops had landed in Normandy, heavy bombers were used for ground support). In these regions, the British always saw bombing as a propaganda act that could demoralize collaborators and give resistance a boost. Unfortunately, once the RAF began bombing France and the Low Countries without restriction in February 1942, opinion in these countries turned against the British initiative. Just as in Germany, Allied bombing tended to be inaccurate and destructive, resulting in many civilian casualties (almost 60,000 French civilians were killed by Allied bombs). In the conclusion of his chapter on the bombing of occupied Europe, Overy notes, “Bombing was a blunt instrument as the Allies knew full well, but its bluntness was more evident and more awkward when the bombs fell outside Germany” (606).

Not surprisingly, Overy concludes that strategic bombing as practiced during World War II was a crude, wasteful, and illegal strategy. Moreover, it was a failure on its own terms. It sought to win the war singlehandedly by destroying the enemy economy, demoralizing the enemy population, and deracinating the enemy’s political system. In all of these areas, the impact of bombing was limited. Strategic bombing’s main contribution to Allied victory—the destruction of the Luftwaffe—was almost incidental. The obsession with the “weight and scale” of attacks, rather than accuracy, paved the way for post-war nuclear arsenals that sought to do the same thing but on a much larger scale. This approach to strategic bombing would prove a dead-end; precision-guided munitions, Overy argues, were the “way forward” (613). We can be thankful, then, that “profound changes in available weapons, the transformation of geopolitical reality and post-war ethical sensibilities have all combined to make the bombing war between 1939 and 1945 a unique phenomenon in modern European history, not possible earlier and not reproducible since” (633).

Furthermore, I consider that the myth of the unemployable History major must be destroyed.

Hugh Dubrulle

NOTE: This essay reviews the Penguin UK version of Overy’s book, not the Penguin USA edition (entitled The Bombers and the Bombed: Allied Air War over Europe 1940-1945). The latter was heavily edited and is much shorter than the former. The reviewer recommends that you purchase the British version.

Very Short Reviews: Karen Armstrong’s _Fields of Blood: Religion and the History of Violence_

Fields of Blood

Since many people associate religion with the contemporary conflicts we have witnessed across much of the globe since 9/11, it seemed to make sense that this blog review Karen Armstrong’s Fields of Blood: Religion and the History of Violence. In other words, One Thing after Another read the book so you don’t have to.

Karen Armstrong, Fields of Blood: Religion and the History of Violence (New York: Anchor Books, 2014).

  1. Armstrong asserts that her primary motive in writing this book consists of refuting an assertion repeated to her relentlessly “like a mantra” by people from all walks of life: “Religion has been the cause of all the major wars in history.”
  2. Because it sets out to disprove this assertion, it is unclear who this book is for; scholars do not make these kinds of generalizations in academic forums, and laypeople who do make these kinds of generalizations are unlikely to read an overlong book larded with so much detail that the thesis is occasionally lost.
  3. Along the way, Armstrong does remind her readers of some important, well-established truths: religion is difficult to define; until the emergence of the modern age, people could not really make a distinction between religion and politics; over time, religious traditions have been interpreted in a variety of ways and therefore have no true “essence” (although she undermines this argument by claiming from time to time that a religious tradition was not implicated by the violent acts of its adherents because they were not acting according to the “true” spirit of that tradition); and most faiths have experienced an ambivalent relationship with violence.
  4. Armstrong’s main argument is that the responsibility for the great majority of violence lies with the state and that in the contemporary period, war is the product of imperialism or the strains of modernization; religion has been distorted by these forces and often reflects rather than instigates them.
  5. So far from being the problem, she argues, religion is the solution: “Somehow we have to find ways of doing what religion—at its best—has done for centuries: build a sense of global community, cultivate a sense of reverence and ‘equanimity’ for all, and take responsibility for the suffering we see in the world.”
  6. One of the main problems with this book is that it is too broad (it starts with the Sumerians and proceeds to the present), which means that Armstrong often ventures into areas where she has no experience or background; to name just one of many examples, she claims there is little evidence that humans fought one another before the advent of agriculture and civilization—but since Laurence Keeley wrote War before Civilization (1996), scholars (backed by mounting archaeological evidence) have increasingly taken the view that our hunter-gatherer ancestors were pretty violent.
  7. As other reviewers have pointed out, her history inclines toward an economic and social determinism that tends to be superficial and poorly explained; culture does not display much autonomy in her narrative. (See The Economist: http://www.economist.com/news/books-and-arts/21636708-secularism-or-religion-more-authoritarian-trouble-and-strife)
  8. It is not clear whether Armstrong’s sources shaped her stance or merely reflect it, but her notes and bibliography are idiosyncratic and often do not reflect the latest literature in the periods or topics she studies.
  9. There are important contradictions in her argument; to name perhaps the most important one, if, as she states, religion could not be distinguished from politics up until the modern period, and political motives generally inspired warfare, it would seem that religion is still culpable.
  10. Or, to look at the same problem from another angle, as Mark Juergensmeyer writes in his Washington Post review of Armstrong’s work, “Religion — in the sense of what theologian Paul Tillich called ‘the repository of symbols’ — has also had long relationships with grandiose power, violence and blood. So religion is not totally off the hook.” (See the Washington Post: https://www.washingtonpost.com/opinions/book-review-fields-of-blood-by-karen-armstrong/2014/10/23/a098e374-4d90-11e4-aa5e-7153e466a02d_story.html)

Hugh Dubrulle

Point-Counterpoint: Masur versus Dubrulle on the Biggest Disasters in U.S. Military History

Custer's Last Stand

Some weeks ago, on the History Department Facebook page, we posted an article by George Dvorsky on the “Eight Biggest Disasters in U.S. Military History.” As expected, the post generated some discussion, much of it critical of the list. Professors Dubrulle and Masur thought a discussion of this flawed list would provide a good opportunity to offer their own thoughts on what does and does not constitute an American military disaster. In doing so, they hoped their ideas would show something about how historians attack a question.

The original post offered the following criteria in determining what the biggest military disasters were: “For the purposes of this list, therefore, a ‘military disaster’ will be defined as a historically significant episode in which the U.S. military endured any of the following problems: protracted mission failure, an inability to thwart enemy action, or a breakdown in command and control structure. It can also include an embarrassing, lopsided, or unexpected defeat.”

Using this standard, Dvorsky’s list was as follows:

The American Invasion of Canada (1812)
The Capture of Harper’s Ferry (1862)
The Battle of Antietam (1862)
The Pancho Villa Expedition (1916-1917)
The American Defense of the Philippines (1941-1942)
The Battle of Kasserine Pass (1943)
The Bay of Pigs Invasion (1961)
The American Disbanding of the Iraqi Army (2003)

Let’s start with Professor Masur’s thoughts. . . .

Professor Masur

I’m not sure that I am equipped to provide my own list of America’s “top military disasters.” I’m not a military historian, and as a result I would say that I am not particularly well-versed on specific details of America’s military conflicts. Moreover, I tend to focus on America in the twentieth century, meaning my knowledge of earlier American military affairs is a bit sketchy. That’s too bad, because the earlier discussion highlighted how many Civil War battles would be good candidates for this list. Finally, while my own research deals with an American military conflict (the Vietnam War), it is a conflict that is often studied without a primary focus on the sorts of military engagements that might make up a list of this nature.

Before offering a list, I’ll try to explain the general rules or guidelines I am using for determining what is a “military disaster.”

  • The result of a decision or action that was made, primarily or in large part, by members of the military. This rules out, e.g., the decision to commit American support to South Vietnam and eventually escalate and Americanize the conflict. It also rules out the decision to invade Iraq in 2003. These two decisions would likely rank among the biggest foreign policy mistakes since World War II, and they of course had significant repercussions for the military. But the decisions themselves were not, in my view, military disasters.
  • The decision had significant negative repercussions for the United States, and the negative consequences can be persuasively seen as outweighing any positive outcomes that may have resulted from the decision. This might mean that the decision resulted in significant American casualties, but it could also mean that the decision had economic repercussions or in some way undermined America’s strategic interests. Both the Vietnam War and the second Gulf War would meet this standard.
  • The negative consequences of the disaster can be reasonably traced to the decision itself. The failure to convincingly defeat Germany in World War I may have created conditions that contributed to the rise of Hitler and the Nazi Party. But so many other factors emerged in the years after World War I that it would be hard for me to consider this a direct result of World War I.

There are a couple of military disasters that popped into my head, but for a variety of reasons I decided to leave them off the list.

The Tet Offensive (1968)
Historians have written countless pages on the Tet Offensive, devoting a significant portion to debating whether or not the battle was a defeat for the United States and its South Vietnamese allies. The consensus today seems to be that the battle was not militarily crippling for either U.S. forces or the Army of the Republic of Vietnam (ARVN—the South Vietnamese armed forces who were allied with the U.S.). In fact, the National Liberation Front or “Viet Cong” suffered terrible losses in the fighting. At the same time, the battle did contribute to growing American discontent with the prolonged military effort. U.S. forces may have erred in not being more prepared for the attack, but because the U.S. reacted quickly and repelled the offensive it was not, in my estimation, a military disaster.

Pearl Harbor (1941)
This is an interesting candidate. A number of people commenting on the original piece noted that Pearl Harbor would be an obvious choice. While it was a disaster for the United States, an intriguing counterargument could be made that Pearl Harbor was a far greater military disaster for Japan. Professor Dubrulle can correct me if I’m wrong, but I believe that some Japanese observers at the time anticipated that Pearl Harbor would spell eventual doom for Japan’s expansion in the Pacific. This raises a semantic or philosophical point: can the same battle be a disaster for both sides? I can see the argument for yes, but for the purposes of this discussion I’ll go with “no” and therefore keep Pearl Harbor off my list.

So with all of that out of the way, what would I include?

Little Bighorn (1876)
I know next to nothing about the serious scholarship on Little Bighorn, so my view is rooted almost entirely in the way the battle is perceived in the popular imagination. But come on—Custer and his men getting annihilated by the Lakota and their Cheyenne allies? Of course that has to be on the list.

The Decision to Push North of the 38th Parallel and Approach the Chinese Border in the Korean War (1950)
This makes sense because it so clearly falls at the feet of the military commander, General Douglas MacArthur. His decision to press the advantage against the North Koreans was arrogant and reckless. Moreover, he stubbornly refused to consider the consequences of his decision. His decision arguably prolonged the war, leading to heavy American casualties. And it is worth remembering that the victims of his decision were not entirely or even primarily Americans—Chinese, North Korean, and South Korean troops all suffered heavy losses, and the war had disastrous consequences for Korean civilians.

Westmoreland’s Attrition Strategy in Vietnam (1964)
This was, as far as I know, a decision made by William Westmoreland, the American commander in Vietnam from 1964-68. Most historians agree that it was a terrible mistake. Interestingly, this is one of the few issues that “orthodox” Vietnam War historians (that is, historians who tend to think the American intervention was a mistake) and “revisionist” Vietnam War historians (those who think that it was a justifiable and necessary war that could have been won) tend to agree upon. The orthodox historians view it as evidence of America’s inability to understand the conflict in Vietnam, particularly its political and social dimensions. The revisionists argue that Westmoreland’s decision was one of the factors that prevented the United States from prevailing in the conflict—an outcome, they argue, that was within reach.

There are probably more disasters to include. I’ll give honorable mention to two disasters from the Spanish-American War. One was the pacification of the Philippines once the war ended. The pacification effort lasted for years and was far more deadly for both Americans and Filipinos than the war itself. I don’t know enough about it to say whether this was the responsibility of the military or civilian leadership. The Spanish-American War was also notoriously mismanaged. The U.S. prevailed in spite of this mismanagement, but it likely led to the unnecessary deaths of American soldiers who were improperly outfitted or fed during the conflict.

Now for Professor Dubrulle. . . .

Professor Dubrulle

I’d like to start by stating that I don’t like Dvorsky’s criteria. First, they are vague. What exactly is a “historically significant episode”? Second, “protracted mission failure” and “inability to thwart enemy action” amount to pretty much the same thing—an inability to impose one’s will on the enemy. Third, a “breakdown in command and control structure” seems like an unusual item to include on the list. Is that an essential feature of military disaster? Fourth, “embarrassing, lopsided, or unexpected defeat” could mean many things. Yet perhaps most important of all, this list is something of a catch-all, consisting of very different and inconsistent ideas. (Indeed, the list seems to be inspired by the Wikipedia entry for “List of Military Disasters.”)

At the same time, I don’t believe that Dvorsky has applied his own criteria particularly well. Was the Pancho Villa expedition a “historically significant episode”? Why was the Battle of the Wabash (1791) left out? It was very badly fought, and as a result, a quarter of the U.S. regular army was wiped out by the Western Indian Confederacy. Moreover, a number of Civil War battles could meet Dvorsky’s standard better than Harper’s Ferry and Antietam. And the Bay of Pigs? Really?

The phrase “military disaster” requires a more precise definition. It could mean a) a battle that was badly fought and lost, or b) a battle whose loss had very bad ramifications. There is an important distinction between the two. For instance, the Fetterman Fight (1866) and the Battle of Little Bighorn (1876) fit in the former category. They were very badly lost, but their ramifications were somewhat limited. Pearl Harbor, however, definitely falls in the second category. Arguments could be made for both definitions of “military disaster,” but my preference would be for the second one because battles meeting this standard possess greater historical significance.

These considerations bring me to Matt’s thoughts. I know I shouldn’t have read his contribution before writing my own (that’s a bit like cheating), but I couldn’t stop myself. Matt makes a lot of sense to me, but in light of the comments I’ve made above, I’d like to modify one of his criteria—the one concerning “negative repercussions.” It makes sense that we define this phrase by identifying it with existential threats to the United States or, at the very least, extremely difficult (and ominous) political or strategic problems.

Otto von Bismarck supposedly once said, “There is a special providence for drunkards, fools, and the United States of America.” Americans have been lucky or powerful enough to avoid battles that presented existential threats to their nation. Yet we can still create an interesting list of battles based on this criterion.

The Battle of Long Island (1776)
Hardly anybody remembers this battle, but it was the largest of the Revolutionary War and almost led to the end of the American struggle for independence. It was fought in August 1776, shortly after the Second Continental Congress issued the Declaration of Independence. George Washington sought to defend New York City by stationing men on the southern tip of Manhattan and Brooklyn Heights on Long Island (which overlooked Manhattan). The British landed on Staten Island before sending a large force to Gravesend Bay on Long Island (east of where the Americans were). They drove Washington’s force off the Heights of Guan and pushed them into Brooklyn Village, pinning the Americans against the East River. In other words, the Americans were now surrounded—stuck between the East River and the British. At this point, had the British decided to press their advantage and attacked Washington’s disorganized army, they would have captured almost all of it. Instead, they settled in for a siege. This decision gave Washington time to escape to Manhattan. In a daring and risky operation, a regiment of fishermen from Marblehead, Massachusetts, under the command of John Glover, quietly rowed the American forces across the East River at night, practically under the nose of the Royal Navy. Had the British acted with more alacrity, they could have bagged 19,000 Continentals and militia along with Washington himself. The Revolution would have been over right after it had started, and there would have been no United States at all.

The Battle of Antietam (1862)
This battle belongs on the list, but not for Dvorsky’s reasons. During the summer of 1862, the British Cabinet began to think about either recognizing the Confederacy or intervening in the war. Recognition would only come, though, if the Confederacy had pretty much secured its independence beyond a doubt (after the Seven Days’ Battles and the Second Battle of Bull Run, some members of the Cabinet believed it was well on its way to attaining this objective). Lord Palmerston, the British Prime Minister, was of this opinion. Those who favored intervention (like Earl Russell, the Foreign Secretary, and William Gladstone, the Chancellor of the Exchequer) believed the war was a terrible humanitarian disaster for both America and Britain (due to the interruption of commerce, particularly the cotton trade, and the potential for a huge slave insurrection) that had no end in sight. They favored British mediation (probably in conjunction with France and Russia) that would likely have led to the independence of the Confederacy. The traditional view of Antietam (which was a tactical draw but a strategic Northern victory) was that it arrested British moves toward recognition or intervention. The North showed that it still had plenty of fight, or so the argument went, and the battle allowed Lincoln to issue the Emancipation Proclamation, which helped make the war about slavery. The North’s willingness to keep fighting, along with the new moral crusade it had embraced, supposedly led the British to reconsider interfering in the war. However, as Howard Jones and a number of other scholars have pointed out, Antietam actually made some British Cabinet members more inclined to pursue mediation; the draw at Antietam suggested the war would drag on even longer, doing even more harm to both American and British interests. Fortunately, in November 1862, Sir George Cornewall Lewis, the Secretary of State for War, rallied the Cabinet against mediation (which France favored at that point).
In all likelihood, mediation would have meant the splitting of the United States.

Pearl Harbor (1941)
I get the problem that Matt is struggling with when it comes to Pearl Harbor. By attacking Pearl Harbor, the Japanese started a war with the United States that they had only a slim chance of winning. Even the Japanese leadership felt this way. We can say, then, that in the long run and from a political point of view, the attack was a terrible Japanese mistake. But in the short run, the attack was a big tactical success and presented the United States with great operational and strategic difficulties. These difficulties hampered American attempts to deal with Japanese advances in the western Pacific. Among other things, they doomed the American garrison in the Philippines. The American disaster at Pearl Harbor, however, was mitigated by good luck and some excellent foresight. The Japanese did not catch any of the American aircraft carriers in the harbor, they failed to destroy American oil storage facilities in Hawaii, and of the eight battleships at Pearl Harbor, only two were permanently lost (one never left service, three returned to service in 1942, and two more became available in 1944). Even more important, in July 1940 Congress had passed the Vinson-Walsh Act (otherwise known as the Two-Ocean Navy Act), which funded a dramatic expansion of the U.S. Navy. The vessels funded by this act did not begin to become available until 1942, but the United States did not lose as much time as it might have otherwise in replacing its naval losses. Still, the attack forced the U.S. Navy to fight on its heels for much of 1942—at the Coral Sea, Midway, and Guadalcanal.

Battle of Bataan (1942)
Pearl Harbor compromised the American defense of the Philippines. The Japanese were determined to take the Philippine archipelago because it sat astride their line of communication with their southeast Asian possessions. Although the American defense of Luzon (conducted by an army that consisted mainly of Filipinos), which eventually centered on the Bataan peninsula, was often marked by great courage, it was not always well led or conceived. Eventually, 15,000 American and 60,000 Filipino soldiers were compelled to surrender. This was the largest surrender of forces under American command ever. It deprived the United States of an important base from which to contest Japanese advances in Asia; the United States would have to work its way across the southern and central Pacific to get at Japan. And it was yet another defeat of Allied power in Asia (French Indochina, the Dutch East Indies, and British Malaya and Burma were all conquered by the Japanese at this time) that did much to discredit Western colonialism in Asia–a development of world significance.

Tet Offensive (1968)
I will put Tet on my list. Yes, Tet was a military defeat for the Viet Cong and the People’s Army of Vietnam. But if war is a tool by which we seek political objectives, in the long run, Tet contributed in a big way to eventual North Vietnamese victory. As a result of Tet, much of the American public questioned the credibility and honesty of the American government, an attitude that was only reinforced by the sudden rise in American casualties and the army’s request for troop increases in Vietnam. The request threatened to put America’s entire manpower policy under stress (it might have required a massive call-up of reservists), increase inflation, exacerbate America’s balance-of-payments problem, and worsen a looming economic crisis. More immediately, Tet shook the confidence of Lyndon Johnson and his advisors. Although nobody could see it clearly at the time, this was the beginning of the end. Of course, there’s defeat, and then there’s defeat. As a result of our loss, we did not have to bow to new North Vietnamese masters (see the Onion headline below). But the American defeat in Vietnam had a big impact on foreign policy, led to a long-running debate in the military about how best to fight little wars, and fundamentally shaped the attitudes of the public.

Onion Vietnam Wins War

The Battle of Bladensburg (1814)
Enjoying control of Chesapeake Bay, the British were interested in launching a series of raids there to tie down American forces and make them unavailable for an invasion of Canada. Major General Robert Ross, relying on support from Vice Admiral Alexander Cochrane’s fleet, decided to launch a raid against Washington, DC and Baltimore. For this task, he had only four battalions of regular infantry, one battalion of Royal Marines, and assorted auxiliaries—a total of 4,300 men. Facing him were something on the order of one regular infantry battalion, some dragoons from the regular army, a small collection of sailors, and over 6,000 American militiamen. To make a long story short, the British assaulted the Americans at Bladensburg and routed the militia, which fled through the streets of Washington. The British were able, then, to enter the city and burn most of its public buildings, including the White House (then known as the President’s House) and the Capitol. The strategic results of this action were barren; the British failed to capture Baltimore, they had to retreat to their ships in the bay, and no significant long-term results issued from the burning of Washington. But, oh, the shame of having the young nation’s capital occupied and put to the torch! And after such an inglorious defeat!

Belichick, Football, and Military History


One Thing after Another strives to remain topical, and the following post is a shameless attempt to capitalize on interest in the Super Bowl. According to the following article from the Wall Street Journal, Bill Belichick is a diligent student of history, especially military history.


One Thing after Another would like to think that Belichick has learned something valuable from reading military history. For example, it might provide him with some insight into leadership. At the same time, it might give him an uncanny ability to dismantle opposing football teams. For sure, the study of military history has helped coaches develop creative playmaking and play-calling. Clark Shaughnessy (1892-1970), who coached a variety of football teams but earned fame mainly with Stanford University and the Chicago Bears, is best known for replacing the dominant single-wing offensive formation with a resurrection of the old T formation in the 1940s. Innovations associated with Shaughnessy’s T still survive today. For instance, under the T, the quarterback took the snap from under center (instead of having the ball hiked five yards back directly to either the halfback or tailback, as was the case with the single-wing). In the T, the quarterback handed the ball off to a tailback or halfback, who could hit holes in the line of scrimmage more quickly and at greater speed. But what also appealed to Shaughnessy was that the T provided opportunities for more options and more deception. Taking the snap under center, the quarterback could do anything with the ball. He could run it. He could throw to a receiver. He could hand it off to a back. He could throw to a back. This last option was something that Shaughnessy really liked. One of the three backs in the T could become a man in motion before the ball was hiked and thus turn into a receiver. Even if the back who acted as the man in motion did not receive the ball, he could draw defenders away from where the play’s center of gravity was going to be. Where did Shaughnessy supposedly get these ideas? A number of historians have claimed that he was heavily influenced by his reading of Heinz Guderian’s Achtung–Panzer! (1937).
Moreover, parallels have been drawn between Shaughnessy’s use of the man in motion and Erich von Manstein’s famous “sickle cut” (Sichelschnitt) plan that laid France low in 1940. Army Group B’s foray into the Low Countries distracted the Allies who sent their most mobile forces northward to counter it. With the Anglo-French line thinned out by this diversion (and deprived of a mobile reserve), Army Group A shot through the Ardennes, cut the Allied line in half, and drove to the coast.

Did Belichick use his knowledge of military history to fake out the Ravens with that formation where an eligible receiver lined up as an offensive lineman, while another offensive player lined up in the slot but declared himself ineligible? No, of course not. Evidence suggests that Belichick borrowed the formation from the Detroit Lions after watching them on tape (and, of course, improving on their play):


However, the creativity and deception associated with this play–hallmarks of Shaughnessy’s coaching as well–could well be inspired by a thorough familiarity with military history.

Undoubtedly, one can draw a number of analogies between war and football. Carl von Clausewitz (1780-1831), the Prussian general who was one of the greatest thinkers about armed conflict the West ever produced, asserted in On War that “war is nothing but a duel on an extensive scale.” He refined this definition by claiming that “war is . . . an act of force to compel our enemy to do our will.”  If boxing or mixed martial arts resemble a duel, football is a duel “on an extensive scale” which employs force to compel the enemy to do our will. In short, football resembles war in a fundamental way. That resemblance has prompted many comparisons. Historians have claimed that football was an outgrowth of the Civil War:


Other commentators have argued that a symbiotic relationship exists between war and football. Each feeds interest in the other, and each becomes a surrogate for the other:


In 2010, the National Interest claimed that Americans’ attitudes toward football shaped their attitudes toward war and not in a healthy way:


This short opinion piece from US News and World Report sought to refute the notion that Americans like football because they are a warlike people:


One Thing after Another does not presume to reach conclusions here about the profundities of the relationship between war and football. However, it would like to point out (yet again) the degree to which the practice of history, which allows one to make useful analogies between the past and the present, can give one an advantage in the most unlikely of areas.

World War I: History versus Memory

2014 WWI Talk

Today in the Dana Center at 4 PM, Professors Meg Cronin (English), Phil Pajakowski (History), Ann Norton (English), and Hugh Dubrulle (History) made a series of presentations to commemorate the centenary of World War I’s outbreak. The program as a whole was entitled “‘What, Then, Was War?’: Representing and Remembering World War I.” The turnout was very good, and a number of students, faculty, and staff stayed afterwards to discuss the presentations as well as to socialize. If possible, One Thing after Another will try to obtain the presentations of all the participants. For now, it will have to make do with Professor Dubrulle’s comments, which are reproduced below.

My paper, which is about the differences between history and memory when applied toward World War I, will do something toward synthesizing much of what we have heard up until now.

What distinction am I making when I use words like “history” and “memory” to mean different things? Perhaps the following anecdote will make some sense of the matter.

Many years ago, a Soviet journalist visiting Paris asked a small boy in a working-class quarter what the child knew about the Paris Commune. The boy responded: “Do you mean what they teach you in school, or what Papa says?”

History is what they teach you in school; memory is what Papa says. If we were to draw a Venn diagram, there would be an overlap between the two because, after all, Papa probably went to school.

However, the differences are significant. History is the interpretation of the past that professional historians create according to the dictates of their discipline. Memory is the popular understanding of the past that is cobbled together by everyday people from their own experiences, movies, literature, stories, family lore, popular history, magazines, pictures, monuments, commemorations, museums, and so on.

Recently, historians have investigated the history of memory—how and why it has changed over time—and the subject has become a hot topic. Interestingly enough, much of this study began with works about World War I and memory. I’m thinking primarily in this case of George Mosse’s Fallen Soldiers: Reshaping the Memory of the World Wars (1991).

Memory’s treatment of the past, of course, has been much less methodical—and it would be unfair of us to criticize it on that score because memory is not a discipline the way history is. But it is fair to point out that memory is not created in the same way as history, nor does it serve the same purpose.

A good place to start is by looking at “Blood Swept Lands and Seas of Red,” which was installed in the moat at the Tower of London in August to commemorate World War I’s outbreak. The moat has been filled with over 888,000 ceramic poppies—one for each British soldier killed during the conflict. We could say much about this “event”—about how it is an interesting cross between art installation, charity fundraiser, space for “personal reflection,” and tribute to those who served in the war (the Financial Times has described it as “a very 21st-century blend of spectacle and ‘edutainment’”). What strikes me forcefully is the extent to which the memory of this conflict dwells on the war dead. There are the 888,000 poppies, one for each fatality. And the poppy itself is associated with the dead on the Western Front, mainly because of the poem “In Flanders Fields” (1915). The Royal British Legion has kept the symbol of the poppy alive since 1921 with its Remembrance Day fundraisers, during which it sells commemorative red poppies made of paper.

So the war is remembered as a great tragedy in which the dead feature prominently, mainly as victims. This vision of “crosses, row on row” is yoked to a narrative of the war in which incompetent political and military leaders in all countries inadvertently led Europe into war, conducted that war in a bloody and unimaginative manner, and then subsequently made a hash of the peace. According to this story, a generation of young men, fed on illusions by their elders, were disabused of their notions by trench warfare before they were killed in their hundreds of thousands.

So strong and convincing is the force of this narrative, you might ask, “Is this interpretation really just memory? Isn’t it the verdict of history?” The answer is, “No.” Aside from the fact that history’s verdicts are always temporary, the current state of World War I historiography does not look at all like this picture. I will return to historiography in a minute, but not before I say something about how this memory came to be.

Like history, memory plays out differently in various places. For Russians, the war does not loom quite so large in their memory as elsewhere because it is mere prelude to 1917, Year 1 in their short, Communist 20th century. And in Germany, memory of World War I is muted because the conflict contributes to the difficulty of finding a usable past that includes Hitler, World War II, and the Holocaust.

Memory is like history in other ways; it is contested. Before the conflict had even ended, the memory of WWI was the object of a great battle. We have to realize that many participants were extremely anxious that their version, their understanding, and their narrative of the war would not be forgotten. One of the characters in Henri Barbusse’s Under Fire (1916) worries that later generations would not understand what had happened during the war: “Whatever you tell them they won’t believe you. Not out of malice . . . but because they just won’t be able to. . . . Nobody’s going to know. Just you. . . . We’ll forget. We’re already forgetting, old man!” And a great many people, like Barbusse, wanted to tell the bitter and ironic story that many remember today. These included a huge collection of figures as diverse as Britain’s war poets (Sassoon, Graves, Gurney, Rosenberg, Owen, Jones, etc.); the German expressionist sculptor, Ernst Barlach; the Italian symbolist poet, Giuseppe Ungaretti; and the German painter/printmaker Otto Dix.

But if we are good historians and we look at the source material, we also have to realize that during and shortly after the war, there was a competing narrative. It recognized the war as a tragedy, but refused to admit that the conflict was futile or purposeless, and frequently expressed an austere patriotism. We see this attitude in the great neo-classical war memorials like Edwin Lutyens’ Cenotaph in London or Sir Reginald Blomfield’s Menin Gate (which Siegfried Sassoon described as a “sepulchre of crime”), the tombs of Unknown Soldiers in various countries, and the Tannenberg Memorial. We see it in Ernst Jünger’s Storm of Steel (1920), which revels in the triumph of the human spirit against everything the machine age can throw at it. We see it in intellectuals like Adolfo Omodeo, who described the war as a kind of education in patriotic self-sacrifice to a liberal Italian state. And we see it in Rupert Brooke’s poetry, which not only glorified war but, more important, also outsold all the War Poets put together.

Although there is some dispute among historians about the turning point in the struggle between competing memories of the war, the conventional wisdom has it that the late 1920s and early 1930s proved decisive. The philosopher Benedetto Croce wrote that “all true history is contemporary history” in that the perspectives of historians are very much shaped by their current circumstances. The same is true of memory. The economic volatility of the 1920s became the depression of the 1930s. At the same time, the diplomatic system erected by the Paris settlement of 1919 began to disintegrate. It seemed to many in retrospect that the war had proved itself futile in that it had failed to make Europe a better place. In the late 1920s and early 1930s, then, a spate of novels and autobiographies about the conflict (the “war book boom”) suddenly appeared throughout Europe. Perhaps the most influential and best-selling was Erich Maria Remarque’s All Quiet on the Western Front (1929). Just as important if not more so for their impact, a wave of war films, all taking advantage of brand new sound technology, came out as well. The most prominent include All Quiet on the Western Front (1930), Journey’s End (1930), Westfront 1918 (1930), and Wooden Crosses (1932). The important thing to remember about these works of literature and cinema is that they did not necessarily capture what people thought during the war but what they thought about it ten years later—and that’s a very different thing.

Well might critics—and there were many at the time—complain that by focusing exclusively on the pain and terror of private soldiers, by exaggerating certain elements of the war experience, and by stressing the disillusionment of the rank and file, these works lost track of the big picture which gave the war meaning. In 1930, Cyril Falls, the military historian, complained that “to pretend that no good came out of the War is frankly an absurdity. The fruits of victory may taste to us as bitter as the fruits of defeat to our late enemies. But how would the fruits of defeat have tasted to us and our Allies? Let any man seriously consider what would have been the situation with a Hohenzollern Germany and a Habsburg Austria dominant in Europe . . . and he will find it hard to deny that some good ‘came of it at last’.”

Of course, the outbreak of World War II seemed to confirm the futility of World War I. Yet it would be mistaken to attribute the survival of our dominant memory of the war to events alone. A tradition of representation has gained momentum in the contemporary era. Each literary or cinematic contribution simultaneously drew sustenance from that tradition while confirming it. Perhaps the two most important works in the English-speaking world that have perpetuated this memory are Alan Clark’s The Donkeys (1961) (as well as the musical [1963] and film [1969] inspired by the book—Oh! What a Lovely War) and Paul Fussell’s The Great War and Modern Memory (1975). As a measure of that tradition, I encourage you to watch the way World War I combat has been treated in film. From All Quiet on the Western Front and Wooden Crosses onward, it is incredibly consistent. Check out Paths of Glory (1957), Gallipoli (1981), Legends of the Fall (1994), A Very Long Engagement (2004), The Trench (1999), Joyeux Noel (2005), Passchendaele (2008), and War Horse (2011). We could go on and on. The themes and tropes remain the same. In every case, soldiers are victims, killed in utterly impossible and fruitless assaults for no good reason.

As just one indication of the extent to which this memory of the war as a futile act of sacrifice has triumphed, we can point to What Have We Learned, Charlie Brown? (1983), an animated Peanuts special that Charles Schulz produced in anticipation of the 40th anniversary of D-Day. Memory, as everyone will tell you, involves forgetting—meaning that we remember some things at the expense of others. In this particular case, Schulz had Linus recite “In Flanders Fields”—but left out the third, patriotic stanza that encourages the reader “to take up our quarrel with the foe.” And then there is Rowan Atkinson’s Blackadder Goes Forth, the fourth season of the Blackadder sitcom series (1983-1989). Here the futility of the war and the stupidity of generals are absolutely central to the plot. When historians seek to criticize what they see as the caricature of the conflict that memory has produced, they often refer to the “Blackadder version” of the war.

And what of historians, that is, the people who make history for a living? The historiography of World War I has responded to a variety of stimuli over the decades, including political events, the release of previously unavailable documents, and different kinds of readings. And yet, except for the intervention of certain outsiders (for example, Niall Ferguson and his The Pity of War [1998]), historiographical debate has been confined to fairly familiar ground. That is not to say that historians of World War I are parochial; in fact, in the last twenty years, they have done an excellent job of delineating the global connections that truly made the conflict a world war. Still, debate revolves around questions that would have sounded familiar to scholars decades ago. Who should assume responsibility for the war’s outbreak? How and how well was the war conducted? And what were the war’s most important consequences and legacies?

How have these questions been answered?

Military historians have stressed the extent to which new weapons and techniques eventually formed the basis for modern combined arms tactics that in turn gave armies the capacity to launch assaults that could disrupt the enemy on the operational level, even if he employed a defense in depth. Learning how to deal with mass armies and new technology was very much a “two steps forward, one step back” process, but the armies of 1918 bore very little resemblance to the ones that went to war in 1914: they had far more firepower (and laid it down far more accurately), they deployed many more specialized troops, they used more flexible tactics, and their command, control, and communication were far more sophisticated.

Unlike memory, which has compared the origins of World War I to a senseless, accidental bar fight, diplomatic historians see something much more complex but comprehensible. If Europe’s leaders made mistakes and misjudgments, they often acted from entirely understandable motives, and the diplomacy of the period reflected their will. For that reason, the great majority of scholars agree that certain states (Germany, Austria-Hungary, and, to a lesser extent, Russia), as measured by their intentions and actions, bear much more responsibility for starting the war than others (such as France and especially Britain). Hand in hand with this judgment is the belief that World War I mattered, and not merely because of its unintended consequences, which included, at the very least, the destruction of four empires. It arrested, if only for a short time, a deliberate German bid for the domination of Europe—a domination that would not have been particularly pleasant.

The foregoing seems to indicate that the findings of historians are somewhat less judgmental than those of memory. That is not because historians, in general, are any less judgmental than anybody else; they can and should judge. Yet historical judgment emerges from a discipline that encourages careful study and an empathetic spirit. This approach often culminates in measured verdicts. For all of the similarities between the two, it is these specific qualities that set history apart from memory and ensure that what we learn in school is different from what Papa says.

Woo-hoo! Time to Celebrate the Bicentennial of the Congress of Vienna!

Congress of Vienna

A peace that ended the greatest war that Europe had ever seen? Check. A settlement that imposed large reparations on the vanquished and deprived them of territory? Check. Talks in which the victors sought to erect a diplomatic system as well as a forum for discussion that would contain the defeated and preserve the peace? Check. With all the recent talk about commemorating the centennial of World War I’s outbreak, we must be referring to the Paris peace settlement of 1919, right? Wrong! Instead, this month, we celebrate the bicentennial of the Congress of Vienna which convened in September 1814. History Today has published an interesting article about this momentous event:


To be precise, the Congress of Vienna, which “met” from September 1814 to June 1815, was not exactly what we’d call a congress. Nor was it a conference in which all the participants met at once in one location. Rather, it was a series of meetings between various diplomats representing the great powers of Europe—Britain, Austria, Prussia, Russia, and later France—who sought to pick up the pieces after Napoleon’s political demise (Napoleon first abdicated in April 1814, went into exile, returned, lost at Waterloo, abdicated again, and went into exile for good in July 1815). After almost 25 years of war, the great powers sought to erect a stable diplomatic system that would contain France and ensure that Europe could enjoy a sustainable peace. Recognizing the extent to which domestic and foreign policy were related, the great powers also hoped to promote peace by quelling the revolutionary forces associated with France, such as liberalism and nationalism.

Gordon Craig (1913-2005), perhaps the most important English-speaking historian of Germany of his generation, claimed the Vienna settlement was based on three principles: “compensation for the victors, legitimacy, and balance of power.” While he conceded in the next sentence that this description was perhaps a little crude, it is easy to remember.

By “compensation,” Craig did not mean money. He meant territorial compensation. As the map of Europe was redrawn, each great power believed that nobody should receive more territory than anybody else. If Austria lost territory in one region, it should receive compensation for that loss elsewhere. If Russia received additional territory, so should everybody else. It was all associated with a concept of balance. Napoleon had thrown this idea out the window. He compensated himself with territory and redrew the map of Europe to suit his own needs without giving anything to anybody else. Now it was time to return to a different principle.

By “legitimacy,” Craig meant a particular kind of legitimacy—the right to rule. In the contemporary age, we might say a government possesses legitimacy if it has the support of its people. The idea that a government relied on the consent of the nation to govern, however, was considered a revolutionary idea in 1815 Europe. Rather, legitimacy referred to pre-revolutionary rights and privileges. A monarch had the right to rule a particular territory according to tradition and precedent. This was what legitimacy was all about. Often, however, the great powers neglected to observe this concept, either in the pursuit of compensation or the balance of power.

By “balance of power,” Craig meant a state of affairs in which the great powers operated in a kind of equilibrium that prevented any one of them from becoming too powerful. In 1815, the main state that everybody feared was France (it had singlehandedly made trouble for the previous quarter century), and the territorial settlement was created with an eye toward containing that country. Yet all the great powers were suspicious of one another, and so each sought to check all the others.

The Congress of Vienna has often received a good press from historians, and it has frequently been compared favorably with the Paris settlement of 1919. The few successful revolutions (Greece and Belgium) that occurred while the Congress system remained in place were more or less sanctioned by the great powers. Under this system, the great powers also managed to see off the Revolutions of 1848. The arrangements of 1815 largely survived until the mid-1850s, when the Crimean War (1853-1856) between France, Britain, the Ottoman Empire, and Sardinia, on the one side, and Russia, on the other, led to a series of unforeseen events that completely disrupted the machinery erected in Vienna forty years before. The Crimean War led to the estrangement of Austria and Russia, the expansion of France, the unification of Italy, and eventually the creation of Germany.

Saying that the Congress system collapsed because X did Y, and A did B, of course, describes how it fell apart but does not explain why. Ghervas’s article points to the failure of the great powers to include the Ottoman Empire in their deliberations. There is some merit to this argument, since the retreat of the Ottomans from Europe (as well as their weakness elsewhere) produced a power vacuum that led to much conflict among the great powers. To dwell on the weaknesses of the system, however, is to neglect the necessity of good will in perpetuating it. To be sure, a corrupt system will pervert the best of wills, but perverted wills can corrupt even the best of systems. By the 1850s, some powers had lost interest in upholding the system (Britain), proved incapable of defending it (Austria), or deliberately sought to overturn it (France and Prussia). Pointing out this problem is, perhaps, another way of saying that the interests of the great powers were bound to diverge. The memory of the great war that had bound them together receded farther and farther into the past, and national interests returned to the forefront. Moreover, changes in the relative strength of the great powers, in their character, and in the overall circumstances within which they operated all conspired to bring the Congress system down.

So it has ever been with the demise of diplomatic systems: they are the victims of these kinds of transformations. Yet in light of this fact, the destruction of the Congress system seems especially piquant, since that system sought above all to arrest change and revolution. By the middle of the nineteenth century, the most intelligent conservatives understood the futility of attempting to hold back change in Europe. Among them, Otto von Bismarck, Minister President of Prussia and later the first Chancellor of a united Germany, came to see that “for things to remain the same, everything must change” (to quote Giuseppe Tomasi di Lampedusa’s The Leopard, a work of historical fiction dealing with Italian unification and one of the greatest Italian novels of the 20th century). This “white revolutionary,” as Lothar Gall described Bismarck (white being the color most commonly associated with conservatism, a white revolutionary would be a conservative revolutionary, something of a paradox), participated in the destruction of the Congress system for the sake of creating a unified German national state that could better preserve conservatism at home. In so doing, Bismarck created the so-called “German problem”: an extremely powerful state that perpetually threatened to dominate the European continent. The formation of this state is the bridge between the destruction of the Congress system and our own time.

“Inventories of War”: From the Battle of Hastings to Counterinsurgency in Helmand Province

Somme Kit 1916

As part of its commemoration of World War I, the Daily Telegraph’s web site posted the following photo essay, which shows graphically how the “typical” English soldier’s kit evolved from that of an Anglo-Saxon housecarl who fought at the Battle of Hastings in 1066 to that of a present-day sapper in the Royal Engineers stationed in Helmand province, Afghanistan.


Such a striking photo essay provides an opportunity for thinking about history, especially the extent to which we resemble our ancestors.

It is hard to make generalizations about soldiers or the trajectory of history based on these kits because they are not exactly comparable. An archer who fought at Agincourt (1415) was not the equivalent of a Yorkist man-at-arms at Bosworth (1485). And the medieval knights who fought at the siege of Jerusalem in 1244 undoubtedly enjoyed much higher social status than the fusiliers who fought under the Duke of Marlborough at Malplaquet (1709). Still, some of the continuities are striking, and Thom Atkinson, who put this collection together, repeatedly points out the similarities between soldiers from different periods. For example, he writes, “While the First World War was the first modern war, as the Somme kit illustrates, it was also primitive. Along with his gas mask a private would be issued with a spiked ‘trench club’ – almost identical to medieval weapons.” In the next frame, he writes, “The Anglo-Saxon warrior at Hastings is perhaps not so very different from the British ‘Tommy’ in the trenches [during World War I].” The caption for the Yorkist man-at-arms at Bosworth states, “There’s a spoon in every picture. . . . I think that’s wonderful. The requirement of food, and the experience of eating, hasn’t changed in 1,000 years. It’s the same with warmth, water, protection, entertainment.” Later, while commenting on the private’s kit at Malplaquet, he writes, “Watching everything unfold, I begin to feel that we really are the same creatures with the same fundamental needs.” Moreover, it’s not merely what Atkinson writes but how he writes it.
He implicitly compares the Yorkist man-at-arms to the Royal Marines who helped win the 1982 Falklands War against Argentina by stating, “From the cumbersome armour worn by a Yorkist man-at-arms in 1485 to the packs yomped into Port Stanley on the backs of Royal Marines five centuries later, the literal burden of a soldier’s endeavour is on view,” as if to say there is a kind of correspondence between one and the other. Even when Atkinson discusses differences, they morph into similarities. While writing about the kit of the trained caliverman who prepared to repulse the Spanish Armada in 1588, Atkinson claims, “The similarities between the kits are as startling as the differences. Notepads become iPads, 18th-century bowls mirror modern mess tins; games such as chess or cards appear regularly.” In other words, almost every item today has some sort of medieval or early modern antecedent. And, indeed, the various kits are presented as part of a single evolution. The “bolt-action Lee-Enfield” rifle that was the standard weapon of the infantryman in 1916 becomes the precursor of the “laser-sighted light assault carbine” of the sapper in Afghanistan in 2014. Likewise, “the pocket watch of 1916 is today a waterproof digital wristwatch.”

In this context, it seems like a good idea to refer to the thoughts with which John Lynn, one of America’s leading military historians, opens Battle: A History of Combat and Culture (2003). Lynn pays tribute to the ways in which warfare has remained constant over time. The soldier, he argues, has always been the perpetrator and victim of “havoc and suffering.” Fear, discomfort, danger, and death have ever been the lot of the soldier, who is required to display “endurance, self-sacrifice, and heroism.” For those reasons, we have a tendency, Lynn claims, to see this “universal soldier” as an “unchanging agent of pillage, destruction, and death,” an “eternal, faceless killer.” When we look at soldiers through the ages from this perspective, we convince ourselves that “only weapons and tactics have changed, not the men who have wielded them.” This is precisely the perspective that Atkinson has chosen: his survey of kit suggests that all soldiers are more or less the same, regardless of the era in which they lived; only the weapons are different.

Yet, as Lynn argues, the soldier is not universal in either time or space. Every soldier is the product of a distinct culture that believed different things and tried to live up to different values. Lynn stresses that “one culture’s bravery is another’s bravado and one’s mercy is another’s meekness.” Just because the Anglo-Saxon housecarl who fought under King Harold at Hastings had a spoon doesn’t mean he was at all like the lance corporal who was dropped over Arnhem with the 1st Parachute Brigade in 1944. While both men may have subscribed to a code of honor, those codes would have been extremely different. Each, of course, was generated by a society that had very little in common with the other. That they were both soldiers makes them part of a guild of sorts, but it is certainly not enough to make them the same.

This matter points to a larger issue with which all historians struggle constantly. There is a fundamental consistency in human nature. Across the ages, we have worked, we have loved, and we have played. And when we study the work, love, and play of people from the past, we see something of ourselves in our forebears. When we see that the Anglo-Saxon housecarl had a spoon, we delight in the discovery because we, too, have spoons, and we feel a kind of kinship. Some years ago, the blog master went to the Museum of Science in Boston to see an exhibit of Roman artifacts recovered from Pompeii. He was stunned at how modern-looking Roman plumbing was, particularly the spigots, and he felt a closeness to the Romans that he had never sensed before. Yet we cannot make the mistake of thinking that housecarls and Roman plumbers were just like us. Their work, love, and play (which oftentimes were very different from ours) did not signify the same things to them as our work, love, and play signify to us.

The job of the historian, then, is almost impossible. It does not consist of pointing out how earlier peoples were like us. Rather, the historian seeks to translate these earlier peoples for contemporary readers and students. The impossibility of the task lies in the act of translation. The Anglo-Saxon had a spoon much like ours, but the food the Anglo-Saxon ate, as well as the way in which eating fit into his peculiar culture and society, is almost incomprehensible to us. The historian must somehow bridge the gap between this incomprehensible world and ours using our language and our ideas, tools that are not always well suited to the job. In other words, scholars are in the business of rendering the alien familiar, and that is a hard row to hoe.