Historiography

History in the Age of Trump: Immigration (Part I)


Note: The Donald Trump presidency has already caused historians and other observers to look to the past for parallels and guidance. Some commentators have emphasized that Trump’s policies bear striking similarity to earlier periods in American and European history. Others have emphasized that Trump’s administration has broken with longstanding traditions in American political life. This series will attempt to place Trump’s presidency in a historical perspective in a way that contributes both to our understanding of past events and current affairs.

Note: Links in this essay will open PDF copies of New York Times articles from the 1920s. Links should open in a new window.

The images are striking: immigrants stuck in limbo, having arrived in New York but detained and denied entry due to new, stricter immigration regulations. Those affected include men who risked their lives fighting for the United States and who now find that they are unwelcome in the country they defended. In one case, a woman from the Middle East arrives in the U.S. to be reunited with her husband, a religious cleric who had come to the country legally more than a year earlier. The woman and their young daughter are taken into custody and then ordered to return home, prompting a frantic legal battle over their future.

Holding area at Ellis Island.

These stories do not describe events that took place in the past week—they describe conditions in 1924, just after Congress passed legislation that dramatically reduced the number of immigrants eligible for entry into the United States. The new law created bottlenecks at American ports, including Ellis Island. Critics of the law were dismayed to note that soldiers who had fought in World War I but later left the country found themselves stranded, uncertain of when they could return. Other opponents complained that the law unfairly targeted certain ethnic groups. Italians, who had made up a large percentage of immigrants to the United States since the early 1900s, saw their numbers slow to a trickle. Religious minorities also suffered under the new law; the family mentioned in the opening paragraph were Jews from Palestine.

On this blog, we try not to overstate the link between past and present. Immigration restrictions in 2017 are not the same as in 1924; America now is very different from America then. Nevertheless, President Trump’s executive order has drawn attention to America’s historic position as a beacon for immigrants, along with its equally long history of trying to exclude “undesirables.” Trump’s critics are right: his executive order is un-American, a betrayal of our core principles. At the same time, it is also quintessentially American, a modern manifestation of the nativist tendencies that have always existed in this country.

Part II of this post explores the fears that immigrants in the 1920s were violent radicals who threatened the American way of life. It will also consider how that history relates to current attitudes, and provide another illustration of how past events can be misconstrued in a modern context.

Martin Luther King, Jr., the Comics, and Biography

This past Martin Luther King Day, in the comic strip Curtis, the title character asks at the dinner table—“Makes me wonder how history would have played out if Dr. King was never born, or never assassinated?” His family’s response is dumbstruck silence. Many historians might have been hard pressed to respond cogently to the fictional eleven-year-old’s question as well.

We have long debated whether great people shape history through their actions or whether broader, impersonal forces shape historical events and their participants. Martin Luther King, for example, was not the only civil rights leader, and other leaders would undoubtedly have pushed the civil rights agenda forward in the 1950s and 1960s without him. Yet, through his soaring rhetoric, King put his indelible mark on the movement. The story of King’s life has consequently become, for many Americans, the story of the Civil Rights movement in the mid-twentieth century.

Though many academic historians have shied away from biography recently, the lives of great men and women are still the primary way that most people learn about the past. People like biography because it enables readers to form mental pictures of the events or actions described and thereby, in a sense, to walk in another’s shoes. Biography essentially makes history more accessible and real for readers than jargon-laden academic texts do. In the process, biography provides a good introduction to the politics, economics, social hierarchies, and morality of various times and places, an introduction that facilitates more mature historical analysis. Biography effectively opens the door to greater historical awareness.

Biography does not need to be just a parade of great men and women either. Many projects are underway today to write biographies or biographical sketches of regular people. Such projects open the door to innovative pedagogical collaboration between teachers, students, and public history organizations. For instance, Saint Anselm students in Professor Salerno’s American Women’s History (HI 359) recently prepared biographical sketches for a national database on militant suffragists arrested in demonstrations during World War I.

Renewed interest in biography might not quell historians’ ambivalence toward the genre or put to rest long-standing debates regarding causation (that is, the relative weight of individual action vs. impersonal forces). Still, a greater appreciation of biography by professional historians will allow us to participate more fully in public debates—even with fictional characters in the funnies.

Curtis’s creator, Ray Billingsley, of course, was not really interested in historians’ debates when he penned his strip. Rather, he rightly wanted to highlight how different American history would have been without Martin Luther King—or how the world would have changed had he lived longer.

Gladwell’s Revisionist History is Neither Revisionist Nor History

It is difficult to describe to people who have never heard of Malcolm Gladwell what he does for a living. He is a journalist, author, and public speaker who writes about the kinds of things calculated to appeal to the movers and shakers of the new tech world: tipping points, intuitive thinking, innovation, the secrets to success, and so on. A staff writer at The New Yorker, he has produced a number of influential books, including, most recently, David and Goliath: Underdogs, Misfits, and the Art of Battling Giants (2013). Gladwell has recently launched a podcast entitled “Revisionist History” which studies the same types of questions in the same Gladwellian way.

http://revisionisthistory.com/

An essay by Allison Miller, “History and You: Malcolm Gladwell’s ‘Revisionist History’ Podcast Cleanses History of the Past,” which recently appeared in The Baffler, has taken Gladwell to task for masquerading as a historian:

http://thebaffler.com/blog/gladwell-podcast-miller

Miller points out that crafting history is an act of empathy. As she puts it, “Analyzing the past requires you to see a particular set of circumstances from someone else’s point of view—knowing full well that the gulf between then and now will prevent you from truly understanding them and what they faced.” To analyze people from the past, though, Gladwell relentlessly employs the latest concepts garnered from the social sciences. These concepts generally do not recognize that people from the past were fundamentally different from us. Miller chooses the example of Episode 1, “The Lady Vanishes,” to explain the faults of Gladwell’s modus operandi. A concept that he borrows from the social sciences (in this case, “moral licensing,” which comes from social psychology), she argues, is an inappropriate tool for explaining why the Victorian artist Elizabeth Thompson did not obtain admittance to the Royal Academy despite her widely acclaimed painting, The Roll Call (1874).

Having listened to a number of episodes of “Revisionist History,” One Thing after Another couldn’t agree more with Miller. Gladwell’s podcast disregards one of the most fundamental truths established by the discipline of history: everything—including people’s behaviors and world views—changes over time. Miller neglects to mention, however, another important way in which “Revisionist History” fails to live up to its title. The way in which Gladwell applies his various concepts from the social sciences indicates that he does not understand what the word “revisionist” signifies when associated with history. Over time, for a wide variety of reasons, historians constantly revise their understandings of the past: they employ different methods to interrogate it, they use different sources, they bring different world views to the task, or they use their imagination in different ways. History is an ongoing conversation in which many interpretations are provisional; no matter how well they explain the past, they are usually superseded by subsequent understandings. This process of revision is what revisionist history is all about. For Gladwell, though, revisionist history appears to consist simply of revisiting certain past incidents and solving their mysteries definitively. There is no sense that his findings are part of a larger exchange or that they are in any way tentative. Gladwell provides clarity and closure. One obtains the impression that for Gladwell, the past is merely a scene where he can demonstrate the utility of his latest interesting theory in cracking various paradoxes.

This attitude on Gladwell’s part may be the product of sloppy thinking (surprising in someone who majored in history as an undergraduate). One cannot help noting, however, that Gladwell himself benefits from peddling this point of view. By using the social sciences to solve puzzles from the past, Gladwell dramatically expands their jurisdiction and gives the impression that they produce immutable, universal laws that transcend time and space. Who should benefit from this impression but the popular purveyor of social scientific explanations, Gladwell himself?

One Thing after Another is not merely attempting to defend history’s turf for turf’s sake. The questions Miller raises about how to do history have important implications. As Miller points out, those who wrestle with the past can develop the judgment to provide a variety of feasible alternatives for the future. Gladwell’s vision is very attractive; he provides easily grasped certainties. History is less attractive; it asks us to wrestle with an alien past for the sake of sharpening our judgment. At the end of the day, as we confront the future, Gladwell supplies answers. History, on the other hand, compels us to struggle, but in so doing, it gives us the opportunity to develop wisdom.

Dubrulle and Masur Lead History 112 on an Adventure in Goffstown Microhistory

2016 Spring History's Mysteries Selfie

In January 1821, the widowed Anna Ayer of Goffstown claimed before the selectmen of the town that Daniel Davis Farmer (who was married and had four children of his own) was the father of her unborn baby. Born in Manchester, Farmer had recently owned a farm in Goffstown but had returned to his home town. He furiously contested Ayer’s charge. If she could somehow prove that Farmer was the father, however, he, not the town of Goffstown, would be responsible for the support of her child. On April 4, 1821, he bought some rum at Colonel Riddle’s store in ‘Squog Village (the area where the Piscataquog drains into the Merrimack River—then a part of Bedford), walked the five miles to what is now East Dunbarton Road (currently the northeastern part of Goffstown), and visited Ayer. What transpired next is unclear, but it appears Farmer hit Ayer on the side of the head with a shovel and then beat her with a stick. He also beat Ayer’s daughter before attempting to burn down the Ayer house. The widow Ayer was fatally wounded in the attack and died some eight days later. Ayer’s daughter, also named Anna, was injured but survived. A little over a week after the widow Ayer’s death, Farmer was indicted for first-degree murder by a grand jury sitting in Hopkinton. On October 10, he was put on trial in Amherst, then the seat of Hillsborough County, before the Superior Court of Judicature (the ancestor of New Hampshire’s supreme court). The trial began at 8 AM and concluded at 10 PM that day. After deliberating for one hour, the jury returned a verdict of guilty, and the Attorney General sought the death penalty. The next morning, Farmer was brought before the bar, and Levi Woodbury, an associate justice of the court, pronounced a sentence of death. Farmer was kept in the county jail in Amherst until January 3, 1822, when, on a bitterly cold day, he was hanged near the town common before a crowd of 10,000 people (at a time when the population of Goffstown was only about 2,000).
Farmer’s execution was one of only three carried out in all of New England in 1822. Farmer also had the bad luck to be one of only five people executed in New Hampshire in the first half of the 19th century.

The Ayer murder serves as the topic for the research project in History 112: History’s Mysteries, team-taught by Professors Matt Masur and Hugh Dubrulle at Saint Anselm College. The reading list in the course consists of microhistories, almost all of which revolve around some sort of mystery (often including a court case). It is in this fashion that Masur and Dubrulle hope to initiate students in the practice of history as a discipline.

What is microhistory, you may ask? Writing in The Journal of American History, Jill Lepore argues that, like biographers, microhistorians often study an individual’s life. Yet “microhistorians do have nonbiographical goals in mind: even when they study a single person’s life, they are keen to evoke a period, a mentalité, a problem.” To put it more thoroughly in Lepore’s words:

If biography is largely founded on a belief in the singularity and significance of an individual’s life and his contribution to history, microhistory is founded upon almost the opposite assumption: however singular a person’s life may be, the value of examining it lies not in its uniqueness, but in its exemplariness, in how the individual’s life serves as an allegory for broader issues affecting the culture as a whole.

In History 112: History’s Mysteries, then, students will read books that are classics not merely in microhistory but history as well, including Jonathan Spence’s The Question of Hu and Leonard Dinnerstein’s The Leo Frank Case. By reading about people like an 18th-century Chinese copyist who accompanied a Jesuit missionary back to France and an early 20th-century Jewish-American factory superintendent who was lynched by a Georgia mob, students can learn about past periods. Not only that, they can also learn how historians use documentary evidence to build arguments about these periods.

Having read a number of prominent works in microhistory, students will then try their hand at building the foundation for a microhistory of their own: Daniel Davis Farmer’s murder of the widow Anna Ayer. As students sift through some of the most important primary source material associated with the case, they will map out areas of secondary research that will help illuminate the world that both Farmer and Ayer inhabited. And thus they will lay the groundwork upon which future students enrolled in History 112 will build. Stay tuned for further updates on this blog as students in History 112 find out more and more about the mysteries surrounding the bloody events that transpired in Goffstown in the spring of 1821.

Assassin’s Creed, Robespierre, and the French Revolution

Assassin's Creed Unity

The following post is dedicated to Professor Dubrulle’s History 226: Modern Europe class. They know why.


Although not a gamer, One Thing after Another knows that the latest edition of the Assassin’s Creed franchise, Assassin’s Creed: Unity, just came out on November 11. Set during the French Revolution, the game has already attracted much attention and some controversy, as this article from The Atlantic indicates:

http://www.theatlantic.com/entertainment/archive/2014/11/let-them-play-assassins-creed/382818/

More than anything, it has been Jean-Luc Mélenchon’s comments that have drawn attention to the game. A former Socialist who founded the Left Party in 2008 (and later became a major player in the Left Front, a coalition consisting of the Communist Party, the Left Party, and the Unitarian Left), Mélenchon ran in the French presidential election of 2012, finishing in fourth place. Mélenchon has complained that the game portrays the “treacherous” Louis XVI and his wife, the “cretin” Marie Antoinette, as innocent victims of the revolution. Meanwhile, it depicts the French people as “bloodthirsty savages.” Furthermore, according to Mélenchon, Maximilien Robespierre, who headed the Committee of Public Safety, the National Convention’s executive body during the revolution’s most radical phase, the Terror (1793-1794), comes out looking like a “monster.” For more details about Mélenchon’s comments, take a look at the following article from the Daily Telegraph:

http://www.telegraph.co.uk/news/worldnews/europe/france/11231217/Assassins-Creed-Unity-makes-travesty-of-the-French-Revolution.html

The Atlantic is spot on with a number of observations. To start with, the controversy does raise the much-debated question of what responsibility the media bears for representing past events “truthfully.” Le Monde produced a list of seven ways Assassin’s Creed: Unity gets history wrong:

http://www.lemonde.fr/pixels/article/2014/11/16/assassin-s-creed-unity-ou-le-petit-jeu-des-7-erreurs-historiques_4524347_4408996.html

Ubisoft, the maker of the game, has mounted a variety of defenses, none of which are mutually exclusive. First, it has argued that Assassin’s Creed: Unity is a game, not a history lesson. Second, Ubisoft has asserted that it did hire historical consultants to create an accurate period feel. Third, and most interestingly, it has claimed that it compromised historical accuracy in the game for the sake of enhancing the experience of gamers. For example, Ubisoft made conscious decisions to have the tricolor flag appear in 1789 and “La Marseillaise” sung in 1791—even though these artifacts of the revolution did not appear until later—because gamers would have found it strange to see historically accurate flags and not hear “La Marseillaise.” For more information on what Ubisoft was thinking, see Le Monde’s interview with Antoine Vimal du Monteil, one of the game’s designers:

http://www.lemonde.fr/jeux-video/article/2014/11/13/assassin-s-creed-unity-est-un-jeu-video-grand-public-pas-une-lecon-d-histoire_4523111_1616924.html

Of course, none of these defenses necessarily counters the argument that contemporary media ought to get the past right lest they warp perceptions of great events. Certainly, Mélenchon and Alexis Corbière, the secretary general of the Left Front, suggest that there is, perhaps, a sinister political plot behind Ubisoft’s portrayal of the revolution.

One need not subscribe to conspiracy theories to see that various versions of history are linked to different political positions. As The Atlantic points out, Assassin’s Creed: Unity has revived a 200-year-old political debate over whether the French Revolution, particularly its radical phase, was a good or a bad thing. Anyone who has a passing familiarity with the revolution knows it sparked a great deal of resistance within France itself among those who sought to preserve different elements of the old regime. It also led to the emergence of modern conservatism: Edmund Burke’s Reflections on the Revolution in France (1790), which was extremely critical of the revolutionaries, is widely considered one of the founding texts of this political movement. The Left, and this would include people like Mélenchon, sees the revolution in a very different light. From this perspective, the revolutionaries, particularly Jacobins like Robespierre, appear as heroes and models of virtue. This political debate between Right and Left has found its way into the historiography of the French Revolution. It should come as no surprise (and One Thing after Another generalizes here) that French historians are somewhat more forgiving of the revolution’s excesses than the “Anglo-Saxons”—that is, English-speaking scholars from the United States and Britain. After all, Americans have tended to be somewhat forgiving of their Founding Fathers even if these men had flaws of their own.

This blog must admit that it has a small soft spot for Robespierre: One Thing after Another’s great-great-great-grandfather was born in the same parish of Arras within a year of Robespierre’s birth. One Thing after Another, however, is not blind to Robespierre’s faults. Moreover, One Thing after Another suspects that contemporary left-wing French politicians have fallen into a tradition of invoking “the Incorruptible” for the sake of burnishing their socialist credentials. This invocation can make one sound edgily revolutionary without demanding too much in the way of concrete action. In this context, The Atlantic’s quote from Milan Kundera’s The Unbearable Lightness of Being is quite apposite:

If the French Revolution were to recur eternally, French historians would be less proud of Robespierre. But because they deal with something that will not return, the bloody years of the Revolution have turned into mere words, theories, and discussions, have become lighter than feathers, frightening no-one. There is an infinite difference between a Robespierre who occurs only once in history and Robespierre who eternally returns, chopping off French heads.

In other words, it is easy to refer to Robespierre as a hero because such a reference is ultimately meaningless; he died over 220 years ago, and his days will not come back. In all likelihood, politicians like Mélenchon would shrink from the acts that Robespierre committed. Robespierre invested himself in the Terror; Mélenchon is whining about a video game.

Descent into Hell: Japanese Civilians and “The Bomb”

Hiroshima Bombing Victim

“Would the Japanese have surrendered without Hiroshima?” is the question that opens Jonathan Mirsky’s review of Descent into Hell: Civilian Memories of the Battle of Okinawa.

http://www.nybooks.com/blogs/nyrblog/2014/oct/23/descent-hell/

In some ways, such an opening is unfair to the book. First-hand accounts of Japanese civilian life during World War II have not appeared in English with much frequency, let alone narratives about Japanese civilians literally caught in the crossfire of combat on land. For this reason alone, Descent into Hell should attract the attention of anybody interested in the fighting that took place in the Pacific theater during this conflict. It does not, however, necessarily answer the question of whether or not the Japanese would have surrendered without the atomic bomb.

The connection between Descent into Hell and Hiroshima amounts to this: Mirsky claims that the book suggests Japanese civilians were devoted to the emperor and unwilling to surrender, so only the atomic bomb could have shaken their will to continue the fight. This argument does not seem to recognize the forces that truly led the Japanese government toward surrender in 1945.

In the last year of the war, the Japanese leadership clearly understood that the United States had obtained the upper hand in the Pacific. Japanese objectives had shrunk since the heady days of late 1941. As American forces began to close in on the home islands, the Japanese hoped to preserve their independence and avoid unconditional surrender by bringing the Americans to the negotiating table. The only way to do that was by inflicting heavy losses on the United States and making the war as terrible as possible. The Japanese no longer had a navy to speak of, and they had few trained pilots at their disposal, but they believed their willingness to take and inflict casualties would allow them eventually to demoralize the Americans. In other words, the Japanese leadership did not seem overly concerned about their own losses. And certainly, one part of this calculation held true: in the Pacific, American losses in 1945 rose dramatically. Once the United States became involved in large-scale ground combat in Normandy (June 1944), its casualties began averaging about 16,000 to 19,000 men per month. Large numbers of these casualties were suffered in the Pacific: Leyte (17,000), Luzon (31,000), Iwo Jima (20,000), and Okinawa (46,000). To put these losses in perspective, the first thirty days of the Normandy campaign, which was extremely hard fought by European standards, led to 42,000 casualties. (By way of comparison, it is interesting to note that American casualties in Iraq between 2003 and 2012 amounted to about 36,000.)

If the Americans were appalled by the obstinacy of Japanese resistance, they were still capable of applying enormous military pressure on the home islands. Their submarines continued to wipe out the Japanese merchant fleet, their planes had mined Japanese waters and brought the coastal trade to a halt, and their bombers had started to lay waste to Japanese cities. Among other things, Japan was running out of food; in October 1945, after the war was over, famine was averted only by massive American aid.

Yet the Japanese believed they had additional cards to play. The Soviet Union had remained neutral in the Pacific war, and the Japanese set great store by being able to use the Soviets as an intermediary in talks with the United States. There was even some hope that Japan could foment discord between the United States and the Soviet Union, whose alliance had always been somewhat awkward. Such thinking was delusional, but it kept the Japanese leadership hanging tough.

At Yalta (February 1945), the Soviets agreed to join the Pacific war within three months of the conflict ending in Europe. As German resistance collapsed, the Soviets feverishly prepared to launch an invasion of Japanese-occupied Manchuria. They kept these plans secret, of course, from the Japanese. The Americans, for their part, had tested their first atomic bomb by mid-July and were now keen to end the war before the Soviets became heavily involved in Asia. The problem was that the Japanese showed little sign of wishing to surrender on terms that the Americans were willing to accept. The Americans wrestled with what kind of terms they might be willing to concede to bring the Japanese to capitulate, but the lure of a quick, unconditional surrender that would not cost many American lives proved impossibly attractive. The Americans prepared for an invasion of the home islands, but they also deployed their atomic bombs.

The first atomic bomb hit Hiroshima on August 6, and a second bomb fell on Nagasaki on August 9. On the latter date, the Soviets invaded Manchuria. Which of these events convinced the Japanese government to surrender has been at the center of a lively debate. On one side, Tsuyoshi Hasegawa, author of Racing the Enemy (2005), has argued that the Soviet declaration of war played the preponderant part in the Japanese decision to capitulate. On the other, scholars like Richard Frank, who is perhaps best known for Downfall (1999), maintain the more traditional view that the atomic bombs brought the Pacific war to an end.

Of course, if one tends toward Hasegawa’s view, then Mirsky’s opening question is irrelevant because the atomic bombs did not really end the war. But even if one does not see eye-to-eye with Hasegawa, Mirsky’s question is still irrelevant. While they might disagree on what prompted Japan to throw in the towel, there is one thing on which all of these historians agree: the feelings of Japanese civilians did not enter into the matter. At the end of the day, it was the Japanese cabinet and emperor that made the decision. As Max Hastings has argued in several of his works about World War II, when it comes to waging armed conflict, authoritarian regimes have this great advantage over democracies: they can exert more coercion against their own people and need not engage in consultation. For that reason, they are capable of extracting more from their populations and suffering losses that more representative forms of government would never countenance. And so it was with the Japanese in August 1945. Read Descent into Hell, and read it for any number of reasons. But it won’t tell you if the atomic bombs were necessary.

How Many Native Americans Were There before Columbus, and Why Should We Care?

The manner of their attire.

The online edition of The Atlantic recently republished the following article that first appeared in 2002:

http://www.theatlantic.com/magazine/archive/2002/03/1491/302445/

The Atlantic wants us to take advantage of Columbus Day to reflect on the historiographical debate concerning the pre-Columbian population of the Americas. We should take the monthly up on its offer, because this debate is, in many ways, exemplary: it expresses what is so interesting and significant about these kinds of controversies.

First, it indicates very clearly what historians do. Historians, of course, dispute what happened and how it happened. They almost always have to do so with limited evidence. Perhaps even more important, they also argue about the significance of what occurred. The debate over America’s pre-Columbian population revolves around a number of very big and difficult questions. Before Europeans exerted any influence on the Western hemisphere, how many people lived in the Americas? How did they live, and what was the nature of their influence on the land? What was their quality of life when compared with the Europeans who arrived on their shores? How did they die, and how did this death lead to important changes?

Second, this debate involves a variety of fields. Any historical debate of significance is, to some extent, interdisciplinary. That’s because big questions bleed into a variety of fields. This particular argument involves not only historians but also archeologists, anthropologists, geographers, epidemiologists, ethnographers, demographers, botanists, and ecologists. In some ways, the interdisciplinary nature of this discussion is a virtue in that a variety of fields can see a question in the round. At the same time, however, conversations between disciplines can become chaotic because the various fields employ different approaches and see the world from different perspectives. That kind of situation can lead to specialists talking past one another, making it difficult to reach agreement.

Third, as all the participants seem to recognize, this discussion informs a series of important contemporary arguments. The most significant ones have to do with, first, our relationships to each other and, second, our relationship to the land. To start with the first one, as Mann points out, “given the charged relations between white societies and native peoples, inquiry into Indian culture and history is inevitably contentious.” Controversies revolving around such issues, of course, touch upon the responsibilities of white societies to native peoples. While the contemporary discussion influences how each group sees the other, it also shapes the way each group understands itself. In other words, this debate has much to do with identity. This feature of the argument partially explains its ferocity. All historiographical debates of any worth involve a collision of world views, but this collision is fraught with emotion.

As for the second contemporary argument, the one revolving around our relationship to the land, this historiographical debate has great significance. For centuries, the myth of the noble savage led many to believe that native Americans were a part of nature rather than actors who molded that nature. The modern environmental movement took inspiration from that vision. If you don’t believe One Thing after Another, take a look at this famous public service announcement produced by Keep America Beautiful in 1971 (otherwise known as the “Crying Indian ad”). Here, the iconic native American represents nature and serves as the standard by which to criticize a dysfunctional and polluted modern world. But as experts in a variety of fields increasingly appear to believe, native Americans were a sort of “keystone species” who influenced the land to suit their own needs. In other words, as long as there have been humans in the Americas, there has been no such thing as “pristine” nature.

Because of their political consequences, these kinds of debates tend to smolder for decades, occasionally breaking out into open flame. For the same reason, the findings that issue from these arguments often reach the public in distorted form via the news media. It is for these reasons that it makes sense to read up on these controversies from the beginning. We should all take a look at the works of Dobyns, Crosby, and Cronon. Of course, who has time for that except historians?