Stonewall's Legacy and Kwame Anthony Appiah's Misuse of History

 

In a recent op-ed in the New York Times, “Stonewall and the Myth of Self-Deliverance” (June 22, 2019), Kwame Anthony Appiah, a distinguished philosopher at New York University, tries to debunk what he considers a myth: that the 1969 Stonewall Rebellion led to the takeoff of the Gay Rights movement in the United States. Instead, he credits “black-robed heterosexuals” (judges) who made important legal decisions and “mainstream politicians” who jumped on the gay rights bandwagon decades after Stonewall. In Appiah’s philosophy, long-term abusers who finally recant and the mainstream political figures who drew them into coalitions, not the people whose actions created crises and precipitated change, deserve credit for marginalized groups gaining basic human rights.

 

Curiously, much of Appiah’s argument rests on events in Great Britain rather than the United States, events that he does not completely or accurately report. Appiah’s hero is Leo Abse, a backbench Labour Member of Parliament who pushed a private member’s bill that Appiah claims “basically decriminalized homosexuality.” The Sexual Offences Act of 1967 revised the 1533 Buggery Act, which made male same-sex sexual activity punishable by death, and an 1885 law that, after the death penalty had been abolished, broadened the criminalization of sexual acts between men.

 

What Appiah ignores in his claims is the lead-up to passage of the 1967 act and the limits of the act itself. In the Cold War climate of the early 1950s, British police were actively enforcing laws prohibiting male homosexuality, partly out of concern with national security and partly in service of right-wing anti-communist crusades. A series of high-profile arrests and show trials, including trials of prominent members of the British elite, led to imprisonment and, in the case of Alan Turing, a scientist and World War II hero, to enforced chemical castration and suicide. Reaction to the anti-homosexual campaign led to the 1957 Wolfenden Report, which recommended the decriminalization of homosexuality. However, the recommendations were not implemented until 1967, under a Labour government. Abse was credited with the bill so that the Labour Party leadership could distance itself from accusations that it was pro-gay.

 

Appiah also misses that the Sexual Offences Act maintained general prohibitions against “buggery” and indecency between men, and only provided for a limited decriminalization when sexual relations took place in the privacy of someone’s home. It was still criminal if men had sexual relations in a hotel or if one of the parties was under the age of 21.

 

Appiah is correct that individual acts of defiance, by themselves, do not generate major institutional change. Rosa Parks was not a little old lady who refused to give up her seat on a bus because she was tired. Parks was part of an organized resistance to bus segregation led by the local NAACP and a coalition of Montgomery’s Black churches.

 

What Appiah misses in his dismissal of the Stonewall Rebellion’s historical importance is that symbols like Rosa Parks sitting down and Stonewall are crucial to social movements as they mobilize and move from the political margins to the center of civic discourse. In the 1850s, the Underground Railroad was an important symbol both for Northern abolitionists demanding an end to slavery, because it demonstrated enslaved Africans’ desire for freedom whatever the risk, and for Southern slaveholders demanding constitutional protection for their “property” rights. John Brown was executed for his role in the 1859 raid on the federal armory in Harpers Ferry. Two years later, United States troops marched into battle against the Confederacy singing that while Brown’s body is “mouldering in the grave, his soul’s marching on.”

 

Appiah’s argument is indicative of his larger philosophical outlook. As I read his philosophical work, Appiah tends toward Hegelian idealism, the belief that abstract ideas somehow play an independent and dominant role in shaping history. In his New York Times Magazine advice columns, he tends to recommend a hands-off or non-interventionist approach to personal moral dilemmas. Politically, Appiah places his hopes for change on persuasion and rejects intervention.

 

In his book The Honor Code: How Moral Revolutions Happen (Norton, 2010), Appiah credited British commitment to the idea of “honor” for the end of the trans-Atlantic slave trade and slavery in the British colonial empire. Missing from the book are any references to Toussaint Louverture and Sam Sharpe, leaders of slave rebellions in Haiti and Jamaica that shook the colonial world and were instrumental in ending slavery. Paradoxically, Appiah managed to attribute honor and idealism to 19th century British leaders, the same people who were busy colonizing India and Africa and marketing opium in China, while ignoring famine in Ireland and exploiting and impoverishing their own working class.

 

But Appiah’s argument doesn’t end with how history is written: it’s also about how history is taught. As a teacher and teacher educator, I was disturbed by Appiah’s dismissal of the way the “great moral crusade of the 19th century” is now taught in schools. He cites the “New York State Regents curriculum guide, which shapes public high school education in the state,” especially its reference to “people who took action to abolish slavery” that “names four individuals, all but one of them people of color.”

 

While Appiah is worried about what he considers a misleading high school curriculum, he is actually quoting from the 4th grade New York State Regents standard (4.5) for teaching about slavery. It focuses on the biographies of individuals with connections to New York and includes Samuel Cornish (New York City), Frederick Douglass (Rochester), and Harriet Tubman (Auburn). It also introduces students to William Lloyd Garrison, a white, non-New York abolitionist. The focus on biography may be simplistic, but it is fourth grade, after all, and the children are ten years old.

 

The high school standards, which like the 4th grade standards are advisory, not mandatory, are very different. They recommend that students “analyze slavery as a deeply established component of the colonial economic system and social structure, indentured servitude vs. slavery, the increased concentration of slaves in the South, and the development of slavery as a racial institution” (11.1); “explore the development of the Constitution, including the major debates and their resolutions, which included compromises over representation, taxation, and slavery” (11.2c); “investigate the development of the abolitionist movement, focusing on Nat Turner’s Rebellion, Sojourner Truth, William Lloyd Garrison (The Liberator), Frederick Douglass (The Autobiography of Frederick Douglass and The North Star), and Harriet Beecher Stowe (Uncle Tom’s Cabin)” (11.3b); and recognize that “Long-standing disputes over States rights and slavery and the secession of Southern states from the Union, sparked by the election of Abraham Lincoln, led to the Civil War. After the issuance of the Emancipation Proclamation, freeing the slaves became a major Union goal” (11.3c). 

 

Standard 11.10b focuses on how “Individuals, diverse groups, and organizations have sought to bring about change in American society through a variety of methods.” It includes “Gay Rights and the LGBT movement (e.g., Stonewall Inn riots [1969])” among the efforts to achieve equal legal rights. In addition, in the 12th grade civics curriculum (12.G2d), students learn that “the definition of civil rights has broadened over the course of United States history, and the number of people and groups legally ensured of these rights has also expanded. However, the degree to which rights extend equally and fairly to all (e.g., race, class, gender, sexual orientation) is a continued source of civic contention.”

 

The New York Times should have done a better job fact-checking Appiah’s essay. Philosophy may be allegorical. History definitely isn’t.

Racism, Reparations, and the Growing Political Divide

Steve Hochstadt is a professor of history emeritus at Illinois College, who blogs for HNN and LAProgressive, and writes about Jewish refugees in Shanghai.

 

 

When I was growing up, white racism was a powerful and ubiquitous force in American life. It was impossible to ignore and nearly impossible to remain untouched, even if one consciously believed that skin color had nothing to do with human worth.

 

The outburst of civil protest about racism in the 1960s was a sign of American optimism: our democracy had severe flaws with deep historical roots, but they could be overcome through peaceful political action. The intransigence of openly racist politicians from the South and covertly racist politicians from everywhere else would yield to massive popular dissent. The Civil Rights Act of 1964, the Voting Rights Act of 1965, and the Fair Housing Act of 1968, among many other legislative victories for racial equality, introduced a new era in American history.

 

It was comforting to believe that over time America would no longer be divided unequally into black and white, that the effects of racism would gradually disappear as legal racism itself became a thing of the past. Viewed from today, that idea appears hopelessly optimistic, the dream of political Pollyannas, who ignored the long history and crude reality of American racism. Every survey and social scientific study demonstrates the continuing power of racism to distort and impoverish the lives of black Americans. What is relatively new is the congruence of the partisan and racial splits.

 

Much has changed for the better, as evidenced, for example, by the ability of black politicians to win races in every state. The ceiling on black political success has been lifted a bit, but not broken. Until 2013, there was never more than one black Senator in office. Although about half of US Senators had first been elected to the House of Representatives, that path has worked only for white politicians: only one black politician, Republican Tim Scott, has moved from the House to the Senate. There have been only two elected black governors in our history.

 

Donald Trump has certainly exacerbated racism in America, but the racist attitudes that he plays on never disappeared. While the openly racist public displays of ideological white supremacists have become more common, the much larger undercurrent of racist beliefs has finally found a comfortable home in the Republican Party base.

 

Only 15% of Republicans say that our country has not gone far enough in giving blacks equal rights. The number of Republicans who say that American politics has already gone too far in giving blacks equal rights is twice as large. Three-quarters of Republicans say that a major racial problem is that people see discrimination where it doesn’t exist and that too much attention is paid to racial issues. One out of 5 Republicans say being white hurts people’s ability to get ahead, and one out of 3 say that being black helps. Nearly half of Republicans say that “lack of motivation to work hard” is a major reason why blacks have a hard time getting ahead, and more than half blame “family instability” and “lack of good role models”. One third of Republicans say that racial and ethnic diversity is not good for our country. Among white Republicans who live in the least diverse American communities, 80% wish for their communities to stay the same and 6% want even less diversity. Half of Republicans say that it would bother them to hear a language other than English in a public place.

 

While racist Americans have congregated in the Republican Party, white Democrats appear to be moving away from racial resentment. Between 2014 and 2017, the proportion of white Democrats who said that the “country needs to continue making changes to give blacks equal rights” grew from 57% to 80%. A different study offers even stronger numbers. In both 2012 and 2016, about half of Republicans displayed “racially resentful” attitudes toward blacks, and only 3% expressed a positive view. Among Democrats, the proportions shifted: the proportion who were positive about blacks doubled and those who felt most resentful fell by nearly half.

 

Apparently, college-educated whites had long known that the Democratic Party was more likely to be sympathetic to blacks on racial issues, and thus sorted themselves politically according to their own racial attitudes. But less educated whites came to recognize this partisan difference more recently, especially during the Obama presidency, and those with racial resentments who had been Democrats moved to the Republican Party.

 

While white Americans who feel negatively about blacks and believe that too much has been done to redress centuries of discrimination are collecting in the Republican Party, Democrats, both politicians and voters, are openly discussing reparations. Eighty percent of Democrats believe that the legacy of slavery still affects the position of black Americans. The House Judiciary Subcommittee on the Constitution, Civil Rights and Civil Liberties held an unprecedented hearing on slavery reparations last week. Rep. Sheila Jackson Lee of Texas has proposed a bill “To address the fundamental injustice, cruelty, brutality, and inhumanity of slavery ... between 1619 and 1865 and to establish a commission to study and consider a national apology and proposal for reparations”. Such a bill had been stalled in the House for 30 years. Now Speaker Nancy Pelosi says she supports it. Four Democratic Senators who are candidates, Sanders, Booker, Warren and Harris, all co-sponsored a Senate bill to study reparations. Julian Castro, Kirsten Gillibrand and Beto O’Rourke also support such a study. Amy Klobuchar and Joe Biden have been circumspect, but not dismissive. I couldn’t find any Republican political figure who supports even a study of the issue.

 

Reparations are already being considered: Georgetown University students voted 2 to 1 to impose a $27.20 fee on themselves to compensate the descendants of the 272 slaves sold in the 1830s by Georgetown’s founders.

 

One of the big arguments that Republicans, like Mitch McConnell, have used against even thinking about reparations is that slavery is 150 years in the past. But government discrimination against African Americans deliberately deprived them of financial resources in my lifetime.

 

Returning black veterans could not take advantage of the GI Bill because of racist tactics in both the North and the South. Blacks were excluded from home loans in the newly expanding suburbs through redlining and restrictive covenants.

 

Reparations would certainly be difficult to decide upon and to administer. The clean partisan split over whether to consider the issue demonstrates how Republicans and Democrats are moving away from each other. We’ll see whether that strengthens or weakens continuing American racism.

How the Debate Over the Use of the Term ‘Concentration Camp’ was Amicably Resolved in 1998

When on June 18th, the Jewish Community Relations Council of New York (known locally as the JCRC) addressed an open letter of complaint to Rep. Alexandria Ocasio-Cortez for calling migrant detention centers “concentration camps,” the JCRC was reflecting how emotionally charged this term is for Jews.  In subsequent statements, Ocasio-Cortez made it clear that she was not drawing an analogy to Nazi-era death camps.  The JCRC’s letter compounded what might be considered community-relations malpractice by patronizingly offering “to arrange a visit to a concentration camp, a local Holocaust museum, hear the stories of local survivors, or participate in other educational opportunities in the hopes of better understanding the horrors of the Holocaust.”

 

But lost in the controversy was a resolution of a parallel dispute in 1998 that redounded to the credit of all concerned.  At that time, Japanese-American organizers were preparing a museum exhibit at Ellis Island entitled “America’s Concentration Camps: Remembering the Japanese American Experience,” on the forced relocation and imprisonment of Japanese Americans by the United States government during World War II.  Instead of criticizing the exhibit’s curators, the American Jewish Committee (AJC) conferred with them and amicably arrived at an arrangement that satisfied both understandable Jewish sensibilities regarding the memory of the Holocaust and the right of other Americans to commemorate the injustice they endured during those very same years. This was explained in their joint press release:

 

An exhibit—entitled America’s Concentration Camps: Remembering the Japanese American Experience—chronicling the shameful treatment of Japanese Americans during World War II, will soon open at the Ellis Island Immigration Museum. Thousands have already seen the exhibit, which was created by and, in 1994, shown at the Japanese American National Museum in Los Angeles. Today, our sights are trained on the importance of such an exhibit in teaching about episodes of intolerance. We strongly urge all who have the opportunity to see the exhibit to do so and to learn its critical lessons.

 

A recent meeting between Japanese American and American Jewish leaders in the American Jewish Committee’s New York City offices led to an agreement that the exhibit’s written materials and publicity include the following explanatory text:

 

“A concentration camp is a place where people are imprisoned not because of any crimes they have committed, but simply because of who they are. Although many groups have been singled out for such persecution throughout history, the term ‘concentration camp’ was first used at the turn of the century in the Spanish-American and Boer Wars.

 

“During World War II, America’s concentration camps were clearly distinguishable from Nazi Germany’s. Nazi camps were places of torture, barbarous medical experiments, and summary executions; some were extermination centers with gas chambers. Six million Jews were slaughtered in the Holocaust. Many others, including Gypsies, Poles, homosexuals, and political dissidents were also victims of the Nazi concentration camps.

 

“In recent years, concentration camps have existed in the former Soviet Union, Cambodia, and Bosnia.

 

“Despite differences, all had one thing in common: the people in power removed a minority group from the general population and the rest of society let it happen.”

 

The meeting and the agreement about the text also reinforced the close and constructive relationship that has long existed between the Japanese American and American Jewish communities. Jewish community groups, especially the American Jewish Committee, were among the first and most vocal outside the Japanese American community calling for the U.S. government to offer an apology and monetary redress for its treatment of Japanese Americans during World War II.

 

In 1988, Congress and President Reagan passed legislation that formally granted the redress and apology to Japanese Americans who were incarcerated. Both communities have been among America’s leading voices advocating for strong civil rights, anti-discrimination and hate crimes laws. The meeting’s participants were encouraged to continue the work of preserving the memories of our communities’ experiences and helping others learn from them.

 

The exhibit represents a precious opportunity for those who must tell its story—Japanese Americans and other victims of tragic intolerance—and for those who must hear it. The story is one of betrayal; betrayal of Japanese Americans, who were deprived of protections that all Americans deserve; and betrayal of the American soul, which is defined by its unique commitment to human rights. The best insurance that we will never again commit such acts of betrayal is to use history of this sort as an object lesson for Americans today and in the future.

 

We know that today’s iteration of this dispute over terminology and history is political in ways that the 1998 episode was not, as exemplified by the partisan brawling over the meaning and motives behind Ocasio-Cortez’s words.  Still, it’s good to know that communities and individuals can come to an accord over such a sensitive matter when they exercise prudent judgment.

10 Things To Check Out At the Library of Congress’s New Exhibit on Women’s Suffrage

1. “Declaration of Sentiments” print

Although the original version is lost, this printed version of Elizabeth Cady Stanton’s “Declaration of Sentiments” has somehow survived 150+ years and now sits in the first display of the exhibit to show where the suffrage movement began to gain steam in the US. In it, Stanton demands moral, political, and economic equality.

 

2. "More to the Movement” placards

These placards, interspersed throughout the exhibit, cast light on minority women suffragists – figures who have usually gone unacknowledged in accounts of US and women’s history. While the Seneca Falls Convention is conventionally seen as the beginning of the suffrage movement, the “More to the Movement” placards note that women’s rights were first considered as an issue in 1837, at the Anti-Slavery Convention of American Women in New York City. 

 

3. Women in Politics video

The end of the exhibit features a video compilation of famous speeches made by women politicians and figures, including Shirley Chisholm, Sandra Day O’Connor, Ileana Ros-Lehtinen, and Hillary Clinton when she became the first woman presidential nominee from a major party. 

 

4. "Music of the Suffrage Movement” display

Tucked into the corner of one of the displays is this screen, which allows visitors to see and listen to the music that inspired suffragists across generations to resist. Many of the songs were anthems composed and written to be sung in large public forums.

 

5. Kentucky House of Representatives Roll Call

Although Kentucky was not the 36th and final state required to ratify the 19th Amendment, this sheet of paper has been preserved as a reminder of the rapid progress of the movement after World War I. The 19th Amendment was officially ratified just months after this roll call was taken.

 

6. Suffragist cap and cape

Displayed in a case along with other 20th century artifacts, this white cap and purple cape were worn by members of the National Woman’s Party from 1913 to 1917.

 

7. Abigail Adams letter

Remarkably, this letter from Abigail Adams to her sister Elizabeth Shaw Peabody has survived 220 years. Part of it reads, “I will never consent to have our sex considered in an inferiour point of light. Let each planet shine in their own orbit.”

 

8. “Surviving Prison and Protecting Civil Liberties” display

This display powerfully documents the commitment to the movement’s ideals that many women displayed during World War I. Suffragists protested the war as hypocritical, resulting in prison sentences and, in some cases, torture. Posters and newspaper headlines depicting women with feeding tubes shoved down their throats convey the strength and resolve of these women in the face of horrendous treatment.

 

9. Story of Ida B. Wells-Barnett

Ida B. Wells-Barnett’s story reveals how many women, particularly women of color, faced challenges and obstacles outside of the fight for women’s suffrage. Many African American suffragists were segregated during the March 1913 national suffrage parade, but Wells-Barnett refused. She marched with her state group from Illinois, despite some organizers’ endorsement of the parade’s segregation policy.

 

10. World War I protest photos and artifacts

An entire section of the exhibit is dedicated to women’s protest of World War I as hypocritical, noting that while the US fought for “democracy,” it denied over half of its own citizens the right to vote. It features original images and artifacts from the resistance movement, including a piece of a sign pictured in the photo above.

The Stonewall Exhibit in New York Needs Sturdier Walls

A paper fan, one of the objects at the exhibit

 

On the morning I went to see the museum exhibit Stonewall 50 at the New-York Historical Society about discrimination against gays in America, Newark Liberty Airport held its first-ever drag queen show in Terminal C, with a huge crowd of cheering onlookers. New Yorkers were celebrating World Pride month. Mayor Pete Buttigieg, of South Bend, Indiana, an openly gay man, was not only running for President of the United States, but comfortably fourth in the polls. The LGBTQ organization at my University is growing and thriving. Could any of these things have happened in the summer of 1969, the summer of the fabled Stonewall riots, that gave birth to the gay movement in America?

 

Of course not.

 

Stonewall was a momentous event and its 50th anniversary is being celebrated in many ways across the country, including this exhibit at the New-York Historical Society, W. 77th Street and Central Park West, New York.

 

The exhibit on Stonewall, though, needs more walls of its own. It also needs a sharper focus and a richer historical backdrop. As it stands, it is a pretty weak Stonewall itself.

 

You read a book on the history of the New York Yankees and it is about …the Yankees. Here, there is an exhibit about Stonewall but there is little Stonewall. There is a small mention of the club and riots, sort of an afterthought, and that’s it.

 

The Stonewall riots of 1969 not only shook the city to its core, but all of the country, too. It was not just an event in gay history, but American history. How can there be nothing about Stonewall in an exhibit about Stonewall? Am I missing something?

 

The infamous Stonewall riots began early in the morning on June 28, 1969, when a team of New York City police raided the Stonewall Inn, a small gay bar on Christopher Street in New York’s Greenwich Village. The police burst into the bar and arrested numerous customers. The patrons, gays, did not go quietly into the night. They protested, loudly. That brought in more police. That brought in more protestors, more than 600, who shut down numerous streets, and the confrontation quickly became a riot. Fires were set, police cars nearly overturned, buildings damaged and the neighborhood trashed. The riots continued for six days. Police arrested 21 people. Much of this was captured by TV news crews and the riots, seen nationally, became infamous. They gave the gay community new strength and changed gay history in this country.

 

All of this drama is generally overlooked in the exhibit.

 

The exhibit has numerous problems. First, it is really small, almost as small as the Stonewall Inn itself. The whole exhibit takes up just two walls and a tiny, tiny room that is really, really badly lit. That’s it. You could walk down the hall and, if you don’t look carefully, you will miss the entire exhibit and wind up in Birds of America.

 

This is not really an exhibit about Stonewall, or even the volcanic effects of the Stonewall riots; it’s an exhibit about gay life in New York in the 1950s and 1960s. That’s fine, but the museum should not connect it to Stonewall.

 

There are some very witty posters in the exhibit, such as “No Lips Below the Hips” and very sad ones, such as “Lesbians Don’t Get AIDS, They Just Die from It.”

 

There are all sorts of protest march posters, gay magazines and even 1950s gay paperback novels, about men and women. There is a huge wall of items from the Lesbian Herstory Archives in New York. There are some Lesbian “Lavender Menace” T-shirts sprinkled throughout the exhibit.

 

“The Stonewall uprising…was a watershed moment in the gay rights movement and we’re proud to honor its legacy…during this 50th anniversary year,” said Dr. Louise Mirrer, President and CEO of New York Historical. “The history of New York’s LGBTQ community is integral to a more general understanding of the long struggle for civil rights on the part of LGBTQ Americans. We hope that with our Stonewall exhibition and displays, our visitors will come to appreciate the critical role played by Stonewall in helping our nation towards a more perfect union.”

 

The exhibit is in three sections. Letting Loose and Fighting Back: LGBTQ Nightlife Before and After Stonewall is about entertainment, particularly in clubs. It is the best of the three, by far. In it, visitors get a really good understanding of how gay clubs had to operate in the 1950s and 1960s as secret entertainment centers, and meeting places. There were dozens of them, usually with non-descript fronts. Inside there was really wild, colorful entertainment by singers and dancers, splashy floor shows and lengthy and loud drag queen extravaganzas. The exhibit highlights these in a splendid display that includes color videos of shows in clubs and marvelous miked conversations by apparently unknowing customers at the club. 

 

The exhibit includes special guide books for newcomers to the city so they could find gay entertainment, club posters, programs, ticket stubs and even the complete architectural plans for The Saint club. Just outside the club room is an exhibit of the gay queen Rollerina, a 1970s Wall Street worker who skated to work on roller skates each day and into the gay clubs each night. There is a tribute to the cross dressing Flawless, an entertainer and club hostess (cross dressing was illegal in the United States for years).  

 

The gay club The Blue Parrot was the centerpiece of the “bird clubs” for gays throughout town.  The very mainstream Hotel Astor had a gay club, the Matchbox.  The 82 Club was where celebrities like Judy Garland and later Liz Taylor hung out.

 

The exhibit explains, too, how city leaders neatly got around small items like the First Amendment in policing the clubs. They invented a “disorderly persons” crime that covered all gay activity and permitted tough law enforcement of the venues. Hundreds of people were arrested for being “disorderly.”

 

The most interesting part of the exhibit, presented in this section, is the role the Mafia played in gay clubs. Homosexuals had nowhere else to go, so the Mob bought or controlled many of the gay clubs, including the Stonewall Inn. The Mob made a fortune and used some of them as fronts for Mafia activities. The Mob paid off the police not to raid them. There was a slip-up over Stonewall in 1969, though, and that miscue, it was said, led to the raid and the subsequent riots.

 

The entire club exhibit has a very authentic you-are-there feel to it. 

 

By the Force of Our Presence offers highlights from the Lesbian Herstory Archives. It includes memorabilia from the life of a lesbian African American woman born in North Carolina who moved to Harlem plus numerous posters.

 

Say It Loud, Out and Proud: Fifty Years of Pride is a wall that contains an enormous timeline of years of gay pride parades and protest and photos of men and women, in wild costumes, who marched in the parades.

 

The exhibit is curated by Rebecca Klassen, Jeanne Gardner Gutierrez and Rachel Corbman.

 

The exhibit offers a nice look at gay life in New York and America, but, still, it needs a little bit of Stonewall, because those folks at Stonewall caused a whole lot of trouble, trouble that changed history.

Tangled Lives, Tangled Culture, Tangled History

 

It is 1979 and sodomy is illegal in some states and cross-dressing in others. It has been ten years since the Stonewall riots in New York. The gay community has emerged from the closets of the nation but is still trying to find itself. Marvin, a headstrong, good looking young man, is married to his beloved Trina and the father of an adorable ten-year-old boy, Jason. Life is good, life is strong, but Marvin has a problem. He is in love with another…a guy.

 

He leaves his wife and son and moves in with his boyfriend. His wife, stunned, falls for and lives with her psychiatrist, Mendel. The ten-year-old boy is thunderstruck by all this activity and concentrates on playing Little League baseball a bit better to keep calm.

 

Their story is told in a new revival of the 1980s musical Falsettos, which just opened at the Princeton Summer Theater, in Princeton, N.J.

 

This version of Falsettos is rock solid, a super musical with fine acting, sharp direction, edgy songs and a solid, and quite emotional, story to tell.

 

I wanted to see this play about 1979 because all the problems the gay community faced forty years ago remain today, but the solutions are better, the public perception of those problems is better, and the people who have the problems today do not suffer as their forebears did back in 1979. Life for gays in America is better, but not yet as good as it is for heterosexuals. That theme is underscored in Falsettos, which has music and lyrics by William Finn and a book by Finn and James Lapine. Today, though, Falsettos is more than a good play; it is an historic look backwards at the suddenly open lives of gay men and women and the troubles, legal, cultural and medical, they were besieged by in the early 1980s.

Falsettos is a triumphant, but not an altogether happy, tale. Marvin does not find true love with his new boyfriend in 1979. They split up and he is left stranded because his wife has moved in with the psychiatrist. Marvin has lost her and has nearly lost his son. 

 

Trina, the wife, who faces more problems than the Biblical Job, wants the kind of straight, traditional loving relationship with the psychiatrist that she so badly desired with Marvin, but did not get. She is looking for a safe harbor in a mixed-up world and finds it – somewhat. 

 

The songs in the play are good and help to tell the story. They also add nuances to the play. As an example, the opening number is a very hip “Four Jews in a Room Bitching,” a very witty tune, and it sets up what you think is going to be a funny play, which it is, for a while. Later songs, such as “I’m Breaking Down” and “Something Bad Is Happening,” give the play its serious side.

 

Falsettos is a fine show, but it has its problems. It is a good twenty minutes too long. The performance I saw ran nearly an hour and forty-five minutes. There are several songs in the first act that are redundant and could be cut, along with a scene or two. In fact, there are 37 songs in the play, way too many. This is a play, not an opera. The first act also drags a bit here and there, and the storyline gets lost. The second act is much better. Not only is the story tighter, but it is far more dramatic, and it introduces two gay women to the storyline. The second act also expands the plot from marital woes to social and cultural troubles for this torn family and brings in the medical woes that gays faced in the 1980s.

 

Director Daniel Krane has overcome these problems, though, by offering up a fast-moving story that is heavy on emotion. He has a fine cast that works well as an ensemble. Its members also shine in their work as individuals. They are Michael Rosas as Marvin, Justin Ramos as Mendel, Dylan Blau Edelstein as boyfriend Whizzer, Hannah Chomiczewsi as Jason, Chamari White-Mink as the doctor and Michell Navis as Cordelia. Their neat dance numbers are choreographed by Jhor Van Der Horst.

 

This thirty-year-old musical about gay life is decidedly not worn out or dated. It is as vibrant today as when it debuted in New York.

 

What’s interesting about it is that today the play offers a nice historical window for people to look back at the gay rights crusade, and all of its triumphs and tragedies, by viewing a play that debuted right in the middle of it.

 

Much is different about gay life today than in 1979. Marriages between gays are legal now, as an example. Yet many of the troubles homosexuals faced back in the ’70s still exist. Finn and Lapine could rewrite this play set in 2019 and the plot and characters would pretty much remain the same. It would still be a powerful punch-in-the-stomach tale, though.

 

PRODUCTION: The play is produced by the Princeton Summer Theater. Sets: Jeffrey Van Velson, Costumes: Jules Peiperi, Lighting: Megan Berry, Sound: Tashi Quinones. The play is directed by Daniel Krane. It runs through June 30.

From "Ghettos" to "Concentration Camps," the Battle over Words

Auschwitz in German-occupied Poland, 1944

 

“What’s in a name?” Juliet famously asks in Act II of Shakespeare’s Romeo and Juliet. For Juliet, the answer is perfectly obvious: nothing. It should not matter that her sweetheart Romeo is a Montague and she a Capulet, feuding families in the strife-riven city of medieval Verona. “Tis but thy name that is my enemy,” Juliet assures Romeo. He could cease to be a Montague, he could cease even to be Romeo, and he would remain her beloved. “O! Be some other name: What’s in a name? That which we call a rose by any other name would smell as sweet.”

 

Juliet’s answer is difficult to square not only with the ultimate fate of the star-crossed lovers in Shakespeare’s play but also with the controversies that frequently attend the application to someone or some thing of a highly charged label. We tend to think we have minimized the scope of disagreement when we boil a debate down to semantics. The reality is that so many of our cultural arguments manifest as arguments over words, over what they mean, how they are used, and who gets to define them.

 

The past week has offered yet another reminder of this truth. On Tuesday, Representative Alexandria Ocasio-Cortez, the fiery left-wing freshman Democrat from New York, blasted the Trump administration on Twitter for creating “concentration camps on the southern border of the United States for immigrants, where they are being brutalized with dehumanizing conditions and dying.” Her description of immigration detention centers as “concentration camps” precipitated a riposte from Representative Liz Cheney of Wyoming, the chairwoman of the House Republican Conference, who, likewise in a tweet, scolded Ocasio-Cortez for betraying the memory of the six million Jews murdered in the Holocaust with her choice of words. The days since have seen further sharp exchanges on this issue, as politicians from opposing sides of the aisle, op-ed columnists, Jewish organizations, cable news journalists, and, of course, the Twitterati on the right and the left have weighed in on whether the “concentration camp” designation is appropriate or misplaced.

 

This is far from the first time the usage of a term with Holocaust associations has provoked a backlash. Some of the same arguments that are now being made to rebut the labeling of detention facilities as “concentration camps” were trotted out in the past to resist the naming of segregated African American neighborhoods as “ghettos.” In 1964, the American Jewish intellectual Marie Syrkin wrote, “In the immediate as well as historic experience of the Jews the ghetto is not a metaphor; it is a concrete entity with walls, stormtroopers, and no exit save the gas chamber.” Syrkin penned this jab at the transference of the word “ghetto” from Jews to African Americans at a time when the black referent for the term was only beginning to become mainstream. Yet, nearly three decades later, the American Jewish author Melvin Jules Bukiet expressed the same pique. In April 1993, on the eve of the gala opening of the U.S. Holocaust Memorial Museum in Washington, DC, Bukiet published an editorial in the Washington Post that was bitingly critical of the museum, which he judged a well-intentioned but insidious attempt to absorb the Holocaust melting-pot style into a vocabulary of American politics and culture and to sap it of its Jewish specificity. “Something similar,” he wrote, “has also begun to happen with other elements of the Jewish tradition. Take the word ‘ghetto,’ which is commonly used as a synonym for slum. A ghetto is an enclosed, preferably walled-in, domain within a city, and despite the invisible walls around Harlem, there is no barbed wire across 125th Street and there are no guard towers.”

 

Both of these attempts to limit the word “ghetto” to its Holocaust meaning skipped over the fact that, prior to the Holocaust, the term had a long history of being used to refer to both coercive and non-coercive Jewish quarters. Its early uses centered on two cities: Venice, where it referred to the segregation of the Jews in 1516, and Rome, where the ghetto survived until the fall of the Papal States in 1870, long after it had ceased to exist elsewhere. The ghettos of the early modern Italian peninsula were compulsory, segregated, and enclosed Jewish spaces, and while they were kept under round-the-clock surveillance, there was no “barbed wire,” which had yet to be invented; nor were there “guard towers” (which may have been a feature of some Nazi concentration camps, but were likewise absent in Nazi ghettos). Even after the original mandatory ghettos disappeared with the legal emancipation of Western and Central European Jews, the term “ghetto” was resurrected near the turn of the twentieth century to refer to densely packed but voluntary Jewish immigrant neighborhoods such as London’s East End, New York’s Lower East Side, and Chicago’s Near West Side. Moreover, over the course of the nineteenth century, “ghetto” came increasingly to be used especially by Jews as a metaphor for an old-world, traditional Jewish society and mentality believed to be disappearing.

 

One argument that has been made on the left to defend Ocasio-Cortez—which she herself has now adopted—is that the term “concentration camp” also has a history that predates (and coincides with) the Holocaust. It first became prominent during the Second Boer War (1899-1902), when it was applied to the internment camps established by the British in South Africa. Later, in the 1940s, it was widely used in discussing the American internment of Japanese-Americans during World War Two. The difficulty with this argument is that it is clear that Ocasio-Cortez intended to mobilize memory of the Holocaust by appropriating a term that, since 1945 at least, has become indelibly associated with the destruction of European Jewry. The day before her tweet that went viral, she posted an Instagram video where, in addition to using the word “concentration camps,” she invoked the slogan “Never Again,” which is commonly used to refer to the importance of remembering the Holocaust. By 1945, the term “ghetto” was sufficiently venerable and versatile in its signifying capacity that efforts to reserve it for the Holocaust were unsuccessful. The label “concentration camp” did not have a similarly long history of changing and contested meanings. While academics may continue to employ the term to designate the general phenomenon of camps in which people not guilty of a crime are interned by the state, in the popular lexicon today, “concentration camps” cannot be used without effectively conjuring the Holocaust.

 

There are others on the left who, confronted with the analogy with the Holocaust, have said, essentially, “bring it on.” They have argued that analogies by definition do not presume identity, only resemblance, and that the evocation of the Holocaust in reference to the Trump administration’s cruel treatment of immigrants is not out of bounds. Both on substance and as a matter of strategy, this seems misguided. The presence of the attributes of incarceration, atrocious conditions, and even racial animus in the case of the detention centers is not enough to warrant comparison with Nazi concentration camps that were factories for slave labor at the very least, and eventually became part of a system of mass murder. The problem with the analogy is not merely that it is strained at best and inaccurate at worst. Marshaling the Holocaust to criticize (or justify) contemporary policies diminishes the scale of the genocide against the Jewish people and allegorizes Jewish suffering. Certainly, we can and should learn from the Holocaust to fight against racism and dehumanization, mindful of what they can lead to, but we should avoid comparisons that collapse the distance between the Holocaust and current events and result in the trivialization of the former. Simply on the level of tactics, Ocasio-Cortez’s choice of words has backfired by deflecting attention from what should be the focus—the Trump administration’s appalling treatment of immigrants—and creating an unnecessary flap over the Holocaust and historical semantics.

 

None of this excuses the response of Jewish organizations that acceded to this deflection. It is morally obtuse for groups like the Jewish Community Relations Council, however officially nonpartisan, to condemn the language used to describe the Trump administration’s sadistic policy for trespassing on Holocaust memory while barely commenting on the policy itself. When the internal watchdog for the U.S. Department of Homeland Security documents cases of migrants and asylum seekers being kept in “standing-room-only” cells for days or weeks in prison-like facilities meant to hold a fraction of their current population, it should merit more than a passing expression of concern, with the real ire directed at one congresswoman’s pressing of the Holocaust into service to indict the inhumane actions that have led to these conditions.

 

The battle over words, though it can be a distraction, is not likely to abate anytime soon. So long as we continue to use labels that have a history and are, as Erich Auerbach once described biblical narrative, “fraught with background”—which is to say, forever—there will be controversy. What’s in a name? Quite a lot, as it happens.

We Need A Long-Arc Historical Perspective to Understand White Supremacist Extremist Violence.

 

“Beware the Ides of March.” Those words of warning from a Roman soothsayer to the great Julius Caesar have—thanks to William Shakespeare—resonated through the ages. They still strike a chord today. Had Caesar not ignored them, his brutal betrayal and murder might not have happened and probably would not have been chronicled in one of those Shakespearean Tragedies. 

 

But at least Julius Caesar had a fortuneteller’s warning about impending disaster. The worshippers in the Christchurch, New Zealand mosques had none when a gunman attacked their place of worship this past ‘Ides of March’--Friday March 15, 2019. What greater betrayal than to be at afternoon prayers, communing with your Creator, and then be indiscriminately slaughtered by some crusading White Supremacist wielding rapid-fire automatic killing-machines? Fifty dead. Fifty wounded, with the unfathomable added carnage of wrecked and ruined lives. 

 

That was the original opening for the first draft of this piece back in late April. As I edited the article, I could not keep pace with unfolding, explosively “related” events: 46 tourists dead at three luxury hotels and over 220 Christians slain while celebrating Easter Sunday at three churches in Sri Lanka, with at least 500 people injured; one student dead and eight injured at a STEM charter school in Highlands Ranch, Colorado. Had they anything in common? Undoubtedly.

 

Once again, with these new atrocities, we have a new generation of people—primarily seemingly alienated, young males with personal, political and/or religious grievances—who feel the need to express those beliefs and purge those “grievances” by killing other humans who do not look like them, think like them, or worship as they do. The Sri Lankan attackers were young radicalized Islamists: eight males and two of their wives. Some have speculated that those killings were retaliation for the slaughter of the Islamic worshippers in New Zealand. The New Zealand murderer (an alienated young white male extremist) and the Colorado school shooters are seemingly part of an ongoing continuum that includes Timothy McVeigh. On April 19, 1995, in Oklahoma City, Oklahoma, the workers and other individuals using the Murrah Federal Building also had no warning when the bombing planned and executed by McVeigh shattered their lives with a massive death-dealing, life-wrecking explosion. Death toll: 168. Over 680 injured. This is a global, repetitive pattern. And as the New York Times recently highlighted, these individuals are referencing and drawing inspiration from previous White Extremist mass atrocities.

 

What is often missing in the discussion and coverage of white extremism and mass violence is an historical perspective. This spike and acceleration of right wing extremist assaults against various “others” is something we have seen before on a global scale. As a Broadcast Journalist in the 1980s, I witnessed first-hand a similar spike in violence and, seemingly, the media’s reluctance to accurately name it and help curtail it. I also witnessed a similar cultural and political environment that contributed to the mass violence. Much like today, in the early 1980s, people of color and the gains of the civil rights movement were increasingly under attack, many Americans wanted to “turn back the clock to an earlier era,” and the President often encouraged these aims. 

 

On December 9th, 1980, the Chairman of the House Subcommittee on Crime, Representative John Conyers, began hearings on Racial Violence Against Minorities. “There is abundant evidence of a marked increase in the incidence of criminal violence directed against minority groups,” he said in his opening remarks. “Hate groups appear to have reached the conclusion that their activities are no longer so disreputable, and violence-prone organizations have been conducting their activities more openly and flagrantly…. This situation confirms the view that government authorities have done less than an adequate job at investigating the causes of racial violence, monitoring its extent and punishing the offenders.”

 

What had transpired to move the Honorable John Conyers to hold such hearings? A rather scary litany of events starting in the late 1970s: in Chino, California, a white hunter, unable to find any game, turned his rifle on African Americans and shot them; in Chattanooga, Tennessee, the Klan, after burning a cross, celebrated by shooting five black women at random; in Greensboro, North Carolina, the Klan, together with neo-Nazis, attacked an anti-Klan demonstration, killing five people; in Jackson, Mississippi, a white policeman shot and killed a pregnant black woman; in Atlanta, Georgia, 15 black children slain, at least 3 others missing; and in Buffalo, New York, six black people slain by sniper-fire or stabbings, with at least two of the victims’ hearts ripped from their bodies.

 

As you can imagine, Representative Conyers was not the only person concerned about these assaults and murders. “The fact that the perpetrators of these acts of violence have yet to spend four months, if that, in any prison, leads one to believe that it is indeed OPEN SEASON ON BLACKS in this country,” I wrote in 1981. Indeed, my concern had reached such a level that in February of 1981 I sent an almost 4-page memo to key individuals in the ABC network television news division, including President Roone Arledge. In the memo, entitled “OPEN SEASON ON BLACKS??? (The Rise of the KKK, Neo Nazism, and the New Right),” I argued that what was needed, by our news division, was an in-depth, big-picture assessment of these repeated violent incidents targeting African Americans. My sense of urgency was a personal reflection of the same concern that caused Congressman Conyers to start those hearings: blacks were indeed being assaulted at an alarming, seemingly epidemic rate. And it wasn’t confined to the U.S.: “Klan/Right Wing activity has been notably evident in Canada and Great Britain, with a marked increase in Neo-Nazism and other Right Wing extremism in Germany, France, Spain and Italy,” I wrote.

 

Not everyone agreed with the assessment Congressman Conyers and I had independently made. Northwestern University School of Law’s Journal of Criminal Law and Criminology published an article by James B. Jacobs and Jessica S. Henry in its Winter 1996 issue entitled “The Social Construction of A Hate Crime Epidemic,” which statistically “debunked” the actuality of any such “epidemic.” The authors concluded it was not an epidemic in the “strict” definition of what constitutes an epidemic. But try telling that to the people who were under siege, and who at that particular moment in time knew they were under siege. I still stand by the conclusion of my 1981 memo: “Unquestionably there is a major story here. Utilizing the ABC News Division/Programs to the fullest capability could and should result in a landmark series of reports that would do justice to ourselves as a News Organization, and to the nation as a whole.”

 

I got no immediate response. A week or two later, in my role at the time as an Assignment Editor and sometime Field Producer, I found myself on loan to ABC’s Atlanta Bureau. I was assigned to cover one of the very stories that had compelled me to write my “Open Season on Blacks” memo: the missing and murdered children of Atlanta. From 1979 to 1981, at least 28 children and adults were murdered in the Atlanta area. I will never forget covering those weekend “Community Searches” trekking through local neighborhood parks and wooded areas; will never forget that Saturday morning, as we waded through low shrub undergrowth, bright sunlight jaggedly streaking through tall trees, the disconcerting, uncomfortable unease as our line of searchers actually stumbled upon the body of yet another young black murder-victim. 

 

The Atlanta Child Murders, however, were never connected to any wider coverage of the issue of a nationwide trend of assaults against blacks, despite the rumors of possible Klan involvement. This was in part because Wayne Williams, the man suspected of killing at least 28 of the 30 people killed in the Atlanta Child Murders, had recently been tried and convicted of killing two adults. That significantly lowered the profile of that two-year series of murders, even though many people harbored serious doubts about Williams’ guilt in the murders of those children. In March 2019, the Atlanta police reopened the cases of the murdered children.

 

As I continued to investigate this story as a Producer for ABC’s 20/20 in 1982, my team grilled the Georgia Bureau of Investigation (GBI) about our earlier leads on possible Klan involvement in the killings of those children. The GBI categorically denied any such link. Later, Freedom of Information Act (FOIA) probes of the GBI unearthed the fact that the GBI had indeed launched serious investigations on probable Klan links to some of those murders. To my knowledge, Klan involvement was never proven to be true, but the point remains: as violence against African Americans remained high, it was often seen as a series of isolated incidents instead of as part of a larger pattern. The bulk of my research on the subject went to the NIGHTLINE show, which was quite interested in exploring the issue, but I have no recollection of a NIGHTLINE report or series of reports ever airing. If there were any other media outlets, print or broadcast, tackling the issue on such a wide-ranging level, they went right by me. Let me not be facetious. It never happened.

 

I did, while on loan to the Atlanta Bureau in 1981, receive a request to reach out to the Southern Poverty Law Center, to “check into” the activities of the Klan and the New Right. That evidently was the unofficial upshot to my detailed suggestion for an in-depth, wide-ranging look at the growing White Extremist phenomenon, although there was no acknowledgement of my original memo summarizing my already extensive research on the issue, and definitely no mention of a major News Division larger-picture overview. Unfortunately, that was the extent of it: much too little, and no follow-through.

 

The early 1980s were also similar to today in another way: a period of progressive reform was facing backlash and rollback. The hard-fought victories of the Civil Rights Movement were being challenged by ordinary citizens and political leaders alike.

 

I witnessed this tension first-hand in March 1981 when I covered the anniversary of Bloody Sunday at the Edmund Pettus Bridge in Selma. March 1981 was only 16 short years removed from the actual March 7, 1965 Bloody Sunday attack on the Edmund Pettus Bridge: a State-sponsored, not a KKK, White Extremist assault on blacks peacefully seeking to exercise their Constitutional Rights.

 

On a so-bright-it-hurts-the-eyes sunshine of a day, Rebecca (Becky) Chase, an Atlanta Bureau crew, and I covered that 16th Bloody Sunday Anniversary March for the Sunday ABC Weekend News. Significantly, that bright sunny day stood in sharp contrast to the growing darkness of the times. It was a time when, as today, people of color and the gains of the civil rights movement were increasingly under attack; a time when there was a push to “turn the clock back” to those ’60s Bloody Sunday days when the State and the KKK mobs moved as one; a time when there definitely should have been a “big picture” look at what was happening. As the American Friends Service Committee wrote in 1981, “the current upsurge in activity comes at a time of economic and social uncertainty,” which fostered the Klan’s scapegoating of and assaults on blacks.

 

Ronald Reagan had already deliberately tapped into that “social uncertainty” by launching his 1980 general election campaign at, of all places, the Neshoba County Fair in Philadelphia, Mississippi. Yes, the place where civil rights activists Andrew Goodman, Michael Schwerner and James Chaney were murdered by a Neshoba County faction of the Klan in June of that Freedom Summer of 1964, a long-festering wound of the Civil Rights era. One could not have gotten any deeper into Klan Country than Philadelphia, Mississippi. Needless to say, the Klan loved and fully endorsed the Reagan candidacy. If you thought Donald Trump was slow to repudiate the public endorsement of his avid White Supremacist supporters, he was downright speedy compared to Reagan. It took Ronald Reagan almost a month to repudiate the Klan endorsement. Reagan, whose slogan was also “Make America Great Again,” promised to “turn the clock back” to his version of a better time—one that had fewer protections and rights for African Americans. Reagan’s proposal to weaken the Voting Rights Act and his support of the South African Apartheid Regime received the full blessing of Bill Wilkinson, then the Imperial Wizard of the KKK.

 

Today, Donald Trump assiduously coddles, sympathizes with, and even praises his home-grown White Supremacists, while being hailed as one of their own not only by them but by their international counterparts, e.g. the New Zealand mass murderer. Indeed, there is a global connection, a global exacerbation of this White Supremacist extremism that needs that broad-brush, insightful, historical perspective. The parallels between the early 1980s and today are quite striking. In the words of the legendary baseball Hall-of-Famer Yogi Berra, it’s “deja vu all over again.” 

 

But despite some admirable chronicling of the current global spate of White Extremist carnage in major news publications, especially the New York Times, we still do not have an adequate, full, historical breadth-and-depth take on the ongoing, repetitive nature of this phenomenon. As the violence, whether “sympathetic,” “retaliatory” or “copy cat,” continues seemingly unchecked—another mass shooting, 12 dead in Virginia Beach just two weeks ago—such analysis is obviously long overdue. A study by the Center for Strategic and International Studies found that the number of terrorist attacks by Far Right perpetrators in the US quadrupled between 2016 and 2017, and that Far Right attacks in Europe rose 43% over the same period. The Anti-Defamation League attributes 73% of extremist-related killings in the US from 2009 to 2018 to the Far Right. Maybe some “Team” of journalists and mental health professionals will put this negative, corrosive cancer of hate-mongering, violence-inciting, fear-of-the-“other” nativist divisiveness into its needed “big-picture” historical and moral perspective. Hopefully they can and will help call to account all those perpetrators of violence, whether Radical Islamists, White Supremacists, or whomever. 

 

One thing is certain: we cannot make light of or downplay this burgeoning problem. Unlike Trump, we must remain vigilantly focused on identifying and holding accountable not only the perpetrators of violence against blacks and other people of color, but also their inciters, enablers, and all those who say and do nothing whenever these xenophobic, anti-Semitic, Islamophobic, anti-immigrant zombies rear their ugly heads. For whatever “Team” takes on this most necessary of tasks, may their efforts resound successfully on platforms far, wide and multi-dimensional, with many, many eyeballs. And maybe we can do more than just hope.

Wealth, Access, and Archival Fetishism in the New Cold War History

 

“Have you looked at the Djiboutian archives?” It is a jest I often direct at my counterparts in the field of Cold War history, poking fun at the almost crazed search among some Cold War historians for new archival documents in the former Soviet bloc and the Global South. At the heart of the joke, however, stands a more serious problem. The “New” Cold War history is now almost exclusively an elitist space, as it overemphasizes the discovery of new archival documents in multinational and multilingual contexts. 

 

The field of Cold War history has over-prioritized finding materials in previously unexplored archives. In other words, the supposedly “best” scholarship is now reserved for the elite few with ample research funds and the time to go on lengthy archival jaunts around Eastern Europe and the Global South. 

 

This turn to multinational research in Cold War history began with the collapse of the Eastern Bloc in the early 1990s. With the opening of communist archives, many historians flocked to Eastern Europe and Asia to look at previously unexplored sources. This was the “new” Cold War history, as researchers perused KGB archives and documents from the Chinese Foreign Ministry. However, with the ascent of autocrats in the non-Western world, most notably Vladimir Putin and Xi Jinping, many of these previously accessible archives closed their doors to foreigners. This ushered in the current phase of what I call archival fetishism, in which Cold War historians assume the posture of elite antique collectors, going from archive to archive across various national borders looking for their scholarly treasures.

 

Currently, conducting Cold War research is extremely expensive and environmentally detrimental. From buying visas and plane tickets to booking Airbnbs in several different countries, a Cold War-related research trip can easily cost several thousand dollars. In addition, with the world in an ecological crisis, more overseas flights do damage no matter how small one’s personal environmental footprint may seem. Once at the research location, the expenses continue: hotel costs add up and copying fees are often extortionate (though can you blame the financially strapped local archive?). 

 

The point of this critique is not to downplay the many successes of the new Cold War history; finding new primary sources is inherently constructive and an important means of producing new knowledge. It often provides different perspectives and valuable vantage points, especially for those in the formerly communist and colonized worlds. Additionally, the “new” Cold War history creates space for scholars native to those non-Western countries to bring their unique expertise into an exceedingly Western-dominated field. 

 

However, Cold War history has gone too far in its archival fetishism. The quality of scholarship is now mostly measured by how many different national archives one has perused and by the expansiveness of one’s source base rather than by the actual arguments or analysis. The diversity of multinational sources now seems to count more than the author’s analysis when it comes to determining a manuscript’s contribution to the field.

 

The most egregious example of archival fetishism is Columbia professor Charles Armstrong’s 2013 book, Tyranny of the Weak: North Korea and the World, 1950-1992. Based on multinational research materials from the former Eastern bloc, including Russia, China, and East Germany, Armstrong’s book was supposed to be a trailblazing piece of scholarship in the new Cold War history. However, Tyranny of the Weak was soon found to be full of text-citation errors, deeply flawed translations, and footnotes to archival documents that simply did not exist. After numerous scholars raised concerns about Tyranny of the Weak, Armstrong returned his prestigious 2014 John K. Fairbank Prize to the American Historical Association in 2017. Armstrong attributed his book’s many errors to his poor Russian language skills and disorganized research notes. However, many, if not all, of these errors could have been avoided had Armstrong not overreached in his search for new archival sources in the former Eastern bloc. As a result, Armstrong’s archival fetishism produced a deeply problematic book that has cast a cloud over the entire Korean studies program at Columbia University.

 

In addition, the image of predominantly wealthy white men traveling to relatively impoverished places and scouring their archives in order to bolster their own scholarship is reminiscent of colonial-era archaeology and anthropology. This archival exploitation hardly benefits the local populations, but it can improve the research profile of one’s tenure dossier. As a whole, Cold War history does more to help those in places such as Cambridge and New Haven than those in Dar es Salaam and Tirana. 

 

This type of archival fetishism also limits who can enter the field. As a former PhD student who studied at an institution in one of the most expensive American cities, I can attest that graduate students are usually short on money and that grants often do not cover the full cost of research trips abroad. The “best” scholarship in Cold War history among junior scholars tends to come from those at Ivy League universities and similarly elite institutions with huge endowments. The time and opportunity to learn multiple languages and go on lengthy international research jaunts is an elite privilege that is not talked about openly in the field but has a direct influence on the trajectories of many careers. 

 

The academic discipline of history is collapsing. This is not meant as a provocation. With rapidly declining enrollments of history majors in the United States and the growing perception of studying history as an inherently elitist endeavor, the discipline finds itself in peril. Cold War historians only reinforce these elitist hierarchies if they continue to hold up multinational research in previously unexplored archives as nearly the sole gold standard of quality scholarship. 

 

However, there are collective ways to level the playing field. The Woodrow Wilson Center’s Cold War International History Project is one such resource, as it collects, translates, and digitally houses thousands of archival documents from around the world. Digital archives are inherently egalitarian and can be used by anyone. The Wilson Center has been at the forefront of Cold War digital archiving since the collapse of the Soviet bloc in the early 1990s. Researchers can also share their findings with colleagues on listservs, such as H-Diplo. 

 

Historians also need to continue sharing their materials with digital archives, such as the Wilson Center’s, and not merely hoard their resources. I have heard of historians hoarding documents, a practice that mostly benefits the hoarder and their local moth population. If Cold War history is to become a more open and egalitarian space, sharing materials is necessary, or we will be no better than the colonial-era anthropologists who exploited “the natives” for their own self-interest.

On the 100th Anniversary of the Treaty of Versailles, It's Time to Reexamine Its Legacy.  

 

Just months before his 1923 Beer Hall Putsch, Adolf Hitler took aim at a familiar target: the Treaty of Versailles. He announced categorically, “so long as this treaty stands there can be no resurrection of the German people.” A decade later, Hitler declared “all the problems which are causing such unrest today lie in the deficiencies of the Treaty of Peace [Versailles].” 

Interestingly, many in the Anglo-American world concurred with these right-wing German nationalist talking points. The influential economist John Maynard Keynes criticized Versailles as a “Carthaginian peace.” Britain’s wartime prime minister David Lloyd George, who had helped draft the treaty, ultimately condemned it as vindictive and short-sighted. In the 1930s, similar attitudes among British and American statesmen allowed Nazi Germany to remilitarize and trample on Versailles with disastrous consequences.

As the one-hundredth anniversary of the Treaty of Versailles approaches, the narrative that the harshness of Versailles led directly to the Third Reich and World War II regrettably persists. However, that narrative collapses when the treaty and its historical context are examined. Versailles did not destroy the German economy, make Germany a permanent pariah, or inspire a German lust for revenge. Instead, the Nazis capitalized on a unique economic calamity (the Great Depression), German political instability, and deep-seated radical nationalist currents.

Unpaid Reparations 

Hitler claimed that crushing reparations had wrecked the German economy and reduced Germany to “serfdom” and starvation. However, these apocalyptic pronouncements had little basis in reality. 

Given that defeated Germany had caused massive damage during her invasion of Belgium and France, just war theory and millennia of historical precedent dictated that Germany would pay compensation for the destruction. 

When assessing the “fairness” of Versailles’ reparations, it is useful to compare them to the reparations imposed on France following the 1870-1871 Franco-Prussian War. After that conflict, Germany demanded indemnities far exceeding the German costs of war. In making these demands, Germany’s leaders aimed to cripple France for 15 years. As an added insult, German troops continued to occupy France until the reparations were paid in full.

The German-Russian Treaty of Brest-Litovsk in 1918 is also instructive. Under the terms of that treaty, Germany forced Russia to give up Finland, the Baltic States, Belarus, and Ukraine. These lands contained 55 million Russian subjects, over half of Russia’s industry, and nearly all of her coal mines. A later supplementary treaty compelled Russia to pay billions in gold marks and supply huge amounts of raw materials for German “protection.”

By contrast, the Allies hoped to weaken Germany without destroying her economy—a collapse that would harm them as well. Commercially minded Britain was particularly invested in a robust German recovery.

Although the initial bill for reparations was large, 132 billion gold marks (approximately $33 billion), the Allied leaders never expected Germany to repay this full amount. The 132 billion figure was merely a sop to Allied public opinion. The 1921 London Schedule of Payments divided reparations into specific classes. The largest class was interest-free, and the Allies acknowledged that it was unlikely to be repaid. Thus, scholars have estimated that Germany’s actual burden likely ranged between 50 billion and 64 billion gold marks. (Incidentally, Germany had made a counter-proposal of 51 billion gold marks.) Furthermore, Allied demands were largely confined to damages to civilian land and property.

The Allies’ real mistake was failing to appreciate Germany’s unwillingness to pay reparations. The two most disastrous economic events in early 1920s Germany, the Ruhr occupation and the related hyperinflation, were self-inflicted wounds perpetrated by an obstinate German government. By 1923, Germany was already regularly defaulting on its reparation payments, which led France and Belgium to occupy the Ruhr region. With government encouragement, workers in the industrial Ruhr began a general strike, wreaking economic havoc. Simultaneously, the government accelerated the printing of money, leading to hyperinflation. However, the economic damage caused by these events had been resolved long before the Nazis took power. 

Rather than being crushed by reparations, the German economy had markedly recovered by the mid-1920s. By 1928, wages were rising at nearly ten percent annually in some sectors, and industrial production exceeded 1913 levels. By the mid-to-late 1920s, Hitler’s exaggerated claims of Germany’s economic weakness found few receptive ears. In the 1928 elections, the Nazis garnered less than three percent of the vote.

During the 1920s, the Allies twice agreed to American-led reparation restructuring proposals. In 1924, the Dawes Plan reduced Germany’s annual payment in exchange for a German pledge to restart payments (Germany had stopped payments during the Ruhr occupation). America pressured France and Belgium to accept the Dawes Plan and leave the Ruhr, which they did in 1925. In 1929, the Young Plan provided Germany with even more generous terms. The total reparations were cut twenty percent and annual payments were reduced further. If desired, Germany could also defer up to two-thirds of annual payments.

In 1932, during the depths of the Great Depression, Germany and the Allies agreed to continue a 1931 moratorium on reparations. Thus, by the time the Nazis took power in 1933, Germany had not made a significant reparation payment in years. Even the payments Germany had made were less than they appeared, since nearly all reparation payments were funded by Allied loans. (Germany would ultimately default on these loans.) Of the payments made directly by Germany, most were payments in kind: raw materials such as coal and timber. All told, Germany paid less than two percent of the amount specified in the treaty. 

A Self-Imposed Isolation

Nazi leaders believed that the Treaty of Versailles had made Germany an international outcast. However, they had the causation backwards. Germany’s diplomatic isolation was not decided at Versailles, but by the Nazis’ aggressive violations of the treaty. Weimar Germany had been a full participant in the international community, a status Germany would have retained had the Third Reich not pursued policies of illegal rearmament and ruthless territorial aggrandizement.

Moreover, after World War One, Germany was quickly reintegrated into the global economy. Germany was a world leader in industrial, chemical, and technological products, and the Allies simply could not afford to have its productive capacities remain idle. Massive loans and investments of international capital (primarily from America) poured into post-war Germany, helping to reinvigorate German heavy industry. Estimates put these inflows at over $25 billion. From 1925 to 1929, German exports rose over forty percent. By 1929, Germany’s share of world trade was higher than it had been before World War I.

The Treaty of Versailles also did not foreclose Germany’s return to a prominent place within the European order. Under the guidance of visionary Foreign Minister Gustav Stresemann, Weimar Germany rejected the alienation of the immediate post-war years. In 1925, Germany and several other European nations signed the Locarno Treaties, setting the borders of Western Europe and significantly improving European diplomatic relations. The following year, Germany joined the League of Nations. In 1928, Germany signed the Kellogg-Briand Pact, renouncing war as a means of resolving international disputes. Aristide Briand, the French Foreign Minister (and co-author of Kellogg-Briand), shared the Nobel Peace Prize with Stresemann for the work they had done to promote Franco-German reconciliation.

Versailles did require German territorial concessions, most notably the return of Alsace-Lorraine and the creation of a Baltic Sea corridor for reconstituted Poland. These losses and the elimination of Germany’s few overseas possessions surely rankled some Germans. However, the Versailles conditions were far more justifiable and lenient than those imposed on the other defeated nations. 

Although Alsace-Lorraine had changed hands throughout history, it had been part of France for generations. As Germany had seized Alsace-Lorraine by conquest in 1871, its return to France was not particularly controversial. As for the Polish corridor, this thin strip of land was ethnically Polish. Predominantly German Danzig (today Gdansk) became a free international city instead of joining Poland.

At Versailles, Germany lost approximately 10% of its pre-war territory. Austria-Hungary’s successor states fared far worse. In the Treaty of St. Germain, Austria gave up half its territory and was left so weak that it immediately sought to join Germany. Hungary lost over two-thirds of its pre-war territory in the Treaty of Trianon. The Treaty of Sevres intended to dissolve the Ottoman Empire and leave ethnic Turks with a greatly diminished homeland. Even after Versailles, Germany remained the largest European nation west of the Soviet Union. Geo-strategically, Germany was perhaps in an even better position than before World War I, when she had been sandwiched between hostile France and Russia. Now, a sizable Polish buffer state separated Germany and the Soviet Union.

The Myth of War Guilt

Perhaps the most misunderstood passage in the Versailles treaty is Article 231. Wrongly terming it the “war guilt clause,” apologists claim that this article shamed the entire German nation and inspired a burning desire for revenge. Yet, when read in context as the introduction to a section on reparations, it becomes obvious that Article 231 has little to do with “war guilt.” The article merely provided the basis for collecting reparations by establishing German legal liability. The Treaties of St. Germain and Trianon used the exact same wording regarding legal “responsibility.” However, the governments and people of Austria and Hungary never made it an issue. In contrast, German nationalists willfully misconstrued Article 231’s meaning, to the shock and dismay of its American writers. 

Ironically, rather than saddling Germany with boundless moral guilt, the reparations section of the treaty actually limited Germany’s financial liabilities. Article 232 recognized Germany’s depleted resources and reduced Germany’s liability primarily to civilian damage.

Unquestionably, Germany was responsible for the destruction its armies brought upon Belgium and France. In addition to the thousands of civilians killed by the Germans during their invasion, four years of fighting had utterly devastated the region. After the war, France deemed hundreds of square miles too dangerous for human resettlement because of unexploded ordnance. (Many of these “red zones” remain uninhabitable to this day.) The Germans also pursued a scorched-earth policy during their 1917 retreat, flooding mines, leveling villages, and burning fields.

German behavior in Belgium and France demanded recompense, and Versailles provided it. However, the Nazis had little interest in modifying the treaty to make it “fairer.” Instead, as the U.S. Consul General in Berlin noted in 1933, the Nazi program “strived in every way to impose its will on the rest of the world.” 

During the Paris Peace Conference, French Premier Clemenceau had chided President Woodrow Wilson. “Don't believe they [Germany] will ever forgive us; they seek only the opportunity for revenge. Nothing will extinguish the rage of those who wanted to establish their domination over the world and who believed themselves so close to succeeding.” Even given Clemenceau’s Germanophobia, he was absolutely correct about Germany’s militarist right. Nationalists felt ashamed not because Germany had been blamed for the war, but because Germany had lost it. 

Revisionist History

Unique circumstances during the final years of World War One and during the Armistice allowed a dangerous revisionist myth to emerge. Beginning in 1916, Germany was under a virtual military dictatorship. Tightening the controls of an already authoritarian state, the military leaders enforced strict censorship and expanded patriotic propaganda. As a result, although German civilians suffered greatly under the British blockade, they remained largely unaware of German reverses in mid-1918. The military leadership recognized in early autumn that the war was unwinnable and pushed for an armistice. However, they concealed the totality of Germany’s defeat from the population, claiming that since Germany had not been overrun, the army remained unbeaten. Disturbingly, moderate politicians such as Friedrich Ebert perpetuated these delusions, declaring to returning troops, “no enemy has vanquished you.” 

Given the confusion surrounding the armistice, many Germans lived in a sort of “dreamland” for several months, uncertain whether their nation had truly been defeated. For those ordinary people, the Versailles treaty was a shock. It removed any doubts about the war’s outcome. Instead of accepting reality, Hitler and radical German nationalists sought to rewrite history. 

Their first objective was deflecting blame from the German army, an easy task given the limited flow of uncensored information and German society’s veneration of the military. Next, they cast blame on labor organizations, civilian leaders who accepted Versailles, and of course, the Jews. They claimed that had these civilian groups not stabbed the German army in the back, Germany would have been victorious.

Hitler’s scapegoating of Jews and “November criminals” (the right-wing term for politicians who supported the Armistice) directly related to his condemnation of the Treaty of Versailles. If Germany had been victorious, then Versailles was an unjust swindle. By attacking Versailles, he was attacking the basic fact of Germany’s defeat. Versailles’ terms were largely irrelevant as the treaty itself represented German weakness. Hitler’s rabid anti-Semitism and anti-Liberalism called for the elimination of those he held responsible for Versailles. Correspondingly, his plans for rearmament and European conquest aimed to obliterate the world created by the treaty. Only then could Germany rise again.

An Imperfect Peace

The Paris Peace Conference certainly had its faults. While trumpeting the lofty ideals of self-determination and international cooperation, the conference was marred by self-interest and petty bigotry. 

Dismissive treatment of Arab demands for self-government demonstrated that Britain and France intended to continue their imperialist ways. The consequences of their division of the Middle East endure today. The U.S. and Britain rejected Japan’s proposal for an amendment on racial equality. Although most nations supported the resolution, President Wilson overrode them. The racist rejection angered Japan and strengthened its hardliners, who argued that military power was the only road to equality. The U.S. also failed to join the League of Nations; America’s absence and the League’s watered-down powers rendered it wholly inadequate to preserve world peace.

However, despite valid criticisms of the Paris Peace Conference, the Versailles treaty was far less flawed than many have believed. While the Allies erred in not giving Germany a seat at the negotiating table, the terms of the treaty did not destroy the German economy, isolate Germany diplomatically, or assign “war guilt” to Germany. Most importantly, Versailles did not make Hitler inevitable. 

Virtually any peace agreement short of full Allied capitulation would have served Nazi propaganda ends. Germany never accepted its defeat in World War One and as a result, no treaty would have been acceptable. 

The Treaty of Versailles has long been an easy target for Anglo-American critics since it allegedly “explains” Hitler’s rise. If the Allies had been more conciliatory, the narrative goes, then Germany would not have gone Nazi. That false narrative had terrible consequences as Britain and the U.S. tolerated Germany’s systematic dismantling of the terms of the treaty. By the time the Allies realized that Hitler sought not the revision of a treaty, but the revision of history, it was too late. 

Why We Need Better Children's History Books

 

My book, An Indigenous Peoples’ History of the United States, was published five years ago and has found a wide readership, especially among teachers and college students. Now, the book has been brilliantly adapted for young readers by Jean Mendoza and Debbie Reese, both scholars specializing in the absence or flawed presence of American Indians in children’s literature. 

 

The resulting text in no way dumbs down the original, maintaining the recurrent theme that the history of the United States is a history of settler-colonialism; that it is a history of aggressive war—often genocidal—against Native nations in order to acquire their lands and resources and turn the sacred and sustaining into real estate; that the wealth of the colonial elite founders of the United States derived from the capital value of the bodies of enslaved Africans and their forced unpaid labor, as well as speculation in seized Indigenous lands; and that their motives for independence were economic: to expand across the continent without the restrictions imposed by the British Crown.

 

Throughout the text, the present is visible in that past: modern police forces derived from slave patrols still target Black men; real estate and private property remain the basis of wealth; white supremacy persists; so does the colonization of Native nations, Hawaii, Alaska, Guam, and Puerto Rico; a constitutionally guaranteed armed citizenry and continued gun violence create a population crushed by fear of the other; the unending war against Mexicans continues on a disputed border wrought by the invasion, occupation, and annexation of the northern half of Mexico; and forever aggressive wars against peoples of the non-European world go on, with nearly a thousand US military bases outside the United States and floating war machines on every coast and at sea.

 

Some parents, educators, and especially politicians and mainstream historians regard such naked truths as inappropriate or even harmful for minors. But the proverbial cat is out of the bag. Indigenous scholars, poets, novelists, theater artists and filmmakers, environmentalists, and other activists, who have always been there but little heeded, have formed a critical mass of documentation, testimony, and interpretation of US history that cannot be refuted or assimilated into current standard or multicultural narratives.

 

A current example of why this book for young people is so needed is the controversy over a mural titled “Life of Washington.” The mural covers 1,600 square feet of wall at the entrance of George Washington High School in San Francisco, which opened with the mural as part of the structure in 1936, during the Depression. It is one of the few examples of public art that presents a counter-narrative to the common depiction of the founding of the United States, portraying Washington as a wealthy slave owner and Indian land speculator. In the mural, bent enslaved Africans work the fields as palatial Mt. Vernon looms; Washington, with his land-surveying equipment on one side of the mural, points westward over a slain Native as white settlers rush to occupy land taken by violence. 

 

Paloma Flores of the Pit River nation in northern California is the San Francisco school district’s program coordinator of Indian education and thinks the mural should be removed: “We do have to speak the truth rather than continue to support the dominant narrative view of erasure, the romanticizing of the settlement process, and the lies that have been told.” Flores observes that without updated textbooks reflecting the truth, the harsh images depicted in the mural can be hurtful to Native and Black students.

 

This is what I have heard echoed in the past five years from K-12 teachers:  they are hungry to learn how to incorporate Indigenous history into classes and to have texts that provide context to truthful images, like the Washington mural, and racist depictions of history, which are far more common. We decided to adapt the book in response to such calls from educators and parents.   

 

Each chapter of the book includes items that enhance the text. There are maps, photographs, and inset boxes that ask students to engage with the context. One example is “George Washington: Hero? Or Monster? It depends on who you ask!” It gives readers the opportunity to think critically about Washington's work as a land speculator, how it benefited him personally, and why the Senecas called him "Town Destroyer." 

 

Children are able and eager to comprehend history. Today, young people have access to vast information on demand and can easily debunk myths about US history. However, assembling a coherent narrative that inspires students to imagine a different world is much more complicated than a Wikipedia search. Teachers can only carry out this task if they have access to narrative histories that are decolonized.

How Estes and Ike Transformed the New Hampshire Primary – And How JFK Defined It

 

In the last sixty years, the Granite State’s first-in-the-nation primary has crushed the ambitions of many a presidential hopeful—including incumbent presidents. The nearly two dozen Democratic candidates now descending on New Hampshire understand this all too well.

 

The state did not become a force in national politics simply because it holds the first presidential primary. It was made by presidential candidates who could win only by challenging backroom politics and party bosses. Instead of courting likely state delegates to party conventions, outsiders sought the support of rank-and-file members. As a result, in the 1950s, the blessing of New Hampshire voters began to hold disproportionate sway in nominating races. The national media took note, and the “popularity contest” (more than delegate commitments) became the mark of success in other states as well. Ever since, candidates have followed in the footsteps of Dwight Eisenhower, Estes Kefauver, and John F. Kennedy.

 

The presidential campaign of 1952 established the importance of the primary to the fate of the election ahead. That year, two outsiders made a stir in New Hampshire through the increasing role of television and celebrity.

 

On the Democratic side, it seemed unlikely that President Truman would be displaced at the top of the ticket. Yet Senator Estes Kefauver (D-TN) managed to do the unthinkable. Kefauver had entered Americans’ living rooms with televised hearings on organized crime and quickly capitalized on his name recognition. In a situation that anticipated Eugene McCarthy’s strong 1968 challenge to President Lyndon Johnson, Kefauver entered the New Hampshire primary and finished ahead of Truman by 4,000 votes. Truman bowed out of the presidential campaign shortly thereafter. Although Illinois governor Adlai Stevenson ultimately won the Democratic nomination, Kefauver’s bid had enabled party members to express themselves and underscored the power of a strong showing in New Hampshire.

 

The Republican primary race was marked by even greater excitement. Ohio Senator Robert Taft, “Mr. Republican,” hoped to bring his party to the White House for the first time in two decades. As the front-runner, he might have scored a runaway victory had one name been kept off the ballot—General Dwight Eisenhower.

 

Ike was still in Europe and was not campaigning. In fact, he was not a declared candidate. But New Hampshire Governor Sherman Adams helped to enter Ike’s name on the ballot. Ike prevailed with his own kind of celebrity—a victorious general who was standing up to Soviet threats. Taft’s campaign then collapsed in other states and, after several weeks, Eisenhower filled the vacuum and formally entered the race. In November he defeated Stevenson. 

 

Four years later, American voters were treated to a rematch, but the New Hampshire campaign proved to be less dramatic. Primary voters of both parties chose convention delegates, but there was no race in the “popularity contest” indicating preferences for the presidential nomination. Eisenhower sought reelection. Kefauver again crisscrossed New Hampshire and handily won the primary. Playing an older game, Stevenson overlooked the state but amassed the most delegates on his way to victory at the Democratic convention in Chicago. Kefauver landed the vice presidential slot as consolation, beating out a rising star from Massachusetts.

 

While name recognition had played a big part in the primary campaigns of 1952 and 1956, the prospect of serious battles for the votes of Granite Staters increasingly led candidates to tap into the state’s party organizations. This was especially true for Democrats, whose party had spent years in the political wilderness of New Hampshire. The state party was still a predominantly Catholic institution, split between French and Irish, with little structure outside of cities like Manchester and Nashua. The GOP consistently outmaneuvered it. Any Democratic hopeful willing to tend to the hard work of restructuring the party and extending its base in New Hampshire might gain a definitive advantage over rivals for the nomination.

 

The first candidate to do so was John F. Kennedy, who found a close ally in Mayor Bernard Boutin of Laconia. Boutin had travelled across the state by Kefauver’s side in 1956. Soon thereafter, thinking of the next presidential contest, Kennedy reached out to him and won his support. In 1958, Boutin won the Democratic gubernatorial nomination and began to reform the party. He inspired a new sense of confidence among New Hampshire Democrats and helped to build a party organization that would have the means and desire to support Kennedy in 1960.

 

Boutin helped to revive the party and to make it a viable political option again. So too would Kennedy, in 1960, simply by visiting places long written off to the GOP and by assuring voters that the convention would hear their voices. Ultimately, the two men’s efforts facilitated the rise of Senator Thomas McIntyre, three-term governor John King, and Representative J. Oliva Huot in the decade that followed.

 

Kennedy and Boutin discussed “the very bad experience that others had had in New Hampshire, it being more a spoiler in presidential primaries than a builder.” They were no doubt thinking of Truman and Taft. But they also needed to avoid a quixotic campaign like Kefauver’s.

 

Kennedy had no major rival in the New Hampshire “popularity contest.” But he sent a clear message to party bosses and voters alike two months before the vote, when he declared, “It is incumbent on those who seek the Democratic nomination for President to be willing to submit their names in the primaries . . . The days when presidential candidates—unknown and untested—can be nominated in smoke-filled rooms, by political leaders and party bosses, have forever passed from the scene.” He also responded to Truman’s view that the primaries were but “eye-wash.”

 

Kennedy was not entirely an innovator, however: he brought the same personal touch that had paid off for his fellow senator in 1956. The Boston Globe reported that he “campaigned Kefauver-style, greeting people along the streets of the two biggest cities, exchanging handshakes, giving autographs, dropping into a luncheonette for coffee, stopping at a courthouse, fire station, talking at factory gates and touring mills.”

 

Kennedy travelled the state and earned the endorsement of long-time Manchester mayor Josaphat Benoit. The interest and enthusiasm raised by his campaign more than compensated for the hostility of the Democratic old guard. This also became a family affair. His mother Rose visited the state; brother Edward “brought the house down” when speaking French in Suncook. Kennedy drew larger crowds than Kefauver ever had. The organizational support he received enabled his campaign to spend relatively little on the race.

 

This multifaceted strategy paid off. Kefauver had polled some 22,000 votes in 1956; Kennedy won 46,000. In the process he overcame the fear of many New Hampshire Catholics that the national political stage might not be ready for someone of their faith. We can only surmise how large a part New Hampshire had in making Kennedy a successful presidential candidate—in regard to his campaign style, his message, and his understanding of issues near and dear to voters.

 

Like Kennedy, Vice President Richard Nixon faced little opposition in the first primary. That did not diminish its overall importance. The two winners eventually became their parties’ nominees, serving the state’s cause as an inescapable stop for any presidential candidate.

 

As Bernard Boutin later remarked, “If [Kennedy] hadn’t done well in New Hampshire, I think that the West Virginia victory and the victory in Wisconsin would have been impossible.”

 

Since then, New Hampshire has regularly shaped nominating races—from Eugene McCarthy’s grassroots campaign to Ed Muskie’s alleged tears, from the debate financed by Ronald Reagan to Pat Buchanan’s challenge to another sitting president.

 

But, as we are now seeing, the primary has been constant in the style of campaigning it demands and in perceptions of its significance. Those factors, as important as its first-in-the-nation role, resulted from a relatively brief period in U.S. political history—and from the efforts of three men whose hopes hung on the support of Granite Staters.

Karin Wulf on Why History Hashtags on Twitter are Inherently Inclusive

Karin A. Wulf is an American historian and the executive director of the Omohundro Institute of Early American History and Culture at the College of William & Mary. Wulf began her tenure as the Director of the Omohundro Institute on July 1, 2013. She is also one of the founders of Women Also Know History, a searchable website database of women historians. Additionally, Wulf worked to spearhead a neurodiversity working group at William & Mary in 2011. She is currently writing a book about genealogy and political culture in Early America titled, Lineage: Genealogy and the Politics of Connection in British America, 1680-1820. Her work examines the history of women, gender, and the family in Early America.

 

What are you currently reading?

I’m never reading just one, and I suspect that’s a regular phenomenon among historians!  The new books on my desk right now, with bookmarks stuck in, marginalia and notes accumulating, include Stephanie Jones-Rogers, They Were Her Property: White Women as Slave Owners in the American South (Yale UP, 2019) and Mary Thompson’s The Only Unavoidable Subject of Regret: George Washington, Slavery, and the Enslaved Community at Mount Vernon (UVA Press, 2019) for research; Kathleen Fitzpatrick’s Generous Thinking: A Radical Approach to Saving the University (JHU Press, 2019) for academic culture; and Jesse Cromwell, The Smuggler’s World: Illicit Trade and Atlantic Communities in Eighteenth-Century Venezuela for teaching and because it’s new from the OIEAHC (& UNC Press, 2019). 

 

What is your favorite book, or which book has had the greatest impact on you?

A book that I return to often, for teaching, thinking, and writing, is Michel-Rolph Trouillot’s Silencing the Past: Power and the Production of History (1995).  This work foreshadowed and complements a lot of more recent work on archival absences and the power of archival constitution, including from Marisa Fuentes and Ann Laura Stoler.  It’s so profound, and I never fail to learn something new, or to be motivated by reading and teaching it. 

 

What do you think makes for a good history book?

 We comprehend the past through the compelling relationship between evidence and argument.   The fresh investigation or analysis of evidence (be it documentary, material, oral), and the clearest presentation of historical argument on its basis make for the strongest work.  I often use Annette Gordon-Reed’s Thomas Jefferson and Sally Hemings: An American Controversy (UVA Press, 1997) as an example.

 

Why did you choose history as your career?

I wanted to be a political journalist!  But I fell in love with the early modern period in classes about feminist art history, Shakespeare, and the early United States (specifically early Congress).  I didn’t want to stop reading or writing about it.

 

What would you like people to know about you as a historian that is not included in your general biography?

A lot of the work I do directing the Omohundro Institute is developing programs; I’m passionate about the importance of history, and particularly the importance of early American history (what I would call “Vast Early America”).  Creating events and fellowships and other programs is as vital a part of being a historian, for me, as my research and teaching.  The kind of projects we’re working on, to continue to articulate the significance of an expansive understanding of the early American past, is going to be even more important as we approach the 250th anniversary of the American Revolution.  That’s the kind of commemoration that should be looking to the work of decades of scholarship since the bicentennial showing us the continental, Atlantic, and even global context for an era that had very different meanings and very different impact across diverse, dynamic societies. 

 

Continuing with the theme of books, I know that you run a book club at the Omohundro Institute, that you've written about on The Scholarly Kitchen (https://scholarlykitchen.sspnet.org/2018/08/01/engaging-public-scholarship-case-small-scale/). You cited outreach as one of the reasons why you began the club for mostly retirees. What does the term outreach mean to you and how is the club going? 

That’s a great example of OI programs that I think are part of my core work as a historian.  The book club is run for local Williamsburg folks, almost entirely retirees.  We read 4-6 books a year, serious works of scholarship (you can see the reading lists here:  https://oieahc.wm.edu/events/reading-series/ ).  It’s been going for three years, and the group now really digs deep into the footnotes, asking questions about sources and about the historiography.  I think scholarship can be accessible to everyone; I’ve argued on Scholarly Kitchen that “democracy needs footnotes” (https://scholarlykitchen.sspnet.org/2016/11/07/does-democracy-need-footnotes/) and I really believe that—understanding the evidence for an assertion, whether historical, economic, or political—is absolutely crucial.  

 

You are an active #twitterstorian. How do you think history and the study of history is changing with platforms such as Twitter? How is it affected by hashtags/movements such as #VastEarlyAmerica and #womenalsoknowhistory? 

I love Twitter for communicating with all sorts of folks, scholars in history and other disciplines, general readers, and teachers.  The hashtag, as one of my colleagues on a recent panel about #VastEarlyAmerica, Christian Crouch of Bard College, put it, is inherently inclusive.  Anyone can use it, claim it, debate it.  It’s the property of the virtual town square.  I think #WomenAlsoKnowHistory has been really important, along with like hashtags, for calling attention to the often unthinking ways that historical (and other) expertise is represented.  Almost 3500 women historians have signed up for the database at womenalsoknowhistory.com, making public their areas of expertise; we hear all the time about how much people are using it to find folks for media requests, panels, and more.  I hope that will only continue to amplify women experts in public.

 

You also write for The Scholarly Kitchen, your own blog (karinwulf.com), the OI blog, Uncommon Sense, and in other forums. Why do you think it is important to utilize these different avenues for telling history?

These different venues have different audiences and missions.  It’s a real privilege to write for The Scholarly Kitchen, which is an enormously important platform in scholarly publishing.  There is so much churn in that realm, where funders, publishers, and libraries meet, that often escapes the notice of the scholars whose work is being funded, published, and disseminated. And many of the enormous changes, from metrical evaluations based on citations to digital platforms, are being developed and managed based on the publication patterns of the high-volume and high-dollar STEM fields.  I’m so glad to have the chance to bring a humanities (and history) perspective, and beyond Scholarly Kitchen I speak about this to as many groups, through webinars and conferences and other gatherings, as possible.  Our needs in history and the humanities generally are very different, and the impact that decisions and practices based on STEM can have on us is profound.   

I write on my own blog and elsewhere about my research and teaching, and about other issues for historians, because I think those are equally important to share with whatever audience is reading  —and sometimes, especially on my own blog, I’m writing because I just really like to write!  

 

How do you think the study of history will continue to change in the next few years? 

It is exciting to see new forums for communicating history developing.  The African American Intellectual History Society, for example, and its powerhouse blog, Black Perspectives, is one of the most exciting developments of the last decade.  The consistently generative work that is being gathered and shared through that group has had an enormous impact not only on the huge audiences that participate and read, but also on the ways others have been encouraged to follow their example.  One thing I took from their work early on was that, per my comments on my OI local reading group, serious history has a public audience.  A lot of hand-wringing about how scholars need to learn to communicate with the public emphasizes accessible writing and more general topics.  But if you look at something like Black Perspectives, what you see is that accessible writing is simply good writing—and that serious, in-depth, sophisticated historical writing will always have an audience well beyond the academy. 

 

What are you doing next?

I’m finishing a book about the power and practice of genealogy in 18th-century British America.  I look at the ways that all kinds of families and individuals created family histories for complex, personal reasons, some emotional, some practical, and the ways that institutions required the production of genealogical information.  It’s a big book in that the research has taken a long time, and I’m really excited to be in the home stretch.  I am lucky to advise a fantastic (and large!) group of PhD students, whose work is always teaching me new approaches and new things about the early American past.  At William & Mary I’m also co-chair of the Neurodiversity Working Group, which leads Neurodiversity at William & Mary, an initiative to create a campus climate of disability support and an appreciation of brain differences.  We’ve got a big year ahead of us there, and at the OI we are working with colleagues in ASWAD on a conference centered on the 400th anniversary of the first Africans in Virginia, and around issues of inclusion in scholarly practice, including publishing.  I love my research and teaching, and I love my work at the OI on behalf of issues that I see as deeply connected.

The Perfect Mentor: Alan Brinkley

 

As anyone who has experienced graduate school can tell you, the relationship between a doctoral student and mentor is special.  A conscientious mentor can work magic, helping to transform a rough, tentative thesis into a defensible treatise and eventually an important book.  An advisor can be a wellspring of motivation, and might even save a wayward student from the trap that ensnares too many Ph.D. candidates—the unfinished dissertation.  By tradition, a mentor will be the first to address you as “Doctor” after you defend your dissertation, and then help you gain a foothold—or even just a toehold—in the job market.  You may think of your sponsor every day for the rest of your career, perhaps the rest of your life.  

 

I had the perfect mentor.  Alan Brinkley, the revered Columbia University historian and Allan Nevins Professor Emeritus, who died on June 16, did all of that for his advisees, and much more.   

 

Alan arrived at Columbia during a time of transition.  In 1990, John Garraty, a prolific writer and author of the best-selling textbook, The American Nation, retired as the History Department’s twentieth-century political historian, leaving a huge void and a cadre of graduate students without an advisor.  The Department launched a search for a senior historian, indicating that it wanted to hire an eminent scholar who could provide seasoned guidance to students.  I remember attending Alan Brinkley’s interview lecture, and the Department subsequently hired him.  After concluding his teaching and advising responsibilities at CUNY during 1990-91, he joined the Columbia faculty in September 1991. 

 

I met Alan in 1990, while he was a CUNY professor.  I was shaping a dissertation topic on Gerald R. Ford’s presidency.  Early in the Fall 1990 semester, Alan took me out to lunch at a restaurant near his West 81st Street apartment, and we discussed ways to structure that study.  

 

I was impressed that Alan set aside time for me. Although he had yet to start at Columbia, he began to be my mentor, approaching my topic in a rigorous, organized manner. I think he discerned that after Garraty’s retirement, graduate students specializing in U.S. political history hungered for guidance, and those of us in the pre-dissertation stage wanted to lay more groundwork but needed help in doing so.  During that lunch, Alan showed he was already deeply invested in assisting me. 

 

That meeting also marked the beginning of my exposure to Alan’s many kindnesses—the “Brinkley treatment” that enriched my graduate school experience and the years beyond.  Later that semester, when Alan learned that I would stay in Manhattan during Thanksgiving, he invited me to dinner at his apartment with his wife Evangeline and their extended family.  For a graduate student away from home, a genuine Thanksgiving meal was the greatest treat imaginable.  

 

Over the years to come, the time Alan spent with me exemplified a quality that made him a sterling mentor.  For a university professor, time is a tyrant.  Classroom instruction, research, publications, and administrative duties exert a pulverizing grind, and no workday has enough hours for it all. Yet Alan juggled these imperatives deftly, while providing generous time for his students.  Sitting in Alan’s office—receiving his individual attention and imbibing his wisdom—was always flattering.  He had a gift for putting visitors at ease, never giving any pretense that he was a world-renowned historian with multiple demands tugging at him. Once, when I was in his office, a line of students formed outside, and I expressed concern about making them wait. “That’s okay,” he said, “I’m with you now.”  I think of Alan today when students visit my own office.  Invariably, the phone rings, texts ping, and emails beckon.  But like Alan, I try to offer students unalloyed attention and as much time as they need to discuss whatever prompted their visit.  

 

Watching a mentor’s behavior offers other enduring lessons.  Alan’s schedule and routine were a clinic in understanding how to use time wisely and maximize productivity.  He had a fantastic work ethic, churning out books and articles, yet as modest as he was, he never betrayed any sense of a relentless schedule.  While I was in graduate school, Alan was researching a new book, poring over the records of the Temporary National Economic Committee and the works of economists like Alvin Hansen, later publishing the results as the critically acclaimed The End of Reform:  New Deal Liberalism in Recession and War.  In 1992, he received a contract to write a biography of magazine magnate Henry Luce.  For the next fifteen years, he mined the Time-Life archives and wrote this book—while producing at least two other monographs—showing a rare ability to maintain multiple works simultaneously on his research agenda.  (In 2010, his Luce biography, The Publisher, became a Pulitzer Prize finalist.)  During his first year at Columbia, whenever I visited him at his office, he was usually working on two other projects as well.  One was abridging his textbook, American History, which he published as the shorter The Unfinished Nation, a volume that surpassed the longer version in popularity.  

 

The other project was Alan’s classroom lessons, including new course preparations and a reconfiguration of his lectures, which he rewrote to fit Columbia’s classroom schedule.  From the start, I observed firsthand Alan’s two great passions, writing and teaching, and his dedication to both.  He was a master at these crafts, and seeing him labor at them provided instructive lessons in their importance.  

 

To evaluate professors, academia traditionally embraces those two criteria—teaching and publication—plus a third, service.  Alan’s mentorship made me conscious of a fourth element:  public dissemination of scholarship. “Publication,” after all, implies addressing the public, and Alan emphasized that historians need to reach general audiences.  He was in heavy demand as a speaker and television pundit; I recall walking to campus and seeing a TV crew preparing to interview him in the courtyard outside the History Department.  Alan appeared on The Today Show and ABC’s political coverage, and published articles in The New York Times, Newsweek, The Atlantic, and other popular venues. Alan’s alumni all subscribe to a Brinkley school of history, striving to write for the public.  

 

In 1995, I became Alan’s first student to defend a dissertation at Columbia, and he and Evangeline later took me and a classmate to dinner to celebrate. It was, once again, the Brinkley treatment—kind, generous, and memorable—making his students feel special.  Graduation marks a new phase in the student-mentor relationship; the sponsor next helps a former advisee to secure employment.  The academic job market is daunting, as Alan warned, but just before the Fall 1996 semester, after I had worked a year as a Writing Fellow for Oxford University Press, he and Eric Foner informed me of a position at Long Island’s Dowling College.  It was a temporary, one-year appointment, but miraculously, my one year there turned into nineteen years, as I remained at Dowling until the school closed in 2016.

 

Landing a job near New York City enabled me to stay in close contact with Alan, and he continued to welcome my visits.  In September 1998, I stopped by his office the day before he left for London to be Oxford University’s Harmsworth Visiting Professor.  That day, he was unstinting with his time to the point of self-sacrifice.  I asked, “You must be all packed and ready to leave tomorrow?”  Laughing gently, he said, “Actually, we still have a long way to go.”  A wave of guilt swept over me.  There I was, taking up Alan’s time, when he could have been preparing to leave for England.  Yet talking to me that afternoon, he never seemed rushed at all.  

 

Alan maintained contact with former students in many ways.  I was in graduate school before the era of email—which seems like antediluvian times now—but once Alan began emailing, he was a most reliable correspondent, and not just with me.  When I graded the Advanced Placement U.S. History Exam, a high school teacher reported that one of her students peppered Alan with email questions all year as he prepared to take the AP Exam.  Alan, this teacher said with admiration, answered every email.  

 

In the world of AP U.S. History, Alan loomed as a mythic figure because of his textbooks.  In 2010, he gave the keynote address at the AP Exam grading in Louisville, and many teachers wanted their picture taken with him.  Alan had a self-effacing sense of humor, and he commented to me that when he was a child, people would stop his father, NBC news anchor David Brinkley, on the street and ask for his autograph.  “It’s only at the AP U.S. History grading,” Alan chuckled, “that I get a sliver of my father’s celebrity.”  

 

Remarks like that reminded me of one of Dwight D. Eisenhower’s favorite aphorisms.  A furious worker himself, Eisenhower reminded aides to keep a sense of perspective, saying, “I want you to take your work seriously; don’t take yourself too seriously.” Alan always strove for such a balance in his life and career, and that was perhaps the most edifying lesson I learned from him.  As devoted as he was to history, he led a balanced life, with ample activities outside the discipline.  He swam laps for exercise.  He liked to cook.  He enjoyed theater, and he gave me tickets to a play that Evangeline produced, Distant Fires, which offered searing commentary on racial tensions at the workplace.  After the 9/11 terrorist attacks, Alan volunteered at Ground Zero.  Most importantly, his family was dear to him.  After their daughter Elly was born in 1991, he and Evangeline invited me to their home to meet her, and I recall seeing Elly for the first time, sleeping peacefully in her crib.  Alan’s protean interests may remind all students to keep the wheels of their activities balanced.  

 

While Alan Brinkley might not have seen himself as a celebrity, for American historians, he was a star—especially at the graduate level. Beyond his impeccable scholarship and insightful lectures, he was a mentor to dozens of doctoral students. In that role, he was perfect, providing profound lessons that will last through many careers, and likely, many lifetimes. 

African Art Belongs Where It Was Created. Give It Back.

Steve Hochstadt is a professor of history emeritus at Illinois College, who blogs for HNN and LAProgressive, and writes about Jewish refugees in Shanghai.

 

 

Two weeks ago, I wrote about the controversy at the Museum of Fine Arts in Boston, provoked by racist comments directed at a group of black 7th-graders who were on an outing as a reward for excellent school work. The Museum leadership proposed new procedures to be more welcoming. But the racial tensions that a visit to the MFA could arouse can’t be fixed by even the friendliest attitude toward African-American guests, because racism is embedded in the whole enterprise of international art collection.

 

We visited the MFA last week to see their new exhibit on the Bauhaus, whose creativity, aesthetic, and politics we very much appreciate. The path to the exhibit led through a much larger display of African art objects.

 

The cultural context in which African bodies were represented in objects is strikingly different from what we call Western art, so African artifacts in museums inevitably appear exotic to the Western eye. Because European whites assumed that their civilization was superior to all others, the creations of African artists were long considered primitive. The MFA, like most museums now, makes an effort to counter the myth of white Western superiority by stressing the high levels of skill embodied in their collection of bronze and ivory sculptures made over six centuries in the independent Kingdom of Benin, now part of southern Nigeria. The Benin kings sponsored workshops to produce objects of the highest quality, often depicting religious rituals, veneration of ancestors, and kingly power. The introductory sign said these pieces “rank among the greatest artistic achievements of humankind.” Other commentators on Benin art agree that the artworks in the MFA represent the finest creations of African artists.

 

The MFA signage explains a little about how these objects came to be in Boston. The Benin Kingdom was invaded by the British in 1897, Benin City was razed and burned, the King overthrown, and about 4,000 artworks were seized as “spoils of war”. They are now housed in museums in London and Oxford, Berlin, Hamburg and Dresden, Vienna, New York, Philadelphia, and Boston. Few remain in Nigeria. The Benin objects are a small part of the British Museum’s collection of 200,000 artworks from Africa. In what I assume is a relatively new wording, the exhibitors at the MFA ask, “What are the ethics of collecting and displaying works removed from their places of origin during periods of European colonialism?”

 

Other cases of the “collection” of foreign cultural creations are instructive in answering this question. The Nazi state and its representatives systematically collected the valuables of the people they conquered, considered inferior, and were killing. They stole everything valuable from Jews in the Third Reich and across Europe. They looted official collections from the nations they defeated.

 

Over 70 years later, the return of these objects to their owners continues to pit those currently in possession against their previous and rightful owners. Legal issues complicate these discussions, but nobody argues against the moral right of the victims and their descendants to their property. The universal belief that Nazi stealing was morally abhorrent is the foundation for every discussion of return.

 

When Soviet armies pushed into Germany, they took vast quantities of art from museum collections. The Russians made a persuasive claim – German armies waged a war of obliteration against the Soviet population and landscape. The German art works taken back to the Soviet Union were only a token compensation for property destroyed and millions of lives lost. Since then, some have been returned and some remain in Russian museums, objects of international argument. In 1998, Russia passed a law legitimizing their continued possession of what they termed “Cultural Valuables Displaced to the USSR”, with this justification: “partial compensation for the damage caused to the cultural property of the Russian Federation as a result of the plunder and destruction of its cultural valuables by Germany and its war allies during World War II.”

 

No such justification exists for the presence of African artifacts in Western museums. They were simply taken as part of a wider appropriation of value by Western colonial conquerors, who showed up in independent African nations and kingdoms, killed anyone who tried to defend themselves, and took whatever they wanted back home. Although the US was not part of the colonial scramble which divided up Africa in the late 19th and early 20th centuries, American collectors and collections now possess large quantities of the stolen objects.

 

Many white Americans, and other white Western peoples, see their relationships with Africans through a haze of patriotism and self-assurance. We were beneficent teachers about advanced human civilization, the unquestionable truths of Christianity, and the proper use of economic resources, earthly and human. The assumption that humans could also be property was no longer official national policy, but white supremacy still defined white attitudes, and thus US federal policy. Ownership of property was changed through conquest.

 

We now share a significant debt with all the national looters by conquest, the Germans, the Russians, the British, the Portuguese, and other European nations. Our institutions, public and private, and many individuals, most of whom are very wealthy, owe reparations for organized theft. Difficulties in figuring out how it should be accomplished are no excuse for pretending the debt doesn’t exist.

 

The carefully worded acknowledgment that inhuman behavior lay behind the movement of African art into Western museums is new. But that’s too late and not enough. The middle schoolers from the Helen Y. Davis Leadership Academy did not need a guard to mention watermelons to experience racism at the MFA. The collections of African art proclaim the racism of the past, the white assumptions that they could take anything they wanted from blacks, including their labor and lives.

 

Now the MFA admits this is controversial. It isn’t. White colonial conquest of Africa was a holocaust before the Holocaust. The fruits of genocide lie in glass cases in Western museums.

 

African art belongs where it was created, where it is meaningful rather than merely exotic, where it could induce pride in its creation instead of shame in its looting.

 

Give it back.

All for One and One for All! The Three Musketeers Ride Again

 

There are few greater heroes in history, real or imagined, than French novelist Alexandre Dumas’ swordsmen in The Three Musketeers. They were that daring trio – Aramis, Athos and Porthos – and their pal D’Artagnan – who fought for France and her King in the seventeenth century, putting their lives on the line for him every day. The boys have been the stars of countless movies, television films, books, comics, board games and even cartoon shows all over the world. Their glory never fades. Now they are back again, on stage, in Ken Ludwig’s The Three Musketeers at the Shakespeare Theatre of New Jersey, Drew University, Madison, New Jersey. It is a rollicking good show about 1620s France with dazzling sword fights, torrid romances, friendship and loyalty. The Musketeers, swords in hand, hats tilted just so, would love it. So will you.

Dumas’ novel The Three Musketeers was first published in 1844 and has sold millions of copies since. It was one of the world’s first films in 1903 and has been a favorite of writers, directors and Hollywood stars ever since. The story has been made into more than forty movies. It has been a cartoon and even a board game. Disney’s Mickey Mouse, Donald Duck and Goofy even play them in one movie and the doll Barbie managed to star in another.

The Musketeers were very real historical figures. They were founded in 1622 by King Louis XIII of France, who wanted a personal security force that served as part of the army. The King insisted that all of them be in terrific physical shape and trained to be superior soldiers (they were France’s version of our Navy Seals). He was nervous because his Catholic monarchy was continually at war with the Huguenots, French Protestants. Louis’s father had been assassinated. 

Novelist Dumas said he stumbled across the King’s Musketeers in his research and based his fictional musketeers on real-life ones. D’Artagnan was actually Charles de Batz Castelmore. Porthos was Isaac de Portau. Aramis was Henry d’Aramitz. Athos was Armand de Sillegue. King Louis XIII was real and so were Richelieu and the Queen, Anne of Austria. Lady De Winter was a real person, but not involved in the life of the court. Dumas’ fictionalized story is set amid this real historical landscape and these very real people. The only real change Dumas made was to set the story in the 1620s and not the 1640s, when the actual Musketeers served.

What is impressive about the play, which opened Saturday, is that playwright Ludwig quickly sets the table and tells you the story of the three musketeers, D’Artagnan and D’Artagnan’s sister Sabine, a character Ludwig added to the story. That’s just in case you forgot the tale or the characters. In just five minutes, you are on your horse, sword in hand, and ready to ride into history.

The story is simple and yet at the same time complex. The Musketeers fight for the King, Louis XIII, and there are soldiers who fight against the King for his religious rival, the powerful and quite evil Cardinal Richelieu, the King’s chief advisor. The Cardinal wants to dethrone the King and get rid of the Musketeers. They want to defrock him. It all comes to a merry head when the Queen of France gives an expensive necklace to her boyfriend, a British nobleman. The Musketeers charge off to Britain to get it back, but so does Lady de Winter, the Cardinal’s duplicitous niece. They are thwarted again and again and it is not until the very last minute that you find out if they succeeded, and who, after numerous swordfights, is left standing.

The play is a good adventure, says one of the Musketeers. He is wrong. It is a terrific adventure, a fine rumble in the European courts and a barn burner of a tale that pits good vs. evil at every turn.

The play is masterfully directed by Rick Sordelet. He is actually the “fight director” for the theater. Now, with the fight director in charge, there are fights all over the place. There are more sword battles than there are Democratic Presidential contenders (that’s a lot of sword fights). Sordelet gets wonderful performances from all of his actors. Cooper Jennings as D’Artagnan and John Keabler, Paul Molnar and Alexander Sovronsky as the three musketeers are marvelously gifted actors and perfectly heroic characters. The new girl with the sword, Sabine, played well by Courtney McGowan, is an OK character but no real addition to the story. She accompanies her brother to Paris to see him join the Musketeers, urged to do so by their dad, a former Musketeer himself. Other fine performances are by Bruce Cromer as the always agitated and forever scheming Cardinal, Fiona Robberson as the Queen, Anastasia Le Gendre as the lovely and dangerous Lady De Winter and Billie Wyatt as Constance. Michel Stewart Allen absolutely steals the show whenever he is on stage as the pompous, jealous, ditzy, so very full of himself King Louis XIII (no wonder the whole country went to hell shortly afterwards). He prances, he dances and he (badly) romances. He is a delight. Long live the King!

The play, like the novel, offers you a good look at French history in that era. The novel includes actual wars and attacks by the Musketeers, but the play really does not. You learn much about the court of Louis XIII, although there is little mention of his reign from the age of 9 to adulthood.

See The Three Musketeers. It is a sizzling tale of derring-do, adventure and a solid piece of French history.

En Garde!

PRODUCTION: The play is produced by the Shakespeare Theatre of New Jersey. Sets:  Brian Prather, Costumes: Brian Russman, Lighting: Matthew E. Adelson, Stage Manager: Jackie Mariani. The play is directed by Rick Sordelet. It runs through July 7.

Roundup Top 10!  

 

The historical argument for impeaching Trump

by Heather Cox Richardson

Since Nixon, Republicans have pushed the envelope under the guise of ‘patriotism’, and Democrats have tolerated it because of ‘civility’.

 

The impeachment illusion

by Donald A. Ritchie

What is clear from the record is that politically-motivated impeachments fail. Only in cases where malfeasance has become overwhelmingly obvious to members of both parties in the House will there be any chance of conviction in the Senate, raising the question: is a losing effort worth the trouble?

 

 

There Is No Middle Ground on Reparations

by Ibram X. Kendi

Americans who oppose reparations care more about responding to political expediency than about the emergency of inequality.

 

 

Don’t Be Outraged They’re Being Called Concentration Camps. Be Outraged They Exist.

by Eladio Bobadilla

We should call Trump’s detention centers what they are: concentration camps.

 

 

The End of History? FDR, Trump and the Fake Past

by Cynthia Koch

Trump is thinking more about history than we imagine and he is doing so in a way very different from former presidents.

 

 

Trump, America, and the Decline of Empires

by Tom Engelhardt

Trump is not an isolated phenomenon--historically or globally.

 

 

Why the Founders would be aghast that Trump would take ‘oppo research’ from foreign governments

by Christopher McKnight Nichols

Alexander Hamilton considered “the desire in foreign powers to gain an improper ascendant in our councils” among the “most deadly adversaries of republican government.”

 

 

She Was Born Into Slavery, Was a Spy and Is Celebrated as a Hero—But We're Missing the Point of the 'Mary Bowser' Story

by Lois Leveen

As a historian, I’ve grown concerned that our impulse to celebrate a black spy in the Confederate White House is impeding us from getting history right, in troubling ways.

 

 

The Opioid Epidemic as Metaphor

by Faith Bennett

Because cinematic portrayals of opioid use and abuse tend to be so sensational, commonplace opioid usage almost feels as if it doesn’t fit into this pattern of addiction at all.

 

 

The Impeachment Roadmap from 1974 to 2020

by Sidney Blumenthal

What lessons does Watergate offer for politicians considering impeachment today?

 

Are Historians Doing Enough to Address Climate Change?

 

Human-caused climate change is the greatest global threat to the lives of our children and grandchildren. Yet President Trump and most Republican members of Congress are deniers or belittlers of it. Their views are closer to those of Republican Senator James Inhofe (Okla.), author of The Greatest Hoax: How the Global Warming Conspiracy Threatens Your Future (2012), than they are to books like The Uninhabitable Earth: Life after Warming (2019) or the 2018 report of the U.N.’s Intergovernmental Panel on Climate Change (IPCC).  In May 2014, Politifact.com checked out California Governor Brown’s claim that few Republicans in Congress accept human-caused climate change, and could find only 8 out of 278 who admitted that it existed.

Although the general public, especially Democratic voters, are now taking climate change more seriously, a 2018 poll revealed that “voters ranked climate change as the 15th most important issue in voting when asked about a total of 28 voting issues.”

For too long historians also paid too little attention to the most important topic of our day. In 2012 one historian (Sam White) wrote, “When it comes to public discussion of climate change, historians are nearly invisible.” To back up his statement, he indicated that at the 2012 AHA annual conference “not a single panel [out of more than 250] or meeting” addressed “arguably the most pressing issue of the century.” 

 

True, there were some exceptions to White’s “nearly invisible” observation. In the first edition (1983) of The Twentieth Century: A Brief Global History (by Goff, Moss, Terry, and Upshur), we wrote  “the increased burning of fossil fuels might cause an increase in global temperatures, thereby possibly melting the polar ice caps, and flooding low-lying parts of the world.” In the third edition (1990) we expanded our treatment by mentioning that by 1988 scientists “concluded that the problem was much worse than they had earlier thought. . . . They claimed that the increased burning of fossil fuels like coal and petroleum was likely to cause an increase in global temperatures, possibly melting the polar ice caps, changing crop yields, and flooding low-lying parts of the world.”

 

But considering that this edition of our twentieth-century global history contained about 500 pages, this was very little mention. A more wide-ranging perspective on climate-change concerns was James Fleming’s Historical Perspectives on Climate Change (1998), which contains a series of interrelated essays. Chapters 1 and 2 deal with climate thinking and debate in the Enlightenment and early America. The last chapter is entitled “Global Cooling, Global Warming: Historical Perspectives.” In it we read, “As public awareness of global warming reached an early peak in the mid-1950s, the popular press began to carry articles on climate cooling.” Fleming cites, for example, a 1975 article by conservative Washington Post columnist George Will that predicted “megadeaths and social upheaval” if the perceived cooling continued. Three decades later Will overstated a supposed scientific consensus in the 1970s about climate cooling, partly to discredit the increasing scientific agreement about human-caused global warming. In fact, although the rate of global warming slowed during the third quarter of the twentieth century and then rapidly increased thereafter, scientists realize today that the earlier “global cooling” was exaggerated.  Regardless of some temporary confusion in the 1960s and 1970s, Fleming’s words of 1998—“Since the mid-1980s, the dominant concern has been global warming from rising concentrations of CO2 and other greenhouse gases”—remains true today.

 

Another historian, J. R. McNeill in his Something New under the Sun: An Environmental History of the Twentieth-Century World (2001), made the bold statement that “the human race, without intending anything of the sort, has undertaken a gigantic uncontrolled experiment on the earth.  In time, I think, this will appear as the most important aspect of twentieth-century history, more so than World War II, the communist enterprise, the rise of mass literacy, the spread of democracy, or the growing emancipation of women.”

 

About seven pages of McNeill’s book are devoted to a section on “Climate Change and Stratospheric Ozone,” but include the line “no one knows for certain if human actions are the cause” of global warming. By 2009, however, increased scientific data made it clear that the “greenhouse gases emitted by human activities are the primary driver” of global warming. Although McNeill concluded that the consequences of the twentieth century’s global warming “remained small,” he thought that twenty-first-century effects could be momentous if humans don’t change their ways.

 

In a three-page section on “Global Warming” in my An Age of Progress? Clashing Twentieth-Century Global Forces (2008), I cited material from McNeill’s book, as well as various other sources such as the UN’s IPCC and Al Gore’s book Earth in the Balance: Ecology and the Human Spirit (1992) and his film An Inconvenient Truth (2006).  

 

A year after Sam White wrote that “historians are nearly invisible” in the “public discussion of climate change,” a major historical work appeared that signaled an increased historical interest in climate change—Geoffrey Parker’s massive award-winning Global Crisis: War, Climate Change and Catastrophe in the Seventeenth Century (2013). Parker’s main concern was to demonstrate how in the seventeenth century “an intense episode of global cooling” affected “an unparalleled spate of revolutions and state breakdowns around the world.” But he also discussed “two distinct categories” of sources that he and other historians were and are able to use to determine historical climate changes: a “natural archive” of four sources and a “human archive” of five.

 

In his dozen-page epilogue, he suggests that White’s assessment of historians’ response to climate change has been too pessimistic. Citing a 1990 IPCC warning about global warming, he writes: “The response of the scholarly community, including many historians, has been magnificent: since 1990 they have compiled thousands of data-sets and published hundreds of articles about past climate change, revealing a series of significant shifts that culminated in an unprecedented trend of global warming.” Parker does, however, fault politicians like Republican Senator Inhofe, who in 2011 “co-sponsored legislation that would prevent the federal government from ‘promulgating any regulation concerning, taking action relating to, or taking into consideration the emission of a greenhouse gas to address climate change.’”

 

Based on his research, the historian believes that “in the twenty-first century, as in the seventeenth, coping with [large] catastrophes . . . requires resources that only central governments command.” And he faults the “deep-seated fear of ‘Big Government,’” manifested by Inhofe and others, “that any attempt by a Federal agency to mitigate or avert damaging climate change represents a ‘power grab’ by Washington that must at all costs be resisted.” Besides the responses of denying politicians, Parker also offers other explanations for why many people in various countries do not take climate change more seriously, notes how costly weather and climate catastrophes are and will be, and advises that “it is better to invest more resources” in preventing climate catastrophes rather than living “with the consequences of inaction.”

 

In a recent email to me, Sam White mentioned the work of Parker. Comparing 2012 to today, White wrote that his “impression is that we’re doing a little better than before. Histories that emphasize past climate variability and change and their human impacts and adaptations, such as the work of Geoffrey Parker and Dagomar Degroot, have received increasing scholarly attention and praise. [Degroot is an assistant professor of environmental history at Georgetown University and the author of the 2018 book The Frigid Golden Age: Climate Change, the Little Ice Age, and the Dutch Republic, 1560-1720.] More historians include discussions of weather and climate in narratives of social and political history.” White also mentioned that “projects such as the Princeton Climate Change and History Research Initiative (CCHRI) and the Past Global Changes (PAGES) network have brought together historians and climatologists for new kinds of collaborations, which are beginning to produce high-impact publications.” 

 

As a result of the information he provided, I also discovered other valuable sources such as HistoricalClimatology.com, founded by Degroot in 2010, and Cambridge University Press’s 42-book series Studies in Environment and History, which includes John L. Brooke’s Climate Change and the Course of Global History: A Rough Journey (2014). The co-editor of the series is Georgetown’s J. R. McNeill, and the presence of both him and Degroot in the same department, as well as of White and Brooke at Ohio State, suggests that at least some history departments are now taking environmental history, including climate change, seriously.

 

Although White believes that historians’ scholarship and public outreach concerning climate change will continue to improve, he is still “concerned that few historians—even environmental historians—are actively working to address what is likely to become the single most urgent public issue of the 21st century. . . . Few examine the historical factors and decisions that led us to fossil-fuel dependence, climate change denial, and political and diplomatic gridlock on climate policy, with an eye to identifying changes that might bring us out of our current impasse. Moreover, public discussions about climate change impacts, mitigation, and adaptation rarely discuss insights from historical research.”

 

Viewing the titles of the 285 sessions at the 2019 AHA annual conference also disappoints anyone hoping to see growing historical interest in climate change. Very few sessions address the subject in any historical period. Thus, despite the hopeful signs of recent years, including those that White mentions, the question remains: “Are historians doing enough to address climate change?” If and when we do, we can then offer another question that we have often asked before about other subjects: Are politicians and the general public paying sufficient attention to our findings? 

The Lavender Scare and Beyond: Documenting LGBTQ History from the Great Depression to Today

 

 

The Lavender Scare, a new documentary that will air on PBS on Tuesday, June 18th, documents the systematic firing of and discrimination against LGBT people under the Eisenhower administration. The film is based on David Johnson's book The Lavender Scare: The Cold War Persecution of Gays and Lesbians in the Federal Government. The film is directed and produced by Josh Howard, a producer and broadcast executive with more than 25 years of experience in news and documentary production. He has been honored with 24 Emmy Awards, mostly for his work on the CBS News broadcast 60 Minutes. Josh began his career at 60 Minutes reporting stories with correspondent Mike Wallace. He was later named senior producer and then executive editor of the broadcast. Following that, he served as executive producer of the weeknight edition of 60 Minutes.

 

Recently, Eric Gonzaba interviewed Director Josh Howard via phone. This interview was transcribed by Andrew Fletcher and has been lightly edited for clarity. 

 

 

Gonzaba: I just wanted to let you know, I actually got to watch the documentary last night. I knew a little bit about it before watching it. I know the content quite well, but I knew of your work beforehand and I just want to say it was really fabulous to watch it finally and I find it extremely credible. It’s well covered and has lots of fantastic aspects to it, so I’m excited to talk to you today. 

 

Howard: Thank you, thanks so much.

 

G: Now I’m curious to start off thinking about your own part of this documentary. What drew you to the subject matter; what prompted you to think about this period in general?

 

H: Well, to tell you the truth, I came across David Johnson’s book, The Lavender Scare, and I was just surprised that I didn’t know this story. I’m a little bit of an American history buff and I thought I knew LGBTQ history, being old enough to have lived through a lot of it, and it was just shocking to learn how systematically it was that the government discriminated against gay people. I worked in TV news my entire career and I was happily retired from that career, but after reading this I thought this isn’t just history really, this is a news story; this is something that people don’t know about. It seemed natural to try to capture the stories of these people on film. That’s what drew me to it.

 

G: Now it’s funny, when I think about this story, especially thinking about the Red Scare in the fifties and the Lavender Scare at the same time period, I think grade school education in Social Studies and History have pushed this understanding of the Red Scare in different ways – even in other facets like The Crucible in literature classes and whatnot. It seems to be that public education has this knowledge that the Red Scare is an important part of history, but I’m curious why you think the story of the Lavender Scare hasn’t been told before and is not understood by the public.

 

H: Well, a couple of things. I think partly, gay history has traditionally been marginalized. It’s really only in the past three decades that we’ve come to recognize the need to understand the histories of different minority groups. We’re just more recently coming to the understanding of the need for gay history to be acknowledged as well. But I think the big reason that people didn’t know about this and even people within the community really didn’t know about it is that when this was going on in the fifties and sixties, and into the seventies, eighties, and nineties – you know, having seen the film, that it wasn’t until the 1990s that this policy was reversed – but particularly during those early years, it was in everybody’s interest not to talk about it. The gay men and lesbians who were being fired didn’t want to talk about, even to their close friends and family, why they had been fired because they did feel a need to remain in the closet at that time. The government, after some initial publicity about how we’ll track down these people and get them out of the government, as the years went on and the firings continued, the government stopped talking about how many people were being fired because then the question started to become ‘well why did you hire them in the first place? Why are you only finding out that there are gay people working for the government now, after they’ve been there for all these years? Why didn’t you have better security systems?’ It was really in everybody’s interest not to talk about it. The remarkable thing is even someone like Frank Kameny, who was right in the center of this battle for all these years, he didn’t know how widespread this was and how many people were either denied employment or fired. It wasn’t until the 1990s when a lot of documents from this time period were being declassified that David Johnson was able to do the research that really put together the enormity of what happened. It’s really a combination of the lack of gay history being taught but also the lack of knowledge in general of this time period. 

 

G: Something you said earlier that struck me a little bit was you said that thinking about this project, when you were reading Johnson’s book, you were thinking about how it’s not just history, it’s also kind of a news story with your news background. What do you mean by that – what is the difference between history and a news story?

 

H: Well, what specifically I was referring to was that it’s a news story because people don’t know about it. I worked at “60 Minutes” for many years and the goal was to come upon some stories that would surprise people and put some issue into a broader context, so on a very basic level it’s a news story because it was news to me. On that level it was news. But beyond that I think there is a real relevance to the message today that frankly I wasn’t expecting there to be when I started working on this, believe it or not almost ten years ago. I think we’re living in a precarious time right now and it is very similar to what was going on in the 1950s. The homophobia of the fifties, as the film explains, was a pretty direct backlash against an earlier period during which there was much less discrimination against LGBTQ people. We’ve obviously made enormous strides in the past decade and more, but I think one message of the film and one of the things that makes it relevant is to remember that progress in issues of social equality doesn’t necessarily continue in a straight line, and there can be a step back for every couple of steps forward. I think we have to be aware of that. On a broader perspective, the film explores a time when a specific minority group was demonized in the name of national security and patriotism and so forth. You could argue that we’re seeing a repeat of that today with different minority groups. I think there’s a message here that history has not looked kindly on those who have embarked on those kinds of policies, whether it be Japanese-Americans during World War II or LGBTQ people. There’s a whole list, sadly. 

 

G: You know, funny that you mention that – I think one of the most interesting things for me and one of the things that I really enjoyed about the documentary was that you don’t just focus on the 1950s. When we think about the title of your film, we think you’re just going to be talking about the Lavender Scare the entire time, but we also hear about this incredible time in D.C. in the 1930s that you show; we hear about World War II and the birth of gay liberation at Stonewall and beyond that. I guess I’m curious about going beyond the moment of the 1950s – what does that do to your story? Obviously, it provides some context, but what were you trying to show by giving a broader narrative rather than just focusing on the fifties?

 

H: I think it was able to show how the discrimination of the fifties, just as that was a result of a more permissive earlier time; it also set the stage for the reaction of the sixties, in which people did decide to stand up for their rights and say ‘this is wrong.’ I think it’s really important – Stonewall obviously is a huge milestone in our history, but I think it’s really important to acknowledge that there were incredibly brave people in the 1940s and particularly the fifties and early sixties who were sowing the seeds of the Stonewall Rebellion and pay tribute to their contribution, but also to really remember and to respect the activism and commitment that they made – to keep in mind that as much progress as we’ve made, we have to keep at it. 

 

G: Well what’s interesting too is that going into this film, I always assumed that because we’re approaching the fiftieth anniversary of Stonewall, that people are still obsessed with the seminal moment in 1969. Frank Kameny’s efforts and activism is, like you said, something that we need to acknowledge, so what I loved about your film too was that you argue that Kameny is also reacting to a gay activism that was before him, that was fighting along lines of civil rights but was also fighting along lines of separatism, or difference I should say, not along lines of marriage or employment rights or anything like that. That’s something that for a gay historian was really interesting to think of, even pre-Stonewall activists not being united about how to go forth in politics or culture. 

 

H: Absolutely, and it really is. One of the things that attracted me to the story is that there are three distinct acts in this story. There’s the Depression and World War II time when gay people are finding each other and building communities, and then there’s the fifties and the Lavender Scare when those communities are really under attack, and then we see how the community picks itself up and began the fight that led to Stonewall and led to marriage equality and where we are today. 

 

G: I’m curious, when did you begin this project? I was just thinking about how Frank Kameny is such a central figure to this film, as he should be, and I’m curious – did you get to interact with him at all? I know he passed away in 2011 I believe, but I’m curious if you had any contact with him and how his story helped you craft this larger story.

 

H: I read David’s book in 2009 and reached out to him. I had assumed that a documentary must have been made on the subject because it just seemed like such a natural. I tracked him down and my question really was where can I find the documentary and obviously he told me that none had been done. David and I met for the first time to discuss the possibility of doing this film on July 4th, 2009. Not only is this the fiftieth anniversary of Stonewall, but it’s also the tenth anniversary of David and I [beginning the project]. It’s a big year for anniversaries. So we talked about it and I’d never done an independent film before. I had always worked for broadcast companies – CBS and later NBC – so I really didn’t appreciate the difficulty in raising funds and really doing something on my own. But in any event, reading the book and talking to David and learning about the story – I did realize that Frank was, in a way, the central character. And so really before seriously figuring out how to raise money or go about doing this, I hired a camera crew and spent three days with Frank in July 2010. [I] interviewed him over three days, so the interviews of Frank that you see in the film were done by me. I spent three days with him, and it was, you know, the word ‘fascinating’ and ‘an honor’ and all those words really undersell it. I knew, even before reading David’s book, of Frank and certainly knew the details of all his contributions and activism. We didn’t shoot in his house – you might have noticed the snapshot of Frank sitting next to his desk which is piled high with folders and papers. Frank’s house didn’t lend itself to being a place that we could get a camera crew into. We shot at a different location, and for each of the three days I drove to Frank’s house and picked him up and drove him to the interview location, and I remember driving thinking ‘this is the Rosa Parks and Susan B. Anthony; this is the person that started our movement,’ and it was just incredibly moving to be able to interact with him. I will say, after three days I started to have some sympathy for the people in the federal government who had to interact with him because Frank is single-minded and doesn’t take direction easily and is quite a character, which is why he became the incredible person he did. It was just an amazing experience to be with him and I’ll never forget it. 

 

G: I liked in the film, John D'Emilio called him stubborn and that stubbornness has gotten him in trouble, but in some ways, it also helped fuel the movement that needed someone like him, to believe how right he was.

 

H: Absolutely. We estimate about 5,000 people had been fired before Frank, and all of those 5,000 obviously went quietly, and it never would have occurred to Frank to go quietly. Yet it also worked against him in ways. This didn’t make it into the film, but Frank founded the Mattachine Society of Washington and was the driving force and so forth, and at some point, he was thrown out of the organization because he was so difficult to work with. He was voted out as president by his own organization, and many years later he was quoted as saying, ‘the only thing I did wrong with the Mattachine Society was making it a democratic institution.’ That captures Frank and it was that personality, though, as you say, that without that he wouldn’t have been who he was and who knows when someone would’ve come along to start the movement that he did. 

 

G: I’m curious, Frank’s such a fascinating person, and I think for the wider public, even among historians, I think Kameny’s name is nothing, in terms of general knowledge about LGBT history, compared to Harvey Milk, who is lauded in the movement. Do you see your film as kind of correcting this larger historical ignorance of Kameny and the early activist work in the 1950s and 60s? 

 

H: I do. I mean, I really do think he deserves more attention and more credit than he’s gotten. A friend of mine does trivia, runs a little weekly trivia contest at a bar in San Diego, and every once in a while, he’ll throw in the question: who was Frank Kameny? Younger gay people, and older gay people as well I assume, don’t know. I asked him [Frank] about this when I interviewed him, and he was thrilled with the recognition he got at the White House, and he liked that idea that he was, as he put it, on a first name basis with President Obama, but he didn’t seem overly concerned about his place in history. I think it really is he did what was right for him to do, and if people know about it, great, and if they don’t, that’s ok too. I think he should be on a stamp, and he deserves recognition, because he did incredible things. I should also add, there were a handful of other people – Barbara Gittings, Jack Nichols – who were equally active and vocal, but no one like Frank who really stuck to it his entire life and really devoted every minute of the rest of his life to the struggle. 

 

G: Moving a little bit to your filmmaking, one of the interesting aspects of the documentary is that we’re not just hearing from people who were kicked out of their occupations or just from their families or even just from historians, we’re also, in the documentary, we get to hear from the very people who were involved in the kicking-out process. You actually have interviewed people like John Hanes and Bartley Fugler. What was your interest in including their perspectives on the story and what was your reaction after hearing those perspectives?

 

H: Well, I’m so happy you ask that, because in a way, those stories are my favorite stories in the film. The people who were fired – obviously their stories are moving and tragic and infuriating – but you kind of know how the story is going to go. You understand what happened to them. I found it fascinating, talking to these guys all these years later, who initiated these policies and carried them out, and were generally unremorseful. The most they would say would be ‘I wouldn’t do the same thing today,’ but every one of them said, for the times, it was the right thing to do. The credit for this, by the way, goes to my brilliant associate director Jill Landes, who I had worked with at 60 Minutes and then later at NBC and I dragged her into this project as well. She was the one who tracked down the government officials. David, in his book, really focused on the victims. Jill was able to find the investigators, and particularly John Hanes, who was the number three person in the State Department at the time and was directly responsible for this policy. I just thought it was so amazing, here was a guy who Frank Kameny wrote to in 1960-something, and Hanes responded to him and said don’t write to me anymore because I’m not changing the policy. When we interviewed him, he had no recollection of who Frank Kameny was, or why they were corresponding. You mentioned Fugler – Fugler was the one person, the one government official who said he would still not hire gay people today. After the interview, my director of photography said to me as we were cleaning up that ‘you must have wanted to slug that guy,’ and I said as a filmmaker, to tell you the truth, I wanted to give him a kiss, because that’s what you need – someone to be honest on camera and portray the story that needs to be told. I’m really grateful to him, to Fugler, that he was completely honest and shared his point of view. 

 

G: What does it mean for those people, for these two men, to agree to this interview knowing that they’re probably not going to be on the side that this documentary is going to be supportive of? By them sticking to their points of view, what does that tell you, or did that surprise you at all?

 

H: It surprised me that they agreed to the interview as readily as they did, but I think – I’ve given this a lot of thought because it’s a great question – I think they felt that they really didn’t have anything to hide and didn’t do anything wrong, and if anything they were happy to defend their positions. From our point of view, I might think they’re going to run from the cameras because they don’t want to be associated with this, and I think from their perspective this is what they did, they were right when they did it, and they were happy to talk about it. None of the interviews ended with any confrontation. With John Hanes, after we shot the interview – he has since passed away but at the time he lived in Montana, in Bozeman – we went and we had drinks at his house, it was all very friendly. You know, they said what they wanted to say and believed it, and that’s great. 

 

G:  Moving away from that side, you say David Johnson focused so much on the people who were fired; I don’t want to give anything away to people who are going to read this, but there are some great details about some of the people that you followed in this documentary, like Madeleine Tress and Carl Rizzi, that you leave for the very end of the documentary, literally the last minute or two, that are some really major details and some of the biggest things that I keep thinking about, which is an applause for your fantastic documentary style. I’m just curious about why: was that a conscious placement of those details at the very end, or were you trying to elicit responses from the audience?

 

H: Well I guess, I mean everything at some point is a conscious decision. We went through several different versions of scripts and structures, and at one point there were a couple more characters who didn’t wind up getting included. The first rough cut was something like three hours long. As a filmmaker, you know that people only have so much attention and time they can devote to something. I think the way it worked out, the epilogue there was a way to encapsulate everybody’s stories and review really what their contributions were. 

 

G: Something that stuck with me is the very last thing I hear about Madeleine Tress is that she’s continually denied a passport to travel away from our country. Her experience of the Lavender Scare is not something of the distant past, it’s something she has to live with the rest of her life, which speaks to the terror that so many people have, right? We can laud Kameny for being great, but in some ways hearing stories like that you totally understand why people stayed private, or like you say in the beginning of the film, it’s one thing to get fired, but so many people just didn’t even want to face that; they resigned their posts. I’m curious what you’ve seen as the biggest reaction to the film since you started screening it and what’s the biggest surprise you’ve gotten from that reaction from viewers?  

 

H: Well I guess the biggest surprise I see from audiences is just the story itself, that people say how is it possible that I didn’t know this, particularly from older people who lived through the McCarthy era or at least shortly in the period thereafter and were familiar with it. That’s the biggest surprise from audiences. I think for me, a big surprise, and I guess this is naïve of me still – we did close to 100 film festival screenings – and I would say, at more than half of them, someone got up during the question and answer period and said ‘I worked for the government, and I was fired because I was gay.’ I guess I’m still surprised when I see how many people this affected. There was one event – we were at a screening in Ocean Grove, New Jersey in the basement of a church, so not a big event. There might have been 50 people there. After the film and after the Q&A these two elderly women came up to me – I later found out that they were both in their nineties – and they told me that they had met in the 1950s when they were both secretaries for the Social Security Administration. They were partners and have been together since, and back in the 1950s when it was discovered that they were lesbians they were fired, they were both fired from the Social Security Administration. They said to me, ‘we never knew, for all these years, that this was part of something bigger. We thought it was just us. Thank you for telling our story.’ It’s that kind of thing that really sends the message home about how many people this affected and as you just said, how it remains with you your entire life. 

 

G: It was a very powerful documentary, and I was excited to see that it will continue to be screened more this summer, and I’m excited to hear more reactions from the film, because it’s something that definitely needs to be seen, so congratulations. 

H: We’re going to be at the Avalon Theatre, so on June 5th I’ll be there if you can stand seeing it again. CBS Sunday Morning is doing a piece about it and it looks like they’ll be filming the question and answer there, and David Johnson (and Jamie Shoemaker) will be there as well. Hope maybe you’ll be there!

 

Note: HNN did go to the screening at the Avalon Theatre. You can read Andrew Fletcher's excellent write-up of the event here.

The Vatican's Latest Official Document Is An Insult to the LGBTQ Community and History

Martyrs Saints Sergius and Bacchus

 

During the fourth century, Sergius and Bacchus, two inseparable Syrian soldiers in the Roman emperor Galerius’ army, were outed as secret Christians when they refused to pay homage to the god Jupiter. The incensed emperor ordered them beaten, chained, and then, as their fourth-century hagiographer explained, paraded through the barracks with “all other military garb removed… and women’s clothing placed on them.” Both men were sent to trial; Bacchus refused to abjure his faith in Christ and was beaten to death by his fellow Roman soldiers as punishment. The night before Sergius was to be similarly asked to recant his Christianity, the spirit of Bacchus appeared before his partner. With his “face as radiant as an angel’s, wearing an officer’s uniform,” Bacchus asked, “Why do you grieve and mourn, brother? If I have been taken from you in body, I am still with you in the bond of union.” 

 

Bacchus continued to offer his protection to Sergius, steeling the resolve of the latter, so that when he was tortured and murdered the following day, he faced it steadfast in his faith and love, the very voice of God welcoming the martyred saints into heaven as a pair. Historian John Boswell explained that in writings about the two, they were often referred to as “sweet companions” and “lovers,” with Sergius and Bacchus representing “to subsequent generations of Christians the quintessential ‘paired’ military saints.” There’s an anachronism to the term perhaps, but there’s credible reason to understand both Sergius and Bacchus as a gay couple. And, most surprisingly for some, the early Church had no issue with that reality.

 

Sergius and Bacchus were not a token example of same-sex love countenanced by the Church in the first millennium; there are several other pairs of canonized romantic partners, and both the Orthodox and Catholic Churches allowed for a ritual of same-sex confirmation called Adelphopoiesis. This ritual, which sanctified unions of “spiritual brothers,” was common among monks in the Latin rite West until the fourteenth century and continued in the East until the twentieth century. Boswell wrote in his (not uncontroversial) 1994 study Same-Sex Unions in Pre-Modern Europe that, far from the blanket homophobia which we often see as tragically defining Christianity, adherents of the early Church saw much to “admire in same-sex passion and unions.” In his book, Boswell argued from his extensive philological knowledge of sources in a multitude of original languages that Adelphopoiesis was not dissimilar to marriage, and that it allowed men to express romantic feelings towards their partners in a manner not just allowed by the Church, but indeed celebrated by it. 

 

Obviously, this is a history which Bishop Thomas Tobin of Providence is unaware of, having tweeted on June 1st that “Catholics should not support or attend LGBTQ ‘Pride Month’ events held in June. They promote a culture and encourage activities that are contrary to Catholic faith and morals,” and with seemingly no trace of either irony or self-awareness added “They are especially harmful for children.” Medieval commentators wouldn’t necessarily fault a bishop on his lack of historical expertise or context, the healthy anti-clericalism of the period acknowledging that the intellectual heft of the Church wasn’t always necessarily robed in priestly vestments. 

 

However, no such forgivable ignorance concerning the complicated history of gender and faith can be proffered on behalf of the Vatican document “Male and Female He Made Them” released on Monday June 10th, which condemns what it calls “gender theory,” and which even more egregiously and dangerously denies the very existence of transgender and intersex people. Hiding the reactionary cultural politics of the twenty-first century under the thin stole of feigned eternity, the author(s) write that Catholic educators must promote the “full original truth of masculinity and femininity.” 

 

From a secular perspective, much prudent criticism can be made concerning this document’s obfuscations and errors. A physician can attest to the existence of both transgender and intersex people, making clear that to define away entire categories of humans and their experience is its own form of psychic violence. The much-maligned gender theorist could explain the definitional fact that biological sex, as broadly constituted, can’t be conflated with the social definitions and personal experience of gender. As philosopher Judith Butler writes in her classic Gender Trouble: Feminism and the Subversion of Identity, “There is no gender identity behind the expressions of gender; that identity is performatively constituted.” 

 

Beyond the unassailable secular critiques of the Vatican’s recent comments on gender theory, there are historical criticisms that can be leveled against it. To claim that the Vatican’s recent statement is incorrect from both medical and sociological positions is one thing, but I’d venture that it also suffers from a profound historical amnesia, as demonstrated by the icons and mosaics of Sergius and Bacchus which hang in Roman basilicas. The so-called “Culture War” which defines twenty-first century politics infiltrates the Church every bit as much as it does any other earthly institution, but conservatives like Bishop Tobin cloak what are fundamentally twenty-first century arguments in the language of posterity, claiming that the Church’s position on gender has been continuous and unchanged. Don’t fall for it. 

 

While it’s doctrine that the Church doesn’t change its teachings, a cursory glance at the history of Christendom demonstrates that that’s a hard position to hold in any literal sense. Furthermore, while the Church has evolved over the centuries in at least a temporal manner, it doesn’t always abandon the more intolerant for the more progressive – in some regards our forerunners actually had more welcoming positions. A reading of either Same-Sex Unions in Pre-Modern Europe or Boswell’s earlier book Christianity, Social Tolerance, and Homosexuality: Gay People in Western Europe from the Beginning of the Christian Era to the Fourteenth Century illuminates that definitions of “masculinity” and “femininity” change over the course of history, and that the Church of late antiquity and the Middle Ages could sometimes have a surprisingly tolerant understanding of homosexual relationships. 

 

An irony is that even the well-known history of the Church demonstrates the manner in which understandings of heterosexuality, not to speak of homosexuality, can change over the centuries. The ideal of marriage as primarily a romantic institution – a union of a woman and man in love who produce children and exist in a state of familial happiness – is one that doesn’t widely emerge until the Reformation, as celebrated by early evangelicals in the marriage of the former monk Martin Luther to his wife, the former nun Katharina von Bora. This ideal of marriage was one that became widely adopted by Christians both Protestant and Catholic, but it’s obvious that the priestly ideal of celibacy (itself only made mandatory in the eleventh century) is by definition not heteronormative. Our understandings of romance, family, sexuality, and gender have been in flux in the past – within the Church no less – and no amount of thundering about “How the Vatican views it now is how it has always been” can change that. And as Boswell’s studies make clear, there are Catholic traditions from the past that are preferable to those of today, with current opinions having more to do with right-wing social politics than with actual Christian history. 

 

For the Medieval Church, homosexuality wasn’t necessarily condemned more than other behaviors, and as Boswell writes “when the Christian church finally devised ceremonies of commitment, some of them should have been for same-gender couples.” Monks committed themselves to each other as icons of Sergius and Bacchus smiled down, and an expansive and different set of relationships, some that we’d consider homosexual by modern standards, were countenanced. This is crucial to remember, a legacy more in keeping with Pope Francis’ welcome claim of “Who am I to judge?” when asked how he would approach lesbian and gay Catholics and less in keeping with his papacy’s unwelcome document released this week. 

 

Writing as a baptized Catholic who welcomes and celebrates the important role that the Church has played for social justice (despite Her copious sins), and who furthermore understands that the energy of the Church has often been driven by her committed LGBTQ parishioners who do the difficult work of faith despite the Vatican’s intolerance, I believe it is important to enshrine the legacy of men like Sergius and Bacchus. Bluntly, the Vatican’s decision to release their statement is hateful, even more insulting during Pride Month; Bishop Tobin’s remarks are hateful; the reactionary line of the Magisterium is hateful. Not only is it hateful, it’s ahistorical. For LGBTQ Catholics, it’s crucial to remember that the Magisterium has never been synonymous with the Church. The editor of America Magazine and a vital progressive voice, Jesuit priest Fr. James Martin writes that the Vatican’s document is one where the “real-life experiences of LGBT people seem entirely absent.” Presumably such an act of erasure would include Sergius and Bacchus, who unlike any living bishop are actual saints. 

 

As the former Jesuit James Carroll eloquently wrote in his provocative article from The Atlantic, “Abolish the Priesthood,” there are ways to be Catholic that don’t reduce the faith to idolatrous clericalism, suggesting organizations of lay-worshipers who “Through devotions and prayers and rituals... perpetuate the Catholic tradition in diverse forms, undertaken by a wide range of commonsensical believers…Their ranks would include ad hoc organizers of priestless parishes; parents who band together… [and] social activists who take on injustice in the name of Jesus.” As a humble suggestion, perhaps some of these lay parishes would consider resurrecting a venerable ritual, the commitment of Adelphopoiesis? For such should be rightly regarded as a sacrament, a proud reminder during Pride Month of the love and faith which once motivated two martyred soldiers. 

Why is Brazil so American?

 

A recent article written by Jordan Brasher about a “Confederate Festival” held in a town in the countryside of São Paulo State, Brazil, drew the attention of readers in the United States. It was not the first time this festival had made the news in America: three years earlier, the New York Times had reported on this same event. The difference now is that Brazil is ruled by a new president, Jair Bolsonaro, someone who is becoming renowned for his controversial opinions on race, gender, sexuality and other topics important to various sections of Brazilian civil society. This pulls the “legacy” of the American Confederate movement into the center of an ethnic-racial discussion at a delicate time in the history of Brazil. To make the situation worse, last May, the president of Brazil visited Texas and saluted the American flag just before giving his speech. Many in the Brazilian media subsequently accused Bolsonaro of being subservient to the U.S.

 

It may be difficult for an American audience to assimilate this information. After all, apparently, there is not a great deal of convincing evidence within Brazil’s history, language or culture of its affectionate bond towards the U.S.A. However, the truth could not be more different. Since the 18th century, the United States has greatly influenced the history of Brazil. Our Northern neighbor gradually became a role model for some of Brazil’s failed independence revolutionary movements, such as the Minas Gerais Conspiracy.

 

Nevertheless, it is also true that after gaining independence from Portugal in 1822, the choice of a monarchic regime diminished the impact of American influence until November 1889, when Brazil became a republic and looked upon the U.S. Constitution as its ideal inspiration. This is exemplified both by the adoption of a federated regime and by the introduction of a new name: the United States of Brazil (which remained Brazil's name until 1967). At the dawn of this new regime, Americanism inspired our own diplomatic model and even briefly enchanted some of our intelligentsia, especially within the education field.

 

The 1930s were characterized by a process of modernization of our society with assurance of rights, industrialization and education. This could be considered a kind of Americanism with a Brazilian twist, as this process was not led by civil society but by Getúlio Vargas, an authoritarian ruler of Brazil from 1930 to 1945. Such modernization required the arrival of immigrants from various parts of Europe, especially from Italy. Thus, just as had happened in the United States, a distorted image of this cultural melting pot gained traction in the Brazilian identity as a solution for a multiethnic society composed of Indigenous peoples, Europeans and peoples of African descent. 

 

Then, automobile factories arrived and gradually the American inspiration once felt in our political structure spread to the cultural sphere. Symbolically, the great ambassador of this transition was the Disney cartoon character José Carioca, a rhythmic parrot from Rio de Janeiro. Alongside him was Carmen Miranda, who, after a long stay in the United States, returned to the Brazilian stage singing: “they said I came back Americanized”. Our Samba began to incorporate elements of Jazz, and Bossa Nova became another vehicle uniting Brazil and the US, exemplified by the partnership between Tom Jobim and Frank Sinatra.

 

In the 1970s, a new dictatorship accelerated this process of Americanization of the Brazilian cultural sphere. This was because of the Military’s projects that occupied territories of the West (our version of American expansionism and the “Wild West”), and because it seemed to unleash a more “selfish” type of society. In other words, the project of a development cycle implemented by the Brazilian Military Regime accentuated values such as individualism, consumerism and the idea of self-realization. The self-made man became the norm of our country. As the historian Alberto Aggio points out, “after 20 years of intense transformation of society, Americanism has reached the ‘world of the least affluent’ and because of this the ‘revolution of interests’ has reached the heart of social movements”.

 

And that was how Brazilian society, from the lowest class up to the highest, became interested in American culture and tried to emulate its habits and cuisine (as we can see in the current “cupcake revolution” in our bakeries), venerated American television series and artists, Americanized children’s names, incorporated American words into its vocabulary, and even changed its predilection for sports. For instance, it is not difficult to find a fan of NBA or NFL teams in our country. This is why, though embarrassing, it is fully understandable that our current president, a retired military man from the Brazilian middle class, has this kind of genuine admiration for Trump and his followers. 

 

In addition to the vastness of its territory, its multiethnic identity, the adoption of presidential federalism, industrial modernization and middle classes inspired by the American way of life, there are other elements that bring Brazil closer to the United States. This includes their history of slavery, the racism bequeathed by that despicable practice and its counter legacy of social activism. Starting in the 1930s, the Brazilian black movement tightened its ties with African American intellectuals and activists, which generated exchanges of writings, symbols and experiences of resistance. At first, Brazilians inspired American blacks, especially in the North, but the Civil Rights Movement reversed the terms of influence. The Black Power Movement, particularly the Black Panther Party with its charismatic leaders and affirmative policies, reshaped the Brazilian black movement, as described by Paul Gilroy in his book The Black Atlantic. Similarly, transformations in the production of studies on slavery in the United States, beginning in the Civil Rights era, also impacted Brazilian academic studies from the 1980s onwards, inspiring the historiography dedicated to this subject in a decisive way. However, not everything had a positive outcome. The reaction towards the empowerment of subordinate sections of Brazilian society, especially black people, shed light on Brazilian racism that had been veiled until that point. This reveals a difference between Brazil and the USA: while racism has always been somewhat explicit in American culture, in Brazil, the absence of any “WASP pride” and broader adherence to an idealized image of a cultural melting pot society has often led to “informal” demonstrations of racism. For example, the practice of asking a black customer to leave a restaurant at the request of a white customer was common until the 1990s. Thus, on one hand Americanism stimulated a greater struggle for rights and representativeness, yet on the other, it helped bring racism to light.

 

Regarding the “Confederate Festival”, its origins can be traced back to traditions established by a colony of Southern U.S. immigrants that reached São Paulo State in the late 19th century. However, I would not be surprised if, after acknowledging the existence of this Festival, several Brazilians were to spontaneously replicate that “celebration” elsewhere in our country. After all, as the American social scientist and aficionado of Brazil, Richard Morse, once wrote, Latin Americans had the chance to embark on a path to the West without “disenchanting” their culture, in the Weberian sense of the concept. To describe the relationship nurtured by Latin America and the U.S., Morse created a metaphor: a mirror that reflects an inverted image of itself. Apparently, Brazilians got used to being this mirrored version of Northern America. Or, put in terms that an Americanized Brazilian, perhaps even Jair Bolsonaro, would understand: we live in the Upside Down of the U.S., as in Stranger Things. 

The Origins of American Hegemony in East and Southeast Asia – And Why China Challenges It Today

 

 

It is hard to ignore the escalating rivalry between the United States and China. The Sino-U.S. trade war hogs the headlines; China’s explicit ambitions for hegemony in the Asia Pacific have induced the Trump administration to increase U.S. defense spending and strengthen its partnerships with Asian allies; and experts wrestle with the matter of China’s challenge to America’s longstanding hegemony in East and Southeast Asia, pondering the decline of U.S. influence.

 

But how did America rise to hegemony in East and Southeast Asia in the first place? The popular history of U.S. involvement in Asia after 1945 suggests that American predominance faded fast. By the 1950s, U.S. military supremacy had been punctured in Korea. Instead, Chinese forces proved formidable, driving the U.S.-led coalition deep into South Korea, prompting General Douglas MacArthur’s desperate (and unheeded) call for Washington to use atomic weapons on China. Further humiliation would greet America’s military in Vietnam. Richard Nixon’s rapprochement with China in the early 1970s seems to have been an effort to slow America’s waning paramountcy in world affairs. 

 

If America suffered mostly embarrassment and defeat in Asia, how could U.S. hegemony have emerged in the region? Commentators have rarely addressed this question, stating simply that America enjoyed a “surprisingly advantageous” position in Southeast Asia despite failing in Vietnam.

 

My book, Arc of Containment: Britain, the United States, and Anticommunism in Southeast Asia (Cornell UP), examines the under-studied rise of U.S. hegemony in Southeast Asia during the Cold War and its impact on wider Asia. It shows that after 1945, Southeast Asia entered a period of Anglo-American predominance that ultimately transitioned into U.S. hegemony from the mid-1960s onward. Vietnam was an exception to the broader region’s pro-U.S. trajectory.

 

British neocolonial strategies in Malaya and Singapore were critical to these developments. Whereas Vietnamese and Indonesian revolutionaries expelled the French and Dutch, Britain collaborated with local conservatives in Malaya and Singapore, steering them toward lasting alignment with the West. Drawing from Britain’s repertoire of imperial policing and counterinsurgency, British and Malayan forces decimated the guerrillas of the Malayan Communist Party while Singapore’s anticommunists eliminated their leftist opponents. Beyond Malaya and Singapore, Thailand had already turned toward America (to resist Chinese influence) and the U.S.-friendly Philippines played host to massive American military installations. By the early 1960s, therefore, a fair portion of Southeast Asia had come under Anglo-American predominance. 

 

The rise of Malaysia would strengthen the Anglo-American position. In 1963, when Singapore moved toward federating with Malaya and Britain’s Borneo territories (Sabah and Sarawak) to create Malaysia, President John Kennedy declared Malaysia the region’s “best hope for security.” After all, Kennedy officials had envisioned that forming Malaysia would complete a “wide anti-communist arc” that linked Thailand to the Philippine archipelago, “enclosing the entire South China Sea.”

 

But Malaysia did not enjoy universal acclaim. President Sukarno’s left-leaning regime in Jakarta and his main backers, the pro-China Indonesian Communist Party (PKI), deemed Malaysia a British “neocolonial plot” to encircle Indonesia. Sukarno had fair grounds for his accusation. Malaya had, soon after independence in 1957, aided Britain and America’s botched attempt to topple Sukarno. Moreover, British military bases in Singapore had supported this effort and were set to remain under London’s control for the foreseeable future. Sukarno’s answer was Konfrontasi (confrontation), a campaign to break Malaysia up. His belligerence would bring his downfall and usher Indonesia into America’s orbit.

 

Malaysian officials responded to Konfrontasi by launching a charm offensive into the Afro-Asian world that diplomatically isolated Indonesia while British troops secretly raided Indonesian Borneo to keep Indonesia’s military on the defensive. These moves worked well. In 1964, Afro-Asian delegates at the annual non-aligned conference condemned Konfrontasi; in January 1965, the UN Security Council accepted Malaysia as a non-permanent member, legitimizing the federation and undermining Sukarno’s international influence. Equally, Konfrontasi severely destabilized Indonesia’s economy and battered its armed forces. Now, conservative elements of the Indonesian Army, which America had courted and equipped since the late 1950s, prepared to execute a coup d’etat. When a few elites of the PKI attempted in October 1965 to preserve Sukarno’s authority, Major General Suharto led the Army’s right-wingers to seize power, alleged that the PKI and its Chinese patron intended to subvert Indonesia, and massacred the PKI (Sukarno’s power base) in a bloody purge. The Suharto government then tilted Indonesia—the world’s fifth largest nation—toward Washington and broke diplomatic relations with China. 

 

Ironically, President Lyndon Johnson Americanized the Vietnam conflict that same year—supposedly to rescue Southeast Asia from communism—when most of the region’s resources and peoples already resided under pro-West regimes opposed to Chinese expansionism.

 

Southeast Asia began transitioning from Anglo-American predominance to U.S. hegemony at much the same time. Konfrontasi had so taxed Britain’s economy that British leaders (in opposition to Prime Minister Harold Wilson) insisted on a full military retreat from Malaysia and Singapore. As British power waned, Singapore and Malaysia entered America’s sphere of influence, throwing themselves into supporting the U.S. war in Vietnam. As America faltered in Vietnam, it also raced to consolidate its newly-acquired ascendancy in Southeast Asia, forging intimate ties with, and pumping economic and/or military aid into, Indonesia, Malaysia, the Philippines, Singapore and Thailand. These nations formed a geostrategic arc around the South China Sea. 

 

Here, then, are the overlooked origins of U.S. hegemony in East and Southeast Asia. 

 

For, by the late 1960s, the arc of containment had effectively confined the Vietnamese revolution and Chinese regional ambitions within Indochina, causing Premier Zhou Enlai to express frustration that China was “encircled” and increasingly “isolated” from regional and world affairs. In this light, Nixon’s rapprochement with China was undertaken not from a position of weakness but de facto hegemony in East and Southeast Asia. Indeed, Nixon found Chinese leaders eager to thaw relations with America. Even when the Indochinese states came under communist control in 1975, the arc of containment remained firm, its leaders keen to reinforce their ties with America. 

 

It seems remiss to contemplate Sino-U.S. rivalry today without acknowledging this history of American hegemony in East and Southeast Asia, particularly the outsized role of regional actors. For while China has today mounted a profound challenge to America in Asia, during the Cold War America’s generous economic programs and overwhelming military power won little in Indochina. Rather, U.S. hegemony before, after, and during the Vietnam War was created by anticommunists who chose to cast their lot with America against Chinese expansionism. In the days ahead, it is likely that the regional powers’ choices and actions will again determine how the Sino-U.S. rivalry plays out.

The Cold War Spy and CIA Master of Disguise: Writing the History of CIA Tactics in the Cold War

 

Jonna Mendez is a former Chief of Disguise with over twenty-five years of experience as a CIA officer working in Moscow and other sensitive areas. She is the coauthor, with her husband Tony Mendez, of Spy Dust, and her work has been featured in the Washington Post, WIRED, NPR, and other places. Her husband, Antonio (Tony) Mendez, perhaps best known for his book-turned-film ARGO, was one of the most celebrated officers in CIA history. He, sadly, passed away in late January. THE MOSCOW RULES: Tactics That Helped America Win the Cold War is their last book together.

 

 

What was it like going from being “in disguise” as a CIA agent to the whole world knowing that you were once an operative? What was that transition like?

 

I worked for the CIA for 27 years. That whole time I was under cover, whether living in the US or overseas. The cover would vary to fit my circumstances. It usually revolved around other official US government entities. While my colleagues knew, of course, of my true affiliation, my social contacts did not. This would include some close friends over many years – who thought I worked a very boring job for the US government. Some members of my family knew, but none of my friends. When Tony and I came out publicly, it created a good deal of friction with friends I was close to, and in fact I lost several friends who could not believe that I had deceived them over the years. That was painful. My foreign friends probably understood better than my American ones. It was also actually difficult to speak publicly at first. We were so inured to obfuscating that speaking the truth about such a simple thing was hard.

 

What do you think your personal role is in history and what was it like writing about it?

 

Tony Mendez and I worked together for many years. After our marriage the duality continued. When we began speaking and writing about our work, we did it together. Of course, he was the catalyst for our being able to speak – when others could not. But we had done much of the same work and we had many similar experiences. I think his role in history is heroic, while my role will be helping to publicly un-demonize the CIA. We thought that our role was to personalize the CIA; to demonstrate that it was composed of normal Americans trying to do the best job possible for their country. An apolitical group of really excellent employees. It may sound simplistic, but I think that together we opened up the door to afford a peek inside – at the machinery of this government agency and the people who work there.

 

I also feel that I had a creative role to play in the Disguise arena. We were beginning to produce very advanced disguise systems, modeled after some we had seen in Hollywood, and they became necessary tools in the denied areas of the world, the hard-to-work-in places where surveillance would almost prevent you from working at all – like Moscow. We were constantly innovating and creating new tools to enable our case officer colleagues to work on the streets even though they were surrounded by surveillance.

 

Does the current political climate shape how you discuss your work as an author and as a former CIA agent?

 

The politics do not shape the discussion as much as the need for sensitivity to the information that is classified. The CIA maintains a fairly tight rein on its former employees, insisting on publication review of any written material and keeping a watchful eye on public discussions. It is not politics that limit what we say, but the need to protect sources and methods. I have always been glad to comply. I have no desire to divulge classified information. On the other hand, when the CIA has seemed heavy-handed, I have not hesitated to question their decisions. Neither Tony nor I have felt constrained by the CIA in what we say or write.

 

You were a clandestine photographer and are still an avid photographer. What are the similarities and differences between preserving history through photography and the written word?

 

I really do believe that a photo is worth a thousand words. When two people are caught in the act of passing classified information, when the license plate of the car is clear in the print, when the face of the traitor is captured on film, this is evidence that is incontrovertible. In fact, no words are necessary. The photo is proof. But I would never dismiss the written word, the analytical approach to solving the problem, the connecting of the dots. However, if you have a photograph of the minutes of the meeting, or the scene of the crime, you have proof positive. Historically you want to have both.

 

As a member of the Advisory Board for the International Spy Museum, can you speak on public history and the importance of sharing your knowledge with wide audiences?

 

I see this as the primary role of the museum, an opportunity to educate the public and to shine some light on an area that has typically been off limits – the world of espionage. The American public is fascinated by this covert world and seems always interested in the subject. Being a member of the Spy Museum gives me an opportunity to explain how it works, how the tools are used through expansive training programs, and what the work product might look like. We are an international museum, so we approach these subjects with a wide-angle lens, so to speak. The museum connection offers a rare opportunity to connect with and educate the public at large.

 

There is a fascination of spy life that is often portrayed in the media, particularly in movies and television. Do you think this excitement is justified? Are there accurate portrayals?

 

It took me years to understand this fascination. I believe it is based in part on the pop culture image of the spy (Ian Fleming, Graham Greene, John le Carré), and also on the lure of the unknown, the secrecy surrounding all intelligence work. There is a basic curiosity about the work, and an assumption about the glamour surrounding the work, that draws the public in. If they only knew that for every five minutes of excitement, there are hours and hours of mundane planning, meetings and administrative details. There are few portrayals that I have seen that seem real and that is why I really don’t watch much espionage-themed media. One exception was The Americans – a TV show that I believe thoroughly captured the ethos of the culture of the spy. The characters seemed real; the situations close to life, and the disguises were fabulous. BBC also did some nice productions of John le Carré’s work. And Jason Matthews’ recent novels have an ability to place me back on the snowy streets of Moscow with danger around each corner.

 

As the former Chief of Disguise, are there any historical events that you think disguises played a role in? If not, how do you think disguises have helped shape the history of the world?

 

Yes, there are a number of historical events that revolved around the use of disguise and we have described some of them in our new book, The Moscow Rules. In a city where we could not meet face-to-face with our foreign agents, where the KGB surveillance was smothering our case officers, and where the use of tradecraft was the only thing that allowed our operations to take place, disguise was a tool that allowed operations to move forward. We used unique proprietary disguise techniques, derived from the make-up and magic communities in Hollywood, to protect our CIA officers and their Russian agents. These tools allowed the intelligence product to be delivered to American hands, resulting in a number of incredibly successful clandestine operations in the Belly of the Beast, the name we gave to Moscow. Failure in Moscow would result in the arrest and execution of our foreign assets. This was a life and death situation.

 

You also co-authored the book Spy Dust with your husband Tony Mendez. Why did this one seem important to write next?

 

Spy Dust was a natural follow-on to The Master of Disguise. We met with our editor after the publication of MOD over cocktails, and she asked about how we had met during our days in CIA. When she heard the story she basically commissioned the next book, Spy Dust. She thought the story would make a very interesting book. As it turned out, her publishing house was not the one that bought the manuscript. In fact, there was a heated discussion, once the manuscript was done, about whether our romance belonged in the middle of a spy story. We insisted that there was no book without that story, and so it stayed. It was difficult to write, as it involved the break-up of my marriage, but it was important to us, on several levels, to tell the story truthfully. And so we did.

 

Why should people read The Moscow Rules? What message do you hope they take away from it?

 

Many people feel that the Cold War is over and that we should move on with normalized relations with our old antagonists. The Moscow Rules opens with a late night scene at the gate of the American Embassy in Moscow. Set in June 2016, it details the savage beating of an American diplomat by the FSB, successor to the KGB, as he attempts to enter his own embassy. The beating continued into the embassy foyer, legally American soil. The American was medically evacuated the next day with broken bones. This was in 2016, in the middle of our most recent presidential campaign.

 

The FSB was exhibiting a consequence of The Moscow Rules: the heretofore unwritten but widely understood rules of conduct for American intelligence officers in Russia. My best guess was that the American had violated one of those rules: Don’t harass the opposition. The FSB is heavy-handed, as is Putin, a former intelligence officer.

 

The Moscow Rules were the necessary rules of the road when working in Moscow, the understood methods of conducting yourself and your intelligence operations that had proven themselves over the years. They were never before written down, but were widely understood by our officers. And they are dirt simple: Use your gut. Be nonthreatening. Build in opportunity but use it sparingly. Keep your options open. Use misdirection, illusion and deception. All good examples of The Rules.

 

What do you hope this book adds to the legacy of your husband, Tony Mendez, as well as your own?

 

The Moscow Rules is Tony’s fourth book and my second, third if you count my work on the book ARGO. Neither of us is looking for a legacy. Tony’s legacy is already well established; my goal lies more in the educational area. We always believed that our unique opportunity to speak for the CIA and to educate the public on the work that is done in their name was a chance to open the door to a myriad of career opportunities for young Americans who might never give the intelligence field a second thought. While I am not a traditional feminist, I can serve as an example of the continuing, on-going success of women in this field. And in our work with the International Spy Museum we have tried to further these same goals. Between the two of us, and in the books we have written, we have tried to further these goals.

The Origins of the Lost Cause Myth

 

 

The two most significant issues that led to war between the North and South were, most scholars acknowledge, slavery and states’ rights. Every Northern state had acted to abolish slavery by 1804, when New Jersey became the last to pass an abolition act, and, with an economy that did not depend on the labor of slaves, the North demanded that the South do the same. Yet in demanding that the South follow suit, the North, Southerners maintained, was in contravention of the issue of states’ rights—that each state had the right to craft and implement its own laws and policies without Federal governmental intrusion. Yet while the two were independent issues in theory, in praxis they were not. That comes out in Southern apologist George William Bagby’s somewhat mawkish essay, “The Old Virginia Gentleman”:

 

Fealty to the first great principle of our American form of government—the minimum of state interference and assistance in order to attain the maximum of individual development and endeavor—that was the Virginian’s conception of public spirit, and, if our system be right, it is the right conception.

 

Aye! but the Virginian made slavery the touchstone and the test in all things whatsoever, State or Federal. Truly he did, and why?

 

This button here upon my cuff is valueless, whether for use or for ornament, but you shall not tear it from me and spit in my face besides; no, not if it cost me my life. And if your time be passed in the attempt to take it, then my time and my every thought shall be spent in preventing such outrage.

 

According to Bagby, it was effrontery for Northerners to demand an end to slavery in the South. Once that demand was made, the two issues became dependent. Southerners, in Bagby’s view, fought hard to keep their slaves only, or at least chiefly, because they were told that they could not keep them.

 

That to which Bagby alludes has come to be called the Lost Cause—a sort of treacly revisit to the days before the Civil War, to a paradise moribund, never again to be revived. The Southern attitude is in some sense easy to understand. With up to 700 thousand men lost in the war on both sides—some 100 thousand more Union soldiers than Confederate soldiers and astonishingly nearly 25 percent of the soldiers on each side—Southerners do not have recourse to the sort of warrant available to Northerners: We won the sanguinary war, so that in itself is proof that God’s justice was on our side. Southerners, to justify the loss of some 260 thousand men, had to try to understand, from their perspective, why God slept while they fought.

 

The term Lost Cause was first used by Edward A. Pollard in The Lost Cause: A New Southern History of the War of the Confederates, published the year after the Civil War. Three Southern publications—Southern Historical Society Papers (1869), Southern Opinion (Richmond, 1867), and Confederate Veteran (1893)—entrenched the term and gave birth to a movement. The impassioned, lucid voice of Gen. Jubal “Old Jube” Early, the hero of the Battle of Lynchburg, who spent the final years of his life in Hill City after the Civil War, was prominent in Southern Historical Society Papers.

 

In 1866, Early wrote A Memoir of the Last Year of the War for Independence, in the Confederate States of America, in which he stated that he initially opposed secession of Southern states from the Union, but firmly changed his mind because of “the mad, wicked, and unconstitutional measures of the authorities at Washington, and the frenzied clamour of the people of the North for war upon their former brethren of the South.” Lincoln and his cronies were the real traitors of the Constitution. Recognizing the right of revolution against tyrannical government “as exercised by our fathers in 1776, … I entered the military service of my State, willingly, cheerfully, and zealously.”

 

The Civil War, Early unequivocally said in The Heritage of the South, was never about slavery from the perspective of the South. “During the war, slavery was used as a catch word to arouse the passions of a fanatical mob, and to some extent the prejudices of the civilized world were excited against us; but the war was not made on our part for slavery.”

 

Early argued that Southerners had long ago grasped that there was nothing objectionable, moral or otherwise, about the institution. Slavery was a natural state of affairs for Blacks, he stated, because of their biological inferiority.

 

The Almighty Creator of the Universe had stamped them, indelibly, with a different colour and an inferior physical and mental organization. He had not done this from mere caprice or whim, but for wise purposes. An amalgamation of the races was in contravention of His designs, or He would not have made them so different.

 

Blacks, added Early, in their state of subjugation were better off than they were in West Africa, where they wallowed in barbarism, sometimes to the extent of practicing cannibalism. In addition, said Early, black slaves on Southern plantations and in Southern cities were certainly better treated than the Blacks, and Whites, in Northern industrial sweatshops.

Early next turns to an oft-given objection to slavery: Jefferson’s Declaration of Independence. The argument asserts that slavery is wrong because “all men are created equal.”

 

The assertion that “all men are created equal,” was no more enacted by that declaration as a settled principle than that other which defined George III to be “a tyrant and unfit to be the ruler of a free people.” The Declaration of Independence contained a number of undoubtedly correct principles and some abstract generalities uttered under the enthusiasm and excitement of a struggle for the right of self-government. … If it was intended to assert the absolute equality of all men, it was false in principle and in fact.

 

Observation, Early intimates, is sufficient to show the de facto falsity of the equality of Blacks and Whites.

 

Yet Jefferson did intend the equality of all men in his Declaration. In his original draft of the document, he castigates George III for keeping “open a market where MEN should be bought & sold.” The capitalization of men is Jefferson’s and it occurs to underscore the notion that Blacks qua men are deserving of the same fundamental rights as all other men. Moreover, Jefferson was a cautious, diligent writer who—his Summary View of the Rights of British America perhaps being an exception—was not wont to be moved by “enthusiasm and excitement.”

 

Jefferson likely did believe that Blacks were intellectually and imaginatively inferior to Whites; he says as much in Query XIV of Notes on the State of Virginia. Yet there he also maintained that Blacks were morally equivalent to all other persons, and thus undeserving of “a state of subordination.” That Isaac Newton was intellectually superior to all others of his day was not warrant for him having God-sanctioned rights that others did not have. God’s justice for Jefferson looks to the heart, not to the head. Early’s dismissal of Jefferson’s Declaration as containing an ineffective argument against slavery is harefooted and unpersuasive.

 

Old Jube then turns to what might be construed as a legal argument for slavery—an argument from precedent. There is constitutional sanction of slavery because there is constitutional sanction of states’ rights.

 

The Constitution of the United States left slavery in the states precisely where it was before, the only provision having any reference to it whatever being that which fixed the ratio of representation in the House of Representatives and direct taxation; that in reference to the foreign slave trade, and that guaranteeing the return of fugitive slaves. Had it been proposed to insert any provision giving Congress any power over the subject in the states, it would have been resisted, and the insertion of such provision would have insured that rejection of the Constitution. The government framed under this Constitution being one of delegated powers entirely, those powers were necessarily limited to the objects for which they were granted, but to prevent all misconception, the 9th and 10th amendments were adopted, the first providing that “The enumeration in the Constitution of certain rights shall not be construed to deny or disparage others retained by the people,” and the other that: “The powers not delegated to the United States by the Constitution, nor prohibited by it to the states, are reserved to the states respectively, or to the people.”

 

Early here claimed constitutional warrant for slavery being an issue subordinate to states’ rights. That is not necessarily to say that it was less of an axiological issue—it may be that people on the whole felt more strongly about slavery, pro or con, than about states’ rights—but that the issue of slavery, whatever its worth, was to be decided by each state and not by the federal government. However, Northerners, at least some of them, thought the issue was of such significance that it transcended states’ rights.

 

Here again we might profit in analysis by comparing Early’s view on constitutional warrant with Jefferson’s. Jefferson too worried mightily about the issue of states’ rights and slavery—the Missouri problem brought it into focus—and he too recognized that there was no constitutional warrant for eradication of the institution. Thus, he too maintained that the issue of slavery ought to be up to the individual states.

 

Yet Jefferson did not have the same regard for the sacrosanctity of the Constitution that Early did. For Jefferson, constitutions were living documents that needed overhaul with each generation, as each generation in the main advanced in knowledge and such advances needed to be instantiated.

 

Moreover, Jefferson, following other Enlightenment philosophers, certainly paved the path in his Declaration for a new way of looking at persons and fundamental rights: They were natural, God-given, and given to all “MEN.” Jefferson knew that it was only a matter of time before the Constitution would bend on the issue of slavery, as slavery was an institution at odds with the natural state of affairs, and God’s justice, he says in Query XVIII, “cannot sleep for ever.” And so his attachment to the right of each state to determine for itself the issue of slavery was provisional. For Early, the Constitution’s silence on the issue was the final word, and that seems somewhat desperate.

 

In sum, the problem with Old Jube’s efforts to vindicate the South by showing that their participation in the war was due to states’ rights, not slavery, is that the two issues, theoretically distinct, were in praxis intertwined. One sees that quite plainly in the general’s many arguments in The Heritage of the South on behalf of the South not entering the war on account of slavery.

 

Yet Northerners, and enlightened Southerners like Jefferson, were increasingly coming to see that no group of humans ought to have the status of property to another group—that slavery was one issue that ought not to be swept under the states’-rights rug. One large reason for that illumination was the global respect won by Thomas Jefferson’s own Declaration of Independence over the years and its unbending assertion of the moral equality of all persons.

Silent Spring: Why Rachel Carson’s words still ring true today

 

Rachel Carson was one of the early pioneers of environmental science. She fought against the tide of establishment repression, and her own ill health, to get her meticulously conducted research across. As a woman in science, she was viewed as an outsider, but she made her voice heard through her writing: highly readable and approachable accounts of scientific facts. Man was damaging the environment, and our climate, and action was needed. But the alarm bells she sounded are often ignored, nearly sixty years later.

 

Women have always been involved in science. Across biology, chemistry, physics and medicine, countless unacknowledged women have participated in scientific endeavours. But, struggling against oppression, their voices have rarely been heard. Rachel Carson is one of the few exceptions. That’s mainly due to the importance of her message, the way she got it across and the battles that ensued.  

 

Rachel Carson was immersed in nature from an early age. Inspired by her mother and their regular nature expeditions, she crafted her observations into beautiful written stories. Although she initially majored in English at University, her love of Biology, combined with her early prowess as a writer, meant she was perfectly placed for her lifetime work.

 

Rachel’s rural upbringing also meant she witnessed the effects of humans on the environment. Several family members worked in the nearby Pittsburgh industrial power plants, with their towering chimneys which spewed out toxic chemicals into the atmosphere. One particular toxic chemical would propel Rachel into the limelight. 

 

 

After the success of a trilogy of books about the marine environment, she wrote Silent Spring.  Using rigorous scientific assessment, Rachel explained how pesticides like DDT entered the food chain and damaged a whole host of creatures beyond the intended target. Silent Spring kick-started the environment movement and began a roller coaster ride, as Rachel adjusted to fame and was diagnosed with terminal cancer. 

 

Rachel met resistance at every turn. Silent Spring was hard hitting and attacked commercial companies, condemning them for sacrificing the health of the environment in order to generate more profit. She criticized many scientists who were researching insecticides because chemical companies were pouring money into universities. Big business was under Rachel’s forensic gaze. 

 

Whilst the politicians were initially equally skeptical about Rachel’s findings, the tide started to turn. In 1962, the year Silent Spring was published, President John F Kennedy cited the book and appointed a committee to study pesticide use. Over the next two years, the government increasingly called for heightened vigilance and gradual reductions in the use of environmentally-unfriendly pesticides.

 

Sadly Rachel didn’t live to see the effects of Silent Spring. She died in 1964, at the age of 56. In 1972, DDT was banned in the USA, although it is still in use in some countries. The Clean Water Act was passed in 1972 and the Endangered Species Act in 1973.

 

Rachel’s message didn’t just influence the government or public: it also influenced the business community. As Rachel’s message of environmental and climate protection began to percolate, the executives at Exxon, the world’s largest oil company, wrote a memo. Distributed internally to a select group in 1982, it spelled out that maintaining a stable climate would require “major reductions in fossil fuel combustion”, otherwise, “there are some potentially catastrophic effects that must be considered. Once the effects are noticeable, they may not be reversible.”

 

Today, Rachel Carson’s message is as important as ever. President Donald Trump has alleged climate scientists have a political agenda. Rachel faced similar accusations, as critics suggested Silent Spring was part of a left-wing conspiracy to bring down capitalism. In 2017, Trump pulled the US out of the landmark 2015 Paris climate agreement, claiming the international deal to keep global temperatures below 2 degrees Celsius was disadvantageous to US industry and its workforce. Trump continues to ignore warnings from his own government agencies, dismissing a 2018 report of the devastating economic consequences from climate change. Any future US president would do well to heed the likes of Rachel Carson and her successors and invest in clean energy policies, protect the environment, and promote biodiversity.

 

Rachel’s legacy also lives on as other women are prominent advocates for combatting climate change. The teenage activist Greta Thunberg is a lucid and unremitting climate campaigner. Speaking at the World Economic Forum in Davos earlier this year, she told attendees: “I don’t want you to be hopeful, I want you to panic. I want you to feel the fear I feel every day. And then I want you to act. I want you to act as if our house is on fire. Because it is.”

 

Rachel Carson was certainly fearful. She saw the smoke of the factories in her childhood and the sight never left her. Without the immediate and far-reaching power of the internet, she used writing as her tool. Rachel knew how to construct a story, and this story won’t have a happy ending unless we act.

 

Like Greta’s, Rachel’s message was unsettling. Cutting edge science often is. Building hope for the future is part of the equation, but that has to be balanced with an understanding of the impact of our actions on our planet and humankind. The stories of the other biologists, chemists, physicists, and doctors featured in Ten Women Who Changed Science and the World reveal that they too understood the power of science to change lives. 

Make History Accessible: The Case for YouTube  

Crash Course is a YouTube channel that covers historical events

 

History is in trouble. 

 

This is not a new observation. Benjamin M. Schmidt wrote a fantastic piece detailing the severe decline of history majors since the Great Recession. From 2011 to 2017, the total number of history degrees awarded dropped by nearly 33%. Moreover, it does not take a lot of effort to notice this decade’s persistent focus on supporting STEM-related ventures at the expense of the humanities. The unfortunate perception is that history is not a valuable undergraduate degree and history departments can do little to fight back because of their limited resources. This is a crisis.   

So how can we, as historically minded people, alleviate this crisis? Part of the answer could come from the widely popular video-sharing service YouTube. It presents a great opportunity for both professional history educators and amateurs to enhance the public’s interest in history.

YouTube’s Strengths 

 

YouTube might seem like an odd choice. It’s not a service exclusively built for history and has received bad press recently that has hurt the business. But, these issues do not take away from YouTube’s four strengths: design, reach, lack of restrictions, and community-building.    

1. Design: History requires a medium that encourages long-form communication and YouTube encourages just that. A simple way to understand this relationship is: the longer the video is, the more likely it is to have ads, creating more advertisement revenue for the creators, their partners, and YouTube itself. Even YouTube’s “recommended” algorithm has been suggesting longer videos to its users when compared with the user’s starting video. This intentional design is one of the reasons why YouTube is so popular and provides such a lucrative educational opportunity. There are only benefits in uploading history lectures to YouTube, and its design can enable information to spread like wildfire.   

2. Reach: On a given day, more than a billion people visit YouTube—and that number is only growing. If you are an internet user, chances are you will visit YouTube no matter your age. But, the most impressive statistic is that almost a third of those people are dedicated users who watch multiple channels and spend a vast amount of time using the service. So not only is the reach vast, but also it can be concentrated to particular users. This reach and active user base allows for niche histories to thrive that otherwise could not have without a global audience. It is personalized mass media and that is an important educational opportunity.   

3. Lack of Restrictions: While a potential weakness, this is also YouTube’s greatest strength. Theoretically, one can start a multi-million dollar business with just a video camera and editing software—and it has been done many times over, resulting in the phenomenon of the “YouTube Celebrity.” In contrast to undemocratic and centralized cable companies, YouTube is far more democratic and decentralized, which creates a more conducive atmosphere for its users and creators. While cable companies spend enormous amounts of resources garnering millions upon millions of views, YouTube creators spend only time and few material resources to create a smaller but similar impact. If a history professor wanted to share their course online, then they would simply need a camera and some editing software to reach millions worldwide.   

4. Community-Building: One reason YouTube is so dominant is its already-existing communities. YouTube is so ingrained that any new service would have to be far better for creators and users to even consider switching. Luckily, there is already a healthy amateur history presence on the platform. Notable examples include the channel "The Great War" (173,372,564 views and 1,030,696 subscribers) and Crash Course's World History course (54,542,220 views). Moreover, YouTube can also serve as a community "video library" of sorts, storing everything from historical archive footage to "pop history." One notable case is the Iowa State University Archives, which began using YouTube in 2008 and has experienced considerable success.   

These factors, even combined, do not make YouTube unique. But it is currently a great forum for historical discussion, appreciation, and education. Granted, it is no substitute for undergraduate study in history, but it is both a great complement and an introduction. Moreover, using YouTube as an educational tool is not a new idea; in fact, it is a successful one. Historian Joe Coohill has argued that incorporating images and videos into his lectures had a positive impact, and Alan Marcus makes a similar case for film in secondary education.   

There is a whole list of problems with using YouTube, but they fall into three general categories: misinformation, disinformation, and the "scarcity or abundance" problem proposed by the late Roy Rosenzweig. Only the last problem is unique to the internet; the other two are amplified by it, but they are not new issues. If YouTube is not your cup of tea and you prefer Coursera, Khan Academy, or a university open-source initiative, the point still stands. YouTube is one suggestion, but the overall point is one of accessibility. Accessibility is central to education, and we should adapt to ensure that more people have access to serious history. As a history student, I encourage historians to use YouTube, for it is their duty to ensure that people know and appreciate the past. This can include uploading lectures to YouTube, partnering with services like Coursera, consulting with popular history creators, or even starting their own podcasts. The important takeaway is that adapting to new communication technologies is imperative and historians should feel free to experiment. 

A Fresh Take on Watergate Illuminates the Present

 

As evidence of illegal activity in the recent presidential election mounts, the attorney general appoints a special prosecutor. The president, after denouncing the news media for false reporting, calls a press conference to insist he has done nothing wrong. In court hearings, evidence of campaign dirty tricks and secret pay-offs emerges, and a growing chorus of Congressional Democrats calls for impeachment proceedings. 

 

While these could be scenes from recent CNN coverage, they actually come from 1973-4, the last years of the Nixon presidency. 

 

Washington journalist John Farrell's book, Richard Nixon: The Life, provides a fascinating narrative that takes the reader inside the mind of a troubled president obsessed with taking down his perceived "enemies." 

 

Farrell, a former White House correspondent for the Boston Globe, has written two previous books, including Tip O'Neill and the Democratic Century; he brings a deep understanding of Washington, D.C., culture and an eye for telling anecdotes. 

 

Farrell's book focuses exclusively on Nixon, concluding with his death in 1994, and never mentions Donald Trump. However, readers will inevitably reflect on the current presidential crisis as the author leads us through the racist rants, paranoid visions, and surreal plotting against opponents that were regular features of Nixon's White House. 

 

While Richard Nixon's rise and fall has been repeatedly examined (there are more than a dozen biographies of him), Farrell's account offers many new insights. He has tapped a rich trove of new material, drawing on some 3,700 hours of White House tapes (many held in secret until 2013), 400 oral histories from Nixon associates compiled by Whittier College, and books such as The Haldeman Diaries. By weaving together key taped conversations and candid observations from Nixon's close associates, Farrell provides a day-by-day, sometimes hour-by-hour, account of how the Watergate fiasco unfolded and of the dark world of Richard Nixon's restless mind, showing his obsession with "enemies," who included Jews, blacks, Democrats, and Ivy League "eggheads."       

                                

Although Trump and Nixon appear very different in demeanor and family background (e.g. Nixon’s father was dirt poor), they share some important personal traits.       

      

Both men had a distant mother and demanding father, both endured the death of a favored older brother and both harbored deep insecurities that led to driving ambition and a “win-at-any-cost” attitude. Both men displayed an unnatural sensitivity to criticism and an obsession with striking back at perceived enemies. Both tried to conduct their affairs in deep secrecy, obtained money from dubious sources and hired unsavory characters to carry out their dirty work for them.    

 

While the basic facts of Watergate have been recounted many times, most notably in the book and film versions of All the President's Men, those are primarily views from the outside. Farrell's book takes us inside the White House, detailing the daily interactions of Nixon and his closest lieutenants: Haldeman, Ehrlichman, and Kissinger. We see Nixon's restless mind mulling foreign policy initiatives and the domestic political scene, but also returning time and again to "getting even" with his enemies, real and imagined.

 

The first years of Nixon's presidency, 1969-70, were largely successful. He began to wind down the Vietnam War, moved to end the draft, lowered the voting age, began nuclear weapons limitation talks with the Soviets, and founded the Environmental Protection Agency. Although he enjoyed high favorability ratings and seemed assured of re-election, he railed against anti-war protesters and exploded in anger when critical articles appeared in the press. 

 

Sometimes his impulsive reactions to news events were irrational. After a Middle East airplane hijacking, he demanded that the Air Force bomb Damascus, Syria. Fortunately, his staff and cabinet officers ignored this and other dangerous demands, and Nixon usually forgot about them and moved on to other subjects.

 

Haldeman and Ehrlichman, whom the press dubbed the "Praetorian Guard" of the administration, generally acted as filters, screening out Nixon's most irrational instructions before they reached lower-level personnel. Unfortunately, after two years they suffered from overwork, became distracted, and allowed the formation of "the Plumbers," a group of a dozen ex-spies and freelance thugs recruited to stop internal leaks and gather damaging material on opponents.

 

The Pentagon Papers

The calm at the White House was shattered on June 13, 1971, when The New York Times published a 5,000-word excerpt from the Pentagon Papers, a 7,000-page study of the origins and conduct of the Vietnam War. It had been taken from the Defense Department by one of its authors, Daniel Ellsberg.  

 

Ironically, Henry Kissinger's immediate reaction to the story was one of relief. "This is a gold mine," he told Nixon. "It pins it all on Kennedy and Johnson." But Nixon reacted differently. He saw it as part of a conspiracy, a plan to bring him down. He worried that other documents might be leaked, ones that revealed the dark secrets of his own Vietnam War policies: the clandestine carpet bombing of Cambodia, the plans for bombing dikes in North Vietnam, and the consideration of atomic weapons. 

 

Nixon now pressed harder than ever for retaliation, demanding widespread wiretapping and burglaries to obtain opposition files. Earlier restraints on the Plumbers and other operatives were lifted.  

 

At 3 a.m. on June 17, 1972, a team of burglars clad in business suits and surgical gloves, directed by Gordon Liddy and E. Howard Hunt, was arrested after breaking into the Democratic National Committee headquarters in the Watergate office complex. The Washington Post assigned a pair of young reporters, Bob Woodward and Carl Bernstein, to cover the story.

 

Thus began the two-year Watergate scandal that would force Nixon from office. In July 1974, the House Judiciary Committee approved three articles of impeachment against Nixon, for obstruction of justice, abuse of power, and contempt of Congress. The full House was preparing to take up the articles when Nixon resigned on August 9 and Gerald Ford became president. 

 

Karl Marx famously wrote in 1852 (commenting on the dictatorship of Napoleon III in France) that “history repeats itself, first as tragedy, then as farce.” 

 

In the course of Donald Trump’s presidency, with its illegal campaign activities and defiance of Congress, history is repeating itself.  So far, it is a farce, but it could quickly turn into a tragedy. 

 

Richard Nixon: The Life provides fascinating insight into a 20th-century presidential crisis. In 1974, our Constitution and the division of power among the three branches of government were severely tested, but they survived intact. Now, in the 21st century, we can only hope for a similar, successful outcome.

Environmental Historian Sara Dant: "History Is As Relevant Today As Ever."

 

Sara Dant is Professor and Chair of History at Weber State University. Her work focuses on environmental politics in the United States with a particular emphasis on the creation and development of consensus and bipartisanism. Dr. Dant’s newest book is Losing Eden: An Environmental History of the American West (Wiley, 2017), a "thought-provoking, well-written work" about the interaction between people and nature over time.  She is also the author of several prize-winning articles on western environmental politics, a precedent-setting Expert Witness Report and Testimony on Stream Navigability upheld by the Utah Supreme Court (2017), co-author of the two-volume Encyclopedia of American National Parks (2004) with Hal Rothman, and she has written chapters for three books on Utah: “Selling and Saving Utah, 1945-Present” in Utah History (forthcoming), “The ‘Lion of the Lord’ and the Land: Brigham Young's Environmental Ethic,” in The Earth Will Appear as the Garden of Eden: Essays in Mormon Environmental History, ed. by Jedidiah Rogers and Matthew C. Godfrey (Salt Lake City: University of Utah Press, 2019), 29-46, and “Going with the Flow: Navigating to Stream Access Consensus,” in Desert Water: The Future of Utah’s Water Resources (2014). Dr. Dant serves on PhD dissertation committees, regularly presents at scholarly conferences, works on cutting-edge conservation programs, and gives numerous public presentations around the West.  She teaches lower-division courses in American history and upper-division courses on the American West and US environmental history, as well as historical methods and the senior seminar.

 

What books are you reading now?

 

I just finished E.C. Pielou’s A Naturalist’s Guide to the Arctic in preparation for a 12-day river trip on the Hulahula River through the Arctic National Wildlife Refuge.  I feel a real urgency to see this remarkable landscape and its plants and animals before it vanishes or becomes something entirely different as a consequence of global warming and climate change.

 

Currently, I’m reading Doug Brinkley’s epic biography of Theodore Roosevelt, The Wilderness Warrior: Theodore Roosevelt and the Crusade for America, in part to bring some historical context to my involvement with the conservation efforts of American Prairie Reserve, which is attempting to restore some of the ecosystems lost in the late 19th century on the Great Plains of Montana. It’s a book I wish I could assign in my classes because of the sparkling writing and complex historical context Brinkley provides, but it’s 1,000 pages long and I fear my students would likely stampede for the door if they saw it on the syllabus.

 

One other really different book that I read recently is Rob Dunn’s Never Home Alone: From Microbes to Millipedes, Camel Crickets, and Honeybees, the Natural History of Where We Live.  It goes into remarkable detail about the largely invisible-to-us habitat that our homes and even bodies provide and how being dirty can actually be healthy.  It makes you want to change your showerhead immediately, though.

 

What is your favorite history book?

 

I’m not sure I could really pick a “favorite,” but I can tell you about the book that motivated me to pursue environmental history: William Cronon’s Changes in the Land: Indians, Colonists, and the Ecology of New England.  I read this as part of an early America field course in graduate school and, at the time, environmental history (the interaction of people and nature over time) was a relatively new line of inquiry.  Cronon’s elegant discussions of how the introduction of capitalism and the market so transformed nature that “by 1800, Indians could no longer live the same seasons of want and plenty that their ancestors had, for the simple reason that crucial aspects of those seasons had changed beyond recognition” (169) resonate right up to the present.  I have found this book to be incredibly useful and inspirational in my own work but also in the classroom. Cronon has a real gift for explaining complex ideas in a way that makes them accessible to all readers.  I find myself returning to it again and again and each time, taking away something new.

 

Why did you choose history as your career?

 

I did so with great reluctance, actually.  I come from a long line of teachers, so naturally, I wanted to be anything but a teacher.  My undergraduate degree is in Journalism and Public Relations, which was a terrific initiation into writing directly and concisely, although I had a minor in history simply because I loved it.  I did a master's degree in American Studies with the idea that I could learn broadly about the American past by combining my intrinsic love of history with literature, culture, and economics.  But I had no idea what to do next, so I took a job at a two-year college where I was the entire history department.  That really did it for me.  I loved interacting with students, I found a way to teach environmental history that combined classroom learning with outdoor field experience, and I finally discovered that I was and always had been a historian.  With that resolved, I returned to graduate school, earned my PhD in history, and have been doing what I love - writing, researching, and teaching - ever since.

 

What qualities do you need to be a historian?

 

I think the best historians are inherently curious and tenacious.  We not only ask “why” but also “how did this happen”?  Often, though, the answers to those questions aren’t easy to find, so a good historian has to be a bit of a detective.  In environmental history, we have the advantage of drawing upon other disciplines to help answer tough questions.  If you want to know if log and tie drives occurred on a particular river in the late 19th century, for example, you need to look at journals and newspapers, naturally, but stream-flow and tree-ring data are invaluable as are coniferous tree re-growth rates and forest composition studies.  The best historians are the ones who think unconventionally about their sources. 

 

Who was your favorite history teacher?

 

This is not the typical response: the high school football coach.  Like so many students, I went to a high school where the football coach was also the history teacher.  Unlike most students, I got a brilliant history teacher - Jesse Parker, who taught me to love history and football.  He made my brain hurt.  Then when I went to graduate school, I was fortunate to have LeRoy Ashby at Washington State as my mentor.  His genuine love of history and his brilliance in the classroom inspired and inspires me.

 

What is your most memorable or rewarding teaching experience?

 

I'm not sure I can pick a specific event.  The best part of my job as a professor is watching the lights go on in a student who "gets it."  I think the best student evaluation comments I receive are the ones where the student "never liked history" before and now is completely fired up.  I had one student, for example, who used the research skills he learned from a class project to completely restructure the recycling practices of the company he works for.  But it's equally rewarding when a non-traditional student brags about the previous evening's dinner conversation where she got to tell her kids what really caused the near-extinction of the bison "and they thought I was SO smart!"  

 

What are your hopes for history as a discipline?

History is as relevant today as ever.  We have many challenges - environmental, political, social, cultural - that are the end-product of our historical arc.  Understanding how we got here, what has worked and what hasn't in the past, gives us the best chance of moving forward successfully.  My work places particular emphasis on the creation and development of consensus and bipartisanism, and I firmly believe that people care about what they know, so understanding more about one another facilitates the kind of dialogue and communication that fosters community and sustainability.

 

How has the study of history changed in the course of your career?

History has become ever more inclusive, which makes it more challenging to tell complete stories about the past.  The best history is complicated and messy, just like the present, but figuring out how to convey that complexity can be tricky.  When I wrote Losing Eden: An Environmental History of the American West, for example, I wanted to make it accessible and compelling - I wanted to re-arrange the furniture in people’s heads - so that they could look at the world around them, the American West in particular, with renewed appreciation and clarity.  But it also meant I wasn’t going to write about wars or race relations or politics (much).  The best understanding of the past must ultimately come from reading broadly and deeply across many fields and interpretations and I think the discipline has gotten better about giving voice to the many rather than the few. 

 

What is your favorite history-related saying? Have you come up with your own?

My students know that my favorite question is: at what cost?  Who or what pays the price for the decisions we’ve made and how does that play out over time?  To me, it’s a terrific shorthand for getting at the essence of history - the study of change over time - and for ensuring a comprehensive and complicated understanding of both the past and the present.  It’s the driving question in Losing Eden and I think it’s invaluable to making history about more than just names and dates.

 

What are you doing next?

Next up are a couple of projects.  The first is a journal article and a report on the historical uses of rivers in Utah as a window into the larger role of river commerce in the interior West in the late 19th century.  The other is a long-standing project to examine the development and implementation of the Land and Water Conservation Fund - the economic engine behind many significant conservation efforts of the 20th century and a remarkable model of political bipartisanism that endures in the 21st century.

Ideologies and U.S. Foreign Policy International History Conference: Day 1 Coverage

 

On May 31 and June 1, 2019, Oregon State University hosted the "Ideologies and U.S. Foreign Policy International History Conference," organized by Dr. Christopher McKnight Nichols, associate professor of history and director of the Oregon State University Center for the Humanities; Dr. Danielle Holtz, a post-doctoral fellow at Oregon State University; and Dr. David Milne, a professor of modern history at the University of East Anglia in Norwich, England.  The conference brought together academic and independent historians and political scientists from universities across the US and England to discuss the history of ideology and US foreign policy from many different points of view.

 

The first day of the conference began with introductions from organizers Drs. Nichols, Holtz, and Milne. Dr. Nichols, in his introduction, made sure to thank the major sponsors of the conference, which included Oregon State University, the Richard Lounsbery Foundation, the Carnegie Corporation of New York, Patrick & Vicki Stone, Citizenship & Crisis, the Oregon State University Center for the Humanities, and the Stoller Family Estate. Dr. Nichols also shared with attendees that the essays presented at the conference will, in the near future, become chapters of a book that will delineate the state of the field for what he termed the "intellectual history of U.S. foreign policy," an emerging disciplinary area that he suggested is roughly ten to fifteen years old. 

 

Dr. Nichols' introductory remarks concluded that ideas and ideologies have heavily shaped US foreign policy, as affirmed by Michael Hunt in his path-breaking 1987 book "Ideology and U.S. Foreign Policy."  Dr. Nichols stressed the need for continued discussion of the intersection of ideology and foreign policy, with Hunt's book serving as a framework, and for moving beyond that project to build on the vibrant new directions established by recent work on the role of ideas in U.S. foreign relations, broadly defined.  

 

Dr. Nichols' remarks were followed by remarks from Dr. Holtz, who brought to the discussion an emphasis on the ideological clashes in Trump's White House to reveal how even seemingly non- or under-ideological administrations can act ideologically; this, she argued, is one reason it is important to evaluate the intersection of ideology and foreign policy historically. She added that Trump's aggressive posturing toward those who disagree with him signals a state of ideological fragmentation that interferes with how foreign policy is executed. To help frame the multi-day conference and the approach to the resulting book, Holtz elaborated on philosophical and theoretical approaches to ideologies and ideology critique, drawing on concepts advanced by Louis Althusser, among others. She emphasized that the historical record often reveals that ideological debates are about "a struggle over the obvious," a proposal that seemed to resonate with the audience. 

 

To Dr. Milne, who concluded the introductory remarks, there is a great deal of significance in approaching foreign policy and ideology differently than Michael Hunt did, and he added the importance of differentiating intentional from accidental ideology. Milne made the case that Hunt's three-part schema, focused on "visions of national greatness," "the hierarchy of race," and a strong counterrevolutionary impulse, offers a compelling framework for thinking through the dynamic between ideology and U.S. foreign policy, but not the only one. He asked the contributors and audience: "Does Hunt's definition function for your intervention or do you opt to define 'ideology' differently? What might a theoretical reconfiguration of ideology do to open new avenues of inquiry for historians of ideas and U.S. foreign relations?" Citing numerous examples from history and from the works of the contributors to the conference, Milne concluded by stating that the "work at this conference represents a broad and diverse series of interventions in the historiography" of U.S. foreign relations. Praising Dr. Michaela Hoenicke-Moore's insights in her chapter, Dr. Milne suggested this historiography "has encouraged essentializing interpretations and a reductionist impulse, searching for and finding recurring and persistent patterns, often promoted through elite culture, marginalizing and dismissing voices and movements that resisted, questioned, and rejected the call to arms." When reading these papers and listening to the series of talks, he encouraged the audience, it is worth reflecting on the advantages and disadvantages of defining ideology and U.S. foreign relations broadly and of moving beyond some of the older models of an "ideology-elite-shaped policy" nexus. You can watch the introductory remarks on C-SPAN.

 

 

(Dr. Christopher McKnight Nichols' Introductory Remarks)

 

After the introductions, the first panel, moderated by Dr. Nichols, presented papers on "concepts of the subject-state."  Dr. Matthew Kruer, from the University of Chicago, presented an essay called "Indian Subjecthood and White Populism in British America," in which he discussed the subjecthood of Indians in relation to the British crown in the 17th and 18th centuries.  At that point in history, the crown accepted the Indian tribes in North America as subjects of the Empire while allowing them to keep their sovereignty.  The British accepted this arrangement since, in their world view, it was better to have subjects in North America than conquered peoples.  However, according to Kruer, all this changed in the mid-18th century after certain tribes began to challenge British authority, and white settlers, feeling endangered by the Indians, unprotected by the crown, and likely unmoored by being equal to indigenous peoples in the "great chain of being" under the King, massacred Indians.  Settlers justified this violence by claiming that being equals to Indians threatened their rights as Englishmen.  This, Kruer argued, was the historical shift from subjects to citizens; it defaulted to western colonial power and as such became a de facto endorsement of white supremacy.  

 

The second essay of this panel was presented by Dr. Benjamin Coates, from Wake Forest University.  Under the title "Civilization and American Foreign Policy," Coates traces the historical rhetoric and the complex, sometimes dualistic meaning of the word "civilization" in the US from as early as the 18th century all the way to the present.  He shows how it was applied over the decades in the context of US foreign policy, especially by presidents when addressing the nation, and how the word's rhetorical meaning morphed to keep up with the ideological shifts those presidents championed.

 

The third essay of the panel was presented by Dr. Michaela Hoenicke-Moore, from the University of Iowa.  The title of the essay was "Containing the Multitudes: Nationalism and U.S. Foreign Policy Ideas at the Grassroots Level."  In this essay, Dr. Hoenicke-Moore argues that the voices of the people during and shortly after the Second World War had little to no impact on how the US conducted its foreign policy.  She arrived at this conclusion by researching foreign policy from the bottom up.  Dr. Hoenicke-Moore does point out that a great deal of fearmongering was present in the political discourse, but in reality, after the war, for instance, few people at the grassroots level saw the Soviets as enemies, a fact that was largely ignored by those in government.  She concludes by adding that, in the end, the elites were able to impose their will; the people at the grassroots level were not. 

 

The last essay of this panel was presented by Dr. Mark Bradley, from the University of Chicago, and was titled "The Political Project of the Global South."  In this essay, Dr. Bradley argues that there is an imaginary global south that cannot fully and completely become an object of foreign policy. Trying to study the global south separately thus allows us to see US foreign policy differently, by looking at ideology from a different angle that would eventually bring us back to US history. Dr. Bradley then brings into question continuity and rupture.  Which is more important?  Should continuity sometimes take a back seat to rupture?  He argues that the late 20th century is a time of rupture, a time of change, not only because of the end of the Cold War but because of many other structural, economic, and technological changes in the world.  By analyzing yearly UN speeches by world leaders, Dr. Bradley is also able to trace the rhetoric those leaders used when referring to the global south, concluding that there have been changes in the rhetoric about that region of the globe.

 

After Dr. Bradley's presentation, the Q&A portion of this panel began. There were several questions and discussions with the panel, starting with a discussion of Michael Hunt's book and how it directed the research of the panelists.  In this respect, the panelists spoke about the intersection of effect and ideology, about power and power relations, and about how those power relations brought about new social change.  Traditional ways of thinking about power were also discussed in the context of Hunt's book.  The discussion then shifted to the lack of grassroots influence on the decisions made by government officials with respect to foreign policy.  The panel concluded that the grassroots is usually mute in those matters, while elites tend to make their voices heard and are often able to get what they are after.   The panel ended the Q&A by briefly discussing the rhetoric around the word "civilization" and how it is used in the US and in other parts of the world. This ended the morning proceedings.

(Doctors David Milne, Marc-William Palen, Nicholas Guyatt, Danielle Holtz, and Matt Karp)

 

After lunch, the second panel was introduced by its moderator, Dr. David Milne.  The theme of the panel was "Concepts of Power," and it kicked off with an essay by Dr. Marc-William Palen, from the University of Exeter, in the United Kingdom.  Dr. Palen's essay was titled "Competing Free Trade Ideologies in US Foreign Policy." In the essay, Dr. Palen traces US free trade ideology and how it shifted across the late 19th and early 20th centuries.  He separated the trade ideology of the US into three major phases: the Jeffersonianism of the 1840s, which had a protectionist attitude; the Cobdenism of the 1900s; and the neo-liberalism of present-day free trade.  The freedom to trade freely, according to Dr. Palen, has kept the peace, an approach that can be considered radical.  The value of free trade has been so high for the US that support for dictators and other problematic governments has been part of the country's modus operandi.  Free trade has also been a tool for punishment, in the form of increased tariffs, for instance, and the shift back toward the protectionism of the 19th century can be seen in US trade policy today.  

 

The next panelist was Dr. Nicholas Guyatt, a reader in North American history at the University of Cambridge, in the United Kingdom.  Dr. Guyatt's essay was called "The Righteous Cause: John Quincy Adams and the Limits of American Anti-imperialism."  Dr. Guyatt began his presentation by quickly explaining the First Opium War, which began in 1839 when the British government went to war with China to force the Chinese into trade terms that were beneficial to Britain and terrible for China.  The Chinese were badly mismatched against a much more powerful British military, and the war lasted until 1842.  Dr. Guyatt then turned to John Quincy Adams's take on the war.  Adams believed that Britain was right to go to war with China because Britain, in his view, was well within its rights to demand such advantageous trade agreements; China, on the other hand, was violating a world order by challenging Britain.  Adams was a firm believer in a world order ruled by Christians, which China was not. This points to a world view that did not place China on an equal footing with white European Christian societies, thus making China a colonized space rather than a place with rights.  Dr. Guyatt concluded by saying that Adams held the position that the US was exceptional, better than others, and that he used the law to reach his objectives or to justify his positions.  

 

Doctor Guyatt was followed by Dr. Danielle Holtz, a visiting research fellow at Oregon State University.  Dr. Holtz's essay was titled "'An Imaginary Danger': White Fragility and Self-Preservation in the Territories."  Dr. Holtz traced white fragility and self-preservation to the 1840s debate in Congress over Florida's proposal for statehood.  In their proposal, Floridians wanted to be able to dictate who could come to, and who could live in, the state.  In other words, they did not want the presence of African Americans in the state. Dr. Holtz argued that black bodies meant danger to white southerners, and their presence alone was enough to trigger an instinct of self-preservation that was reflected in organizational racism and later in eugenics.  During the presentation of her essay, Dr. Holtz also compared the 1840s Florida debates in Congress with the current president's immigration policies, which seem to have at their core the preservation of whiteness. 

 

The panel closed with a presentation by Dr. Matt Karp, from Princeton University.  Dr. Karp's essay was titled "Free Labor and Democracy: The Early Republican Party Confronts the World."  Dr. Karp began by talking about his last book, on slaveholders and US foreign policy, and then tied that project to the essay he was presenting.  In this essay, he looked at the Republican Party of the 1850s as an anti-slavery party and a threat to the South.  Members of the Republican Party threatened the South's ideological struggle to keep slavery viable against free labor, while turning the population of the North against that all-important institution.  

 

After Dr. Karp's presentation, the Q&A and commentary session for panel two began.  The discussion and questions revolved around the overlap of ideologies in the four essays, ranging from John Quincy Adams's concern with the maintenance of a certain world order to how power played its part in the works presented.  The panelists also discussed how we arrived at the state of "white fragility" we see in America today, and how science and changes in infrastructure contributed to the ideologies discussed in the panel's essays.  

            

The final panel of the day was moderated by Dr. Danielle Holtz and had as its theme “Concepts of the International.”  

 

(from left to right, Drs. Emily Conroy-Krutz, Raymond Haberski Jr., and Penny von Eschen)

 

The first presenter was Dr. Emily Conroy-Krutz, from Michigan State University. Dr. Conroy-Krutz's paper was titled "'For Young People': Protestant Missions, Geography, and American Youth at the End of the 19th Century."  In this essay, Dr. Conroy-Krutz investigated how religious missionaries talked about Africa at the end of the 19th and beginning of the 20th centuries, and how that rhetoric informed US foreign policy.  The essay began in the 1840s, when missionaries saw other peoples of the world as savages.  She used an example in which a missionary speaks of Hindus as heathens, telling the children reading this literature that Hinduism was a horrible religion.  However, she showed that by the 1870s the same missionaries were writing materials for children that presented life in places like Africa as an adventure, but also as a racist ethnography.  Dr. Conroy-Krutz concluded that this "religious intelligence" was transferred into children's literature in order to teach adult ideologies to children and to shape how they saw the world. 

 

The next panelist was Dr. Raymond Haberski Jr., from Indiana University-Purdue University Indianapolis.  Dr. Haberski's essay was titled "Just War as Ideology: The Origins of a Militant Ecumenism."   In this essay, Dr. Haberski shows how religion has been a great part of American identity, and how ideology can be masked by religion.  He pointed out that after the Vietnam War there was a religious crisis that saw the emergence of an ecumenical militarism, in which Catholic and evangelical bishops and pastors became the moral compass for the country. The Catholic bishops, opposed to the Vietnam War, began, especially after the war, to question whether the US was in moral charge of the world.  This debate over the morality of war ended up influencing foreign policy.  That influence, Dr. Haberski concluded, began to appear in foreign policy as "just war," in which moral justifications for wars were sought. 

 

Dr. Penny von Eschen, from the University of Virginia and Cornell University, followed Dr. Haberski.  Her essay was titled "Roads Not Taken: the Delhi Declaration, Nelson Mandela, Vaclav Havel, and the Lost Futures of 1989."  Dr. von Eschen began by sharing that the essay grows out of her research for a new book on the legacies of the Cold War.  One of those legacies was the set of meetings between President George H. W. Bush and Nelson Mandela and Vaclav Havel, and what resulted from them, especially considering the ideological differences between the American president and Mandela and Havel.  Dr. von Eschen sees the breakup of the Soviet Union as a moment of rupture in which the US had to establish itself as the only global power, asserting that no other power from the East was to emerge.  This was accomplished through an ideology that normalized violence, especially among those who surrounded President Bush, such as Dick Cheney, Donald Rumsfeld, and others.  Dr. von Eschen concludes by saying that this ideology was also based on fear of the outside, fear of rogue states, which created and solidified an "us vs. them" ideology.  

 

Dr. Andrew Preston was introduced next.  Dr. Preston is from Clare College, University of Cambridge, and his paper was titled "Fear and Insecurity in US Foreign Policy."  In this essay, Dr. Preston takes on, as the title suggests, fear and insecurity in US foreign policy.  He uses the long-standing crisis between the US and North Korea to show how the US goes into moments of panic over tensions on the Korean peninsula when South Korea, for instance, does not have the same reaction. South Korea, whose capital, Seoul, could be devastated by North Korean artillery at a moment's notice, does not share the fear and panic the US displays.  Dr. Preston pointed out that although this fear is very present in US foreign policy, it is not an ideology but part of American culture, which could be seen in 1941 and can be traced to the present.  He concluded by reminding everyone that although fears always run high on the US side, the situation never really changes.  

 

(from right to left, Drs. Emily Conroy-Krutz, Raymond Haberski Jr., Andrew Preston, and Christopher Nichols)

 

Dr. Christopher Nichols, associate professor of history at Oregon State University, closed the presentations of this panel with an essay titled "Unilateralism as Ideology."  In this essay and presentation, Dr. Nichols explored his views on how ideas and ideologies evolve over time, describing his own model as one premised on a vision of punctuated equilibrium. He asserts that U.S. ideology, from the beginning and through important shifts and pivotal moments, has been defined by a core element of unilateralism. Unilateralism "as ideology," he remarked, was clearly present at the founding, in the Declaration of Independence and in the nation's first "Model Treaty" of 1776, designed to minimize U.S. reliance on foreign nations and to steer clear of foreign entanglements by privileging bilateral and non-binding agreements. The recent turn to unilateralism, Nichols remarked, is thus neither remarkable nor new. Unilateralism is at least evident, if not influential, in virtually all historical debates over international engagement since 1776. This prompts several questions. Why? Nichols made the case that unilateralism has functioned as both ideology and behavior, or tactic, helping a weak nation maneuver in a world of larger powers and competing interests, at least until the late 19th century. But another question lingers, according to Nichols: why does, or did, the U.S. enter into conflicts unilaterally when it could potentially have benefited more from multilateralism?  Dr. Nichols believes the answers lie in the longer historical record, and he asked the audience to help assess them. Unilateralism, as an impulse to place the nation first, has been foundational, linked to Washington, Jefferson, and Monroe from 1789 through 1823, and differentiated at times in terms of the U.S.'s role in the hemisphere versus around the world. In light of these longer patterns in foreign policy thought, Nichols sees unilateralist policy ideas as fundamentally the product of a kind of arrogance set on a bedrock of exceptionalism. After giving several examples of unilateral decisions, from the War of 1812 to World War I (both cases of the U.S. entering conflicts with no formal allies, or only as an "associate power," even at great cost) up to the post-9/11 Iraq War, Dr. Nichols concluded that unilateralism is a cultural ideology revolving around a core calculus of "vital interests," such that foreign policy decisions must always be conceived, evaluated, and implemented on the U.S.'s own terms. 

 

After Dr. Nichols' presentation, the Q&A session for this panel began with a question about fear in foreign policy and whether it was unique to the US or generated by fear of potentially losing power.  All the members of the panel pondered these questions, and there seemed to be agreement that the fear, if not unique, was at least unusual, and likely triggered by the perception that the US's power was declining and the country was losing its overall status in the world.   A question regarding morality in foreign policy prompted the assessment that a rhetoric of morality was commonly attached to explanations of conflicts in which the country was asking its military to kill or to die.  Similarly, the morality of unilateralism seemed to be at stake, too.  The Q&A ended with a discussion among the panelists about the evangelization of children through missionary ethnographies and how it affected future generations, which Dr. Conroy-Krutz and Dr. Nichols discussed in terms of children as "time-shifted adults."  The last bit of discussion was about how "just war" was used as currency in unifying arguments to justify armed conflicts, particularly in creating a kind of theology of conflict in the wake of the attacks of 9/11.

 

After the Q&A, the conference was adjourned until 7 p.m., when keynote speaker James Lindsay presented a talk titled "Donald Trump and Ideology," delivered as the 2018-2019 Governor Tom McCall Memorial Lecture.  This lecture will be covered in a separate post.  

Ideologies and U.S. Foreign Policy International History Conference: Day 2 Coverage

(From Left: Mark Bradley, Jeremi Suri, Matthew Kruer, Nick Guyatt, Jay Sexton, Daniel Immerwahr, Daniel Tichenor, Daniel Bessner, Benjamin Coates, Danielle Holtz, Emily Conroy-Krutz, Penny von Eschen, Ray Haberski Jr., Imaobong Umoren, Christopher McKnight Nichols, Melani McAlister, Matt Karp, David Milne, Michaela Hoenicke-Moore, Marc-William Palen. Photo by Mina Carson)

 

What was the global significance of the Civil War? What exactly is the definition of "freedom"? How are Donald Duck, Indiana Jones, and anti-modernizationists connected? The second day of the Ideologies and U.S. Foreign Policy International History Conference was highlighted by experts' bold answers to these ambitious questions. Two separate panels of historians and political scientists shared their research on issues related to ideologies and U.S. foreign policy. 

The first panel of the day, moderated by Mark Bradley of the University of Chicago, discussed "concepts of the international" in U.S. foreign policy ideology. Daniel Tichenor, from the University of Oregon, spoke about his paper, "Contentious Designs: Ideology and U.S. Immigration Policy." In it, Tichenor related concepts of the international to a discussion of how immigration policy in the U.S. has been framed by four ideological clusters (Classic Restrictionists, Liberal Cosmopolitans, Free Market Expansionists, and Social Justice Restrictionists) that have existed since the foundation of the nation. Tichenor tied the commonalities and tensions within these clusters to an explanation of why the United States has historically made so little progress on immigration policy. For him, this stalemate has been a direct result of the isolating practices of these clusters. When meaningful legislation has passed, such as the 1965 Immigration and Nationality Act and the 1986 Immigration Reform and Control Act, it has always been during rare moments of what he called "strange bedfellow politics." In most instances, though, these clusters tend to operate in isolation from, or in opposition to, one another. As a result, immigration policy in the United States has stagnated except for infrequent moments of compromise and incongruous alliance.

 

(From Left: Daniel Tichenor, Imaobong Umoren, Melani McAlister, Jeremi Suri. Photo by Mina Carson)

 

Imaobong Umoren, from the London School of Economics and Political Science, then spoke about her paper, "Eslanda Robeson, Black Internationalism and U.S. Foreign Policy," an extension of research she had done for her previous book Race Women Internationalists: Activist-Intellectuals and Global Freedom Struggles (University of California Press, 2018). In her talk, Umoren explored concepts of the international through the life of Eslanda Robeson, an African-American activist (and wife of celebrated singer, actor, and international activist Paul Robeson) who framed her activities as an extension of hope. Umoren spoke of her understanding of hope as a way to grasp not only Robeson's activism but also transformational black internationalism more broadly, and as a key element of the "romanticized" vision Robeson held of certain organizations, such as the newly formed United Nations. 

 

Umoren was followed by Melani McAlister from The George Washington University, who spoke about her paper, "'Not Just Churches': American Jews, Joint Church Aid, and the Nigeria-Biafra War." McAlister focused on the international by examining the origins of humanitarian aid during the Nigeria-Biafra War, specifically aid given by American Jews. She explained that this aid was largely shaped by larger political factors, such as the desire to reshape the public narrative and perception of Jews after the end of WWII, as well as American Jews' lack of mainstream status within that narrative. Her analysis of American Jews represents a larger framework for how issues such as religion, money, perception, and memory have shaped humanitarian aid in situations of crisis. 

 

The first panel concluded with Jeremi Suri from the University of Texas at Austin. In his paper, "Freedom and U.S. Foreign Policy," he abstractly dealt with notions of the international by discussing the different ways Americans have defined the internationally ubiquitous word "freedom" throughout history. First, in early America, freedom was understood as Freedom From, such as freedom from the British system; policy at this time was defined by what America was not rather than what it was. The second period took place after the Civil War and was mostly about the Freedom to Produce. Suri suggests that this was largely framed by Americans' insatiable need to expand, most acutely demonstrated by William H. Seward's acquisition of Alaska. The third period took place around the end of WWII, or perhaps ran from the New Deal through the end of WWII and the building up of what some call the "liberal world order," as freedom came to be redefined and redeployed as the Freedom of Hegemony. In this period, according to Suri, Americans felt as though they could only be free if they dominated, a feeling further cemented by the push toward unipolar U.S. world hegemony. Suri concludes that freedom has rhetorically been used as a reason for action, power, dominance, and mobilization. These various uses of freedom have left its definition fluid. 

 

Freedom of Hegemony may not accurately describe the current state, especially since, according to Suri, the failures of the Iraq War shattered that definition. As a result, freedom is currently without a settled definition, despite being perhaps the most important word in American history. Hopefully, future historians will come to define this era. Suri encouraged the audience to consider the strengths and limits of such framing, and they seemed keen to engage and discuss the topic with him. 

 

After the panelists spoke, Bradley opened up the room to questions and comments from the audience. A lively discussion emerged around Suri's definitions of freedom. One audience member felt that Freedom to Produce seemed very similar, if not identical, to Freedom From. Suri acknowledged their similarities but clarified his point by saying "old waters don't go away": old definitions of freedom still exist, but new definitions emerge, and for Suri the "relative weights" of the definitions are what really matter. This prompted another audience member to ask Suri if Freedom of Hegemony was really just Freedom of Economic Plunder, to which Suri chuckled and promptly agreed.

 

The second panel of the day, moderated by McAlister, grappled with "concepts of progress" in U.S. foreign relations, yet further connections between the panels emerged, such as concepts of modernization and space.

The panel began with Jay Sexton from the University of Missouri, who spoke about his paper, "The Other Last Hope: Capital and Immigration in the Civil War Era." Relating to issues of progress, space, and modernization, Sexton set out to answer one ambitious question: what was the global significance of the Civil War? In answering it, Sexton examined the various stages of American economic policy as shifting from heavy consumption to downshifts in spending and then to the chaotic aftermath of these abrupt shifts. To illustrate the point, Sexton used the metaphor of a vacuum, or hoover. The first stage of heavy consumption, in the 1840s, was akin to the hoover sucking in capital and labor at a time of unprecedented growth and an expansion of "space" in the United States. The downshift in economic spending came in 1857, with the hoover abruptly blowing outward, as the United States faced economic pull factors such as instabilities in Europe like the Irish Potato Famine of the late 1840s. Then, after a period of blowing, the hoover was turned back on, but in "turbo mode," in 1846, when the economy was unable to control its growth patterns; in the wake of the colonization of Africa (of which the United States was not a part), Americans began an intensified scramble for capital and labor. Through this sucking, blowing, and sucking of the hoover, Sexton deftly explained the flows of labor and power in the United States in the Civil War era. These changes in immigration and labor flows show how the U.S. was attempting to modernize during these years, as well as the impacts on immigration and the economy as the physical boundaries of the country changed. 

 

(From Left: Melani McAlister, Jay Sexton, Daniel Bessner, Daniel Immerwahr. Photo by Mina Carson)

 

The second speaker was the University of Washington's Daniel Bessner, who talked about his paper, "RAND and the Progressive Origins of the Military-Intellectual Complex." In his talk, Bessner argued that RAND (an influential, military-minded research think tank established in the 1940s) was the apotheosis of air theory, the pervading idea among military leaders and politicians after WWII that the progressive development of airplane technology would eventually lead to the end of warfare. Air theory was developed during a time of heightened notions of progress leading to the betterment of society, and Bessner's work will focus on several RAND elites in the hope of addressing the dearth of histories of defense intellectuals, a topic which he argues highlights a unique form of American state building. 

 

The final speaker of the day, Northwestern University's Daniel Immerwahr, connected the cartoon character Donald Duck to anti-modernization. His talk, "Ten-Cent Ideology: Donald Duck, Comic Books, and the U.S. Challenge to Modernisation," situated the work of Carl Barks, the cartoonist behind the Donald Duck comics, as a challenge to modernization. According to Immerwahr, Donald Duck comics were extremely widely circulated and read in the postwar era, and the generation that grew up reading them grew to become anti-modernizationists. Barks' legacy, according to Immerwahr, seems to have, at least anecdotally, helped usher in the end of modernization. To make this point, Immerwahr used the example of Steven Spielberg and George Lucas, both of whom have publicly admitted to being huge fans of Donald Duck as children. In the Indiana Jones films, their collaborative work, clear anti-modernization perspectives are present. Furthermore, the films borrow themes and storylines found in Barks' Donald Duck comics, all suggesting that the pair's anti-modernist sensibility was influenced by Barks' world of Donald Duck. 

 

McAlister ended the panel with questions and comments from the audience. Many people had questions about Donald Duck and the practice of analyzing comics and youth literature as a lens on foreign policy ideology. Several in the audience wanted to know if the character had been translated into different languages. Immerwahr reported that Donald Duck became even more ingrained in some of those countries' cultures than in the United States. 

 

An interesting set of questions and discussions also emerged about the notion of "Farbackistan," an imagined place that Donald Duck visits on one of his journeys. In Barks' world, Farbackistan was always portrayed as empty and without modern amenities. Bradley keenly commented that perhaps Farbackistan, while a fictional place in the world of Donald Duck, was actually a vision of the kind of place to which the people of RAND and Civil War-era policymakers were also drawn. During the years when RAND was in full swing, Americans went to Vietnam, a pseudo-Farbackistan often depicted in American media as remote and without modern comforts. And one of the two most important policies that defined the global significance of America after the Civil War, according to Sexton, was the Homestead Act, which envisioned the American West as an empty expanse of nothingness, a kind of Farbackistan, that needed to be filled with people. Other questions to the panelists focused on the emerging similarities among the three scholars' work despite their disparate themes. From these questions it seemed that the commonality of the three talks hinged upon notions of place and stories of progress and modernization. 

 

 

(From Left: Jeremi Suri, David Milne, Christopher McKnight Nichols, Danielle Holtz, Daniel Bessner. Photo by Mina Carson)

 

The conference concluded with an intimate circle in which scholars and community members came together to reflect on the conference and the issues at large. Attendees agreed that the presentations and papers amount to a major intervention in the field, and the group talked about key takeaways, new developments in the field, and how best to shape the essays into the strongest possible book. Over the two days it became clear that ideologies are not static, as was once suggested in part by Michael Hunt, though there was useful discussion and debate about tracing long arcs and consistencies in thought, action, and policy versus looking for moments of rupture. Most agreed that the presentations made clear just how much ideas and ideologies matter for understanding the history of U.S. foreign relations and the "U.S. in the world," and, perhaps most importantly, that there is virtue in the idiosyncratic. 

 

The conference was sponsored by the Richard Lounsbery Foundation, the Andrew Carnegie Foundation, Patrick and Vicki Stone, the Oregon State University Center for the Humanities, the Oregon State University School of History, Philosophy, and Religion, and the Oregon State University School of Public Policy.

]]>
Thu, 27 Jun 2019 02:03:42 +0000 https://historynewsnetwork.org/article/172260 https://historynewsnetwork.org/article/172260 0
"Donald Trump and Ideology:" Dr. James Lindsay Delivers the Governor Tom McCall Memorial Lecture

(James M. Lindsay giving his talk. Photo by Mina Carson)

 

In July 2017, then Secretary of State Rex Tillerson and then Secretary of Defense James Mattis invited President Trump to a meeting at the Pentagon in the famously hyper-secure room known as “the tank.” Tillerson and Mattis were concerned that Trump did not understand global politics and wanted to give him a crash course on U.S. world leadership. However, the lesson did not have the results that Tillerson and Mattis expected. Trump reportedly commented that the world order and the American role his advisors described were “exactly what I don’t want.” Famously, Tillerson called Trump a “moron” after the meeting. 

 

The Tom McCall Memorial Lecture was established to foster intellectual exchange. Politicians, journalists, and other esteemed public figures have delivered the address in previous years. Lindsay’s experience uniquely suited him to give this year’s talk. A political scientist by trade, Lindsay has served as director for global issues and multilateral affairs on the staff of the National Security Council and was a consultant to the United States Commission on National Security. Currently Lindsay is the Senior Vice President, Director of Studies, and Maurice R. Greenberg Chair at the Council on Foreign Relations. In addition, Lindsay has written several books and articles. He also hosts several podcasts and writes a weekly blog, The Water’s Edge, which focuses on the politics of U.S. foreign relations and the domestic policies that underpin them. 

 

The lesson that Tillerson and Mattis attempted to teach Trump is grounded in American history and political thought. The world created by the U.S. after World War Two, the so-called “rules-based order” or “liberal world order,” institutionalized American leadership through the country’s role in international organizations and alliances. Those alliances were necessary, according to Lindsay, in order to create a world more conducive to U.S. interests and values such as open markets, democracy, human rights, and the rule of law. While this was a radical approach to world leadership and a new step for the nation, it was based on the belief that this system would help other countries flourish and, in doing so, protect American prosperity. Mattis and Tillerson recognized that this was not a perfect world order, but it brought about an unprecedented period of peace and security and, overall, served America’s interests.

 

In contrast, Lindsay argued that Trump tends to consider American allies as adversaries, or at least as potential competitors or freeloaders. In Trump’s worldview, American allies use the U.S. for protection while trading on terms disadvantageous to the U.S., causing job losses and weakening the American economy. To Trump, the global system is less a network based on interdependence and cooperation and more of a zero-sum hierarchy. His ideological framework holds that international politics is always conditional and transactional, much like his area of expertise: real estate. Interestingly, this is not exactly a position of isolationism, according to Lindsay, but one of transaction and reciprocity, in which, for example, Trump wants allies to spend more, give more, and provide more. In short, it is a worldview premised on the belief that if the U.S. demands more (especially from its allies), it will get it. 

 

Photo by Mina Carson

 

Lindsay challenged the ideas the president uses to support his criticism of American allies. For instance, Trump often threatens to bring home U.S. troops if allies do not increase their NATO military spending. However, if the U.S. withdrew from places like Germany, Japan, and South Korea, it would actually cost U.S. taxpayers more money, because the host countries pay up to 50% of the costs to station those troops. On trade, Lindsay pointed out that the U.S. remains the world’s largest and most vibrant economy. He argued that bilateral trade deficits are not the consequence of bad trade deals but simply reflect the fact that the U.S. spends far more than it saves, has bad tax policies, and lacks certain technology.  

 

Since taking office, Trump has certainly acted in accordance with his worldview. He has demanded that NATO allies pay more, raised tariffs on American allies, begun a trade war with China, withdrawn from the Iranian nuclear deal and the Paris Climate Agreement (among others), and dismissed human rights abuses. Trump promised wins, but in truth, those never came. He claimed to have defeated ISIS, but still has no clear strategy for fighting terrorism. Negotiations with North Korea have not produced the results he promised. The trade deficit is up by 15%, and the re-negotiation of some trade agreements by Trump’s administration ended up benefiting other countries, such as the European Union, while hurting U.S. agriculture. 

 

So why, Lindsay asked, is Trump not “winning” as he promised? Lindsay argues this is in part because Trump lacks strategy, acts and speaks impulsively, considers unpredictability a virtue, and often disagrees with his national security team. But the biggest reason, Lindsay believes, is that Trump fundamentally misperceives the world: the application of raw power has not worked as he expected. Enemies, by and large, have not buckled to threats and bluster. The miscalculation, according to Lindsay, was not about how much pain the U.S. could inflict, but about how much pain its enemies were able to absorb. Allies may have given a bit more in the face of Trump’s unilateralist demands, but many now seem to see concessions as temporary, with new demands imminent or even treaties and agreements torn up on a whim. 

 

Lindsay worries that the largest impact of Trump’s presidency is that American allies increasingly believe they can work without the U.S. He fears a future without American leadership will be less secure, leading to another global battle for hegemony. During the first years of his presidency, Trump has shown the world what it is like to be without American leadership, pulling out of the Paris Accords, the Trans-Pacific Partnership, and the Iran Nuclear Deal. Trump renegotiated NAFTA, cheered for Brexit, maligned NATO, and more. The world does not like it, Lindsay argued forcefully. Some global leaders have come to the White House to seek increased American leadership, only to be told that their help is not needed. The bottom line: America first, America alone.   

 

After Lindsay’s talk, questions and comments from the public were encouraged. The questions touched on nearly all aspects of Lindsay’s talk and demonstrated an engaged and interested audience. Some questions were hard-hitting and some more lighthearted, but in general the audience seemed most keen to hear Lindsay’s opinion on the state of the United States in a post-Trump world. While Lindsay seemed initially reluctant to speculate about the future, even quoting Yogi Berra (“predictions are hard to make, especially ones about the future”), he eventually commented that, on balance, he believes Trump’s legacy will most likely be like that of any president: an ever-fluctuating series of ebbs and flows.         

 

Photo by Mina Carson

]]>
Thu, 27 Jun 2019 02:03:42 +0000 https://historynewsnetwork.org/article/172259 https://historynewsnetwork.org/article/172259 0
Photographs From My First Trip to China Steve Hochstadt is a professor of history emeritus at Illinois College, who blogs for HNN and LAProgressive, and writes about Jewish refugees in Shanghai.

 

I went to China in 1989 with a group of former Jewish residents of Shanghai to celebrate the first open Passover Seder in Shanghai in decades. Our visit, which included the first Israeli journalists allowed into Communist China, was part of the more general liberalization in Chinese politics and economics. Passover began on April 20.

 

I was very happy to be on this trip, because I was just beginning to research the flight of about 18,000 Jews from Nazi-occupied Europe to Shanghai after 1938. My grandparents had been among them, leaving Vienna in 1939. I was able to do eight interviews, see the neighborhoods where my fellow travelers had lived, and even get into my grandparents' former apartment. We met Chinese scholars who were doing research on Jews in Shanghai.

 

I like to wander around new cities with my camera. In Shanghai, and a few days later in Beijing, I saw things I didn't expect: students protesting the lack of freedom and democracy in China. Nobody I asked knew what was going on. I took these photographs, and a few others. I wish I had taken many more. These images were originally on 35mm slides, and were printed by Jim Veenstra of JVee Graphics in Jacksonville.

 

A Shanghai Street Protest, Shanghai, April 22, 1989

 

I was wandering in downtown Shanghai when a wave of young people came out of nowhere and marched past me. I had no idea what they were doing. Chinese streets are often filled with bicycles and pedestrians, but this was different. Protests had begun in Beijing a week earlier, and had spread to other cities already, but we knew nothing about them.

 

 

Tiananmen Press Freedom, Beijing, April 24, 1989

 

After a few days of personal tourism and memories in Shanghai, our group flew to Beijing to be tourists. Tiananmen Square is in the center of the city, right in front of one of the most important tourist sites, the Forbidden City, home of the emperors for 500 years. Students had been gathering there for over a week already. The sign in English advocating “A Press Freedom” was surprising, since there were very few signs in English at that time. The flowers on the monument are in memory of Hu Yaobang.

 

 

Summer Palace, Beijing, April 25, 1989

 

The next day we visited the Summer Palace on the outskirts of the old city, built in the 18th century as a lake retreat for the imperial family. Busses poured into the parking lot with school children and adult tourists. At the entrance, students displayed these signs in English for every visitor to see, requesting support for their movement. Our guides could not or would not comment on them.

 

 

 

Inside Summer Palace, Beijing, April 25, 1989

 

Inside the grounds of the Summer Palace, students were collecting funds for their cause: democracy and freedom in Chinese life. The bowl is filled with Chinese and foreign currency.

 

 

Beijing Street March, Beijing, c. April 25, 1989

 

I'm not sure exactly when or where I took this photograph. We were taken around to various sites in a bus, including some factories in the center of Beijing. We were not able to keep to the planned schedule, because the bus kept getting caught in unexpected traffic. I believe I took this photo out of the window of our bus, when it was stopped. The bicyclists and the photographer in front of the marchers show the public interest in these protests.

 

 

 

Our Chinese trip was supposed to last until April 30, but the last few days of our itinerary were suddenly cancelled, and we were flown to Hong Kong. There was no official public reaction to the protests we saw, but government leaders were arguing in their offices over the proper response. I was struck by the peaceful nature of the protests I had seen and the interest shown by the wider Chinese public. The protests spread to hundreds of Chinese cities in May, and Chinese students poured into Beijing.

 

On May 20, the government declared martial law. Student protesters were characterized as terrorists and counter-revolutionaries under the influence of Americans who wanted to overthrow the Communist Party. Those who had sympathized with the students were ousted from their government positions and thousands of troops were sent to clear Tiananmen Square. Beginning on the night of June 3, troops advanced into the center of the city, firing on protesters. Local residents tried to block military units. On June 4, Tiananmen Square was violently cleared. "Tank Man" made his stand on June 5.

 

All the Communist governments in Eastern Europe were overthrown in 1989. The Soviet Union collapsed in 1991. The Chinese government survived by repressing this protest movement. Since then all discussion of the 1989 protests is forbidden. Western tourists on Tiananmen Square are sometimes asked by local residents what happened there.

 

I wonder what happened to the students pictured in these photos.

]]>
Thu, 27 Jun 2019 02:03:42 +0000 https://historynewsnetwork.org/blog/154219 https://historynewsnetwork.org/blog/154219 0
Roundup Top 10!  

What Naomi Wolf and Cokie Roberts teach us about the need for historians

by Karin Wulf

Without historical training, it’s easy to make big mistakes about the past.

 

Free Speech on Campus Is Doing Just Fine, Thank You

by Lee C. Bollinger

Norms about the First Amendment are evolving—but not in the way President Trump thinks.

 

 

Don't buy your dad the new David McCullough book for Father's Day

by Neil J. Young

McCullough appears to have written the perfect dad book, but its romantic view of history is the book's danger.

 

 

Voter Restrictions Have Deep History in Texas

by Laurie B. Green

Texas’ speedy ratification of the 19th Amendment represents a beacon for women’s political power in the U.S., but a critical assessment of the process it took to win it tells us far more about today’s political atmosphere and cautions us to compare the marketing of voting rights laws with their actual implications.

 

 

What Does It Mean to be "Great" Amidst Global Climate Change

by David Bromwich

How can Robert Frost, Graham Greene, Immanuel Kant, and others help us understand values and climate change?

 

 

How the Central Park Five expose the fundamental injustice in our legal system

by Carl Suddler

The Central Park Five fits a historical pattern of unjust arrests and wrongful convictions of black and Latino young men in the United States.

 

 

The biggest fight facing the U.S. women’s soccer team isn’t on the field

by Lindsay Parks Pieper and Tate Royer

The history of women in sports and the discrimination they have long faced.

 

 

I Needed to Save My Mother’s Memories. I Hacked Her Phone.

by Leslie Berlin

After she died, breaking into her phone was the only way to put together the pieces of her digital life.

 

 

How to Select a Democrat to Beat Trump in 2020

by Walter G. Moss

In a Democratic presidential candidate for 2020, we want someone who possesses the major wisdom virtues, virtues that will help him or her further the common good. In addition, we need someone with a progressive unifying vision.

 

 

Warren Harding Was a Better President Than We Think

by David Harsanyi

An analysis of presidential rankings and a defense of Warren G. Harding.

 

]]>
Thu, 27 Jun 2019 02:03:42 +0000 https://historynewsnetwork.org/article/172245 https://historynewsnetwork.org/article/172245 0
Material History and A Victorian Riddle Retold

 

I spend a lot of time thinking about things. This is not to claim that I am unusually reflective or deep: quite the contrary. I mean this literally. I think a lot about stuff. 

 

I am fascinated by the ways objects and spaces shape us, reflecting and communicating our personalities. And, of course, I am not alone in this fascination. Anyone who has crashed a realtor’s open house to check out the furniture or plans their evening strolls to view their neighbors’ décor through lighted windows, appreciates the pure joy of pretending to know others through their domestic goods. Leather or chintz? Marble tile or oak plank? Curated minimalism or unstudied clutter? These are the choices that reveal us to family, friends, guests, even ourselves, giving clues about how we behave (or hope to) in private. CSI meets HGTV.

 

For me, such musings are both busman’s holiday and occupational hazard. A historian of nineteenth-century American culture, I study the significance ordinary women and men gave to furniture, art, and decoration. I want to understand how they made sense of the world and assume that, like me, they did so through seemingly mundane choices: Responsible adulthood embraced in a sturdy sofa; learnedness telegraphed, if not always realized, through a collection of books; love of family chronicled in elegantly framed photos grouped throughout the house. 

 

After all, not everyone in the past wrote a political treatise or crafted a memoir, but most struggled to make some type of home for themselves, no matter how humble or constrained their choices. Less concerned about personality than character, nineteenth-century Americans believed the right domestic goods were both cause and effect, imparting morals as much as revealing the morality of their owners. For them, tastefully hung curtains indicated an appreciation of domestic privacy, but also created a private realm in which such appreciation and its respectable associations might thrive. For almost twenty years, I have lost myself in this Victorian chicken-or-egg riddle: Which came first, the furniture or the self? 

 

My practice of home décor voyeurism, recreational and academic, was tested when my 88-year-old mother walked out of her home of fifty years without a backwards glance. She had told me for years that she wanted to die at home, but a new type of anxiety was supplanting the comfort of domestic familiarity. A night of confusion, punctuated by fits of packing and unpacking a suitcase for a routine doctor’s visit, convinced us both that her increasing forgetfulness was something more than the quirky charm of old age. Within a week, she moved from New York to Massachusetts, to an assisted living residence three minutes from my home. She arrived with a suitcase of summer clothes and a collection of family photos. No riddle here: My mother came first; the stuff would come later.

 

Dementia evicted my mother from her home and then softened the blow by wrapping her thoughts in cotton gauze. As I emptied her old home and set up the new one, I marveled at the unpredictability of her memory. She did not recognize my father’s favorite chair twenty years after his death, but knew that she had picture hooks in a drawer 170 miles away in a kitchen that was no longer hers. 

 

Visiting my mother in her new apartment with its temporary and borrowed furnishings, I wondered who she was without her things. This was not a moral question as it was for the long-dead people I study, but an existential one with a healthy dose of magical thinking. I told myself that there must be some combination of furniture, art, and tchotchkes able to keep my mother with me. Could I decorate her into staying the person I knew? With this hope, I would make her bed with the quilt of upholstery fabric her father had brought home from work, long a fixture in her bedroom. Next I would give a prominent place to the Tiffany clock presented to my father on his retirement and cover her walls with the lithographs, drawings, and paintings collected from the earliest days of my parents’ marriage.

 

For several months, I brought my mother more of her own things – a sustained and loving act of re-gifting. First her living room furniture, then the silver letterbox with the angels, then the entryway mirror. I replaced the borrowed lamps with ones from her New York bedroom. When she started losing weight, I showed up with a favorite candy dish and refilled it almost daily. She greeted each addition like a welcome but unexpected guest, a happy surprise and an opportunity to reflect on when they last met. As in dreams, every guest was my mother, walking in and taking a seat beside herself, peopling her own memory. Looking around, she would announce that her new apartment “feels like my home.”

 

But this was only a feeling – no more than a passing tingle of recognition on the back of the neck. Where exactly do we know each other from? Among her own things, she would ask when we needed to pack to go home. I answered, “This is home. Look at how nice your art looks on the walls. The clock is keeping good time.” Every object a metaphor: a time capsule for my mother to discover, an anchor to steady her in place, a constant silent prayer: This is home because your things are here. You are still you, because your things will remind you of what you loved best: beauty, order, family, me. 

 

My job is to know what my mother mercifully does not understand: Her home, the apartment of her marriage and my own childhood, is empty. I emptied it. Even as I assembled my mother in her new home, I dismantled her in the old one. No matter how many of her things are with her, still more are gone – passed on to family, sold, donated, thrown away by my hand: her memories in material form, scattered and tended to by others. They are like seeds blown on the wind to take root in new soil or the whispered words of a game of telephone transforming as they pass down the line: ready, beautiful metaphors for losing parts of my mother. 

 

Six months after her move, my mother came to dinner and didn’t recognize her things newly placed in my house. The good steak knives beside my everyday dishes, the Steuben bowl now filled with old Scrabble tiles, the hatpin holder with a woman’s face set on the mantle… I recycled them into my own. To be fair, they looked different now, less elegant and more playful. Sitting in my living room, my mother told me that I have such a warm home. And the next day on the phone, “I can feel the warmth of your house here in my apartment.” For the moment, we had outsmarted the riddle; each of us living with her things and concentrating on what comes next.

]]>
Thu, 27 Jun 2019 02:03:42 +0000 https://historynewsnetwork.org/article/172189 https://historynewsnetwork.org/article/172189 0
Women Have Fought to Legalize Reproductive Rights for Nearly Two Centuries

Image from "Marriage and Its Discontents at the Turn of the Century"

 

 

Mississippi state representative Douglas McLeod was arrested last week for punching his wife when she didn’t undress fast enough for sex. When deputies arrived, he answered the door visibly intoxicated, with a drink in his hand, and yelled, “Are you kidding me?” Police found blood all over the bed and floor, and had to reassure his frightened wife, who stood shaking at the top of the stairs, that they would protect her. In January, McLeod had co-sponsored a bill making abortions in Mississippi illegal after detection of a fetal “heartbeat” beginning at six weeks, before most women even know they are pregnant.  

 

In both of these scenarios, one thing is clear – Douglas McLeod believes he has such a right to his wife’s body (and other women’s bodies) that he is willing to violently and forcefully impose it. 

 

Even more clear is the fact that for nearly two centuries, women’s rights reformers have fought to make reproductive rights legal, precisely because of men like Douglas McLeod. 

 

Women’s rights reformers beginning in the 1840s openly deplored women’s subjugation before the law – which, of course, was created and deployed by men. Temperance advocates in the nineteenth century pointed especially to alcohol as a major cause of the abuse and poverty of helpless women and children. As one newspaper editorialized, “Many… men believe that their wife is as much their property as their dog and horse and when their brain is on fire with alcohol, they are more disposed to beat and abuse their wives than their animals…. Every day, somewhere in the land, a wife and mother – yes, hundreds of them – are beaten, grossly maltreated and murdered by the accursed liquor traffic, and yet we have men who think women should quietly submit to such treatment without complaint.”(1)

 

But of course women were never silent about their lowly status in a patriarchal America. As one of the first practicing woman lawyers in the United States, Catharine Waugh McCulloch argued in the 1880s that “Women should be joint guardians with their husbands of their children. They should have an equal share in family property. They should be paid equally for equal work. Every school and profession should be open to them. Divorce and inheritance should be equal. Laws should protect them from man’s greed…and man’s lust…”(2)

 

Indeed, the idea of “man’s lust” and forced maternity was particularly abhorrent to these activists. In the nineteenth century, most women were not publicly for the legalization of birth control and abortion but there were complex reasons for this rejection. In a world where women had little control over the actions of men, reformers rightly noted that legalizing contraceptives and abortion would simply allow men to abuse and rape women with impunity and avoid the inconvenient problem of dependent children. 

 

Instead, many suffragists and activists embraced an idea called voluntary motherhood. The theoretical foundations of this philosophy would eventually become part of the early birth control movement (and later the fight for legal abortion in the twentieth century). Simply put, voluntary motherhood was the notion that women could freely reject their husbands’ unwanted sexual advances and choose when they wanted to have children. In an era when marital rape law did not exist, this was a powerful way for women to assert some autonomy over their own bodies. As scholar Linda Gordon has written, it is thus unsurprising that women – even the most radical of activists – did not support abortion or contraception because “legal, efficient birth control would have increased men’s freedom to indulge in extramarital sex without greatly increasing women’s freedom to do so even had they wanted to.”(3) But the ideas underpinning voluntary motherhood promised to return a measure of power to women. 

 

Of course, the nineteenth-century criminalization of abortion and birth control in state legislatures was openly about restricting women’s freedom altogether. As Dr. Horatio Storer wrote, “the true wife” does not seek “undue power in public life…undue control in domestic affairs,… or privileges not her own.”(4) Beginning in the 1860s, under pressure from physicians like Storer and the newly organized American Medical Association (which wanted to professionalize and control the discipline of medicine), every state in the union began passing laws criminalizing abortion and birth control. Physicians saw their role as the safeguard not only of Americans’ physical health, but of the very health of the republic. They, along with other male leaders, viewed the emergent women’s suffrage movement, rising immigration, slave emancipation, and other social changes with alarm. Worried that only white, middle-class women were seeking abortion, doctors and lawmakers sought to criminalize contraceptives and abortion in order to ensure the “right” kind of women were birthing the “right” kind of babies. 

 

The medical campaigns to ban abortion were then bolstered by the federal government’s passage of the 1873 Comstock Act, which classified birth control, abortion, and contraceptive information as obscenity under the law. Penalties for violating the Act were steep and included prison time. Abortion and birth control then remained illegal for essentially the next century, until the Supreme Court finally ruled in two cases – Griswold v. Connecticut (1965) and Roe v. Wade (1973) – that both were matters to be considered under the doctrine of privacy between patient and physician. The efforts of the second-wave feminist movement also transformed the older idea of voluntary motherhood – that women did not have to have sex or be pregnant if they did not choose to – into the more radical notion that women could, and should, enjoy sex without fear of becoming pregnant.  

 

Anti-abortion activists today thus know that they cannot openly advocate for broadly rescinding women’s human and legal rights. Instead, in order to achieve their agenda, they cannily focus on the rights of the unborn or “fetal personhood,” and on the false pretense of “protecting” women’s health. But it is crystal clear that the recent spate of laws criminalizing abortion in states like Georgia, Ohio, Alabama, and Douglas McLeod’s home state of Mississippi has nothing to do with babies or health. Instead these laws flagrantly reproduce the long history of men’s legal control over women. It is not a coincidence that women make up less than 14% of Mississippi’s legislative body – the lowest proportion in the country. McLeod’s behavior and arrest may have taken place in May of 2019, but his actions – both at home and in the legislature – look no different from those of his historical male counterparts. The difference is that, unlike their predecessors, neither he nor his colleagues are willing to admit it. 

 

(1) The Woman’s Standard (Waterloo, IA), Volume 3, Issue 1 (1888), p. 2. 

(2) “The Bible on Women Voting,” undated pamphlet, Catharine Waugh McCulloch Papers, Dillon Collection, Schlesinger Library. 

(3) Linda Gordon, The Moral Property of Women: A History of Birth Control Politics in America (University of Illinois Press, 2002).

(4) Horatio Robinson Storer, Why Not? A Book for Every Woman (Boston: Lee and Shepard, 1868). Quoted in Leslie Reagan, When Abortion Was a Crime. 

]]>
Thu, 27 Jun 2019 02:03:42 +0000 https://historynewsnetwork.org/article/172181 https://historynewsnetwork.org/article/172181 0
The President is Disrupting the U.S. Economy

Donald Trump has been president for only two of the ten years of America’s economic expansion since the Great Recession, yet he eagerly takes full credit for the nation’s advancement. It has been easy for him to boast because he had the good fortune to occupy the White House during a mature stage of the recovery. The president’s fans attribute booming markets and low unemployment to his leadership even though Trump’s words and actions at the White House have often broken the economy’s momentum. In recent months, especially, Trump’s interference in business affairs has put U.S. and global progress at risk. 

 

An article that appeared in the New York Times in May 2019 may offer some clues for understanding why the American president has been less than skillful in managing the country’s financial affairs. Tax records revealed by the Times show that from 1985 to 1994 Donald Trump lost more than a billion dollars on failed business deals. In some of those years Trump sustained the biggest losses of any American businessman. The Times could not judge Trump’s gains and losses for later years because Trump, unlike all U.S. presidents in recent decades, refuses to release his tax information. Nevertheless, details provided by the Times are relevant to a promise Trump made in 2016. Candidate Trump advertised himself as an extraordinarily successful developer and investor who would do for the country what he had done for himself. Evidence provided by the Times suggests that promise does not inspire confidence. 

 

Trump’s intrusions in economic affairs turned aggressive and clumsy in late 2018. An early sign of the shift came when he demanded $5.7 billion from Congress for construction of a border wall. House Democrats, fresh off impressive election gains, stated clearly that they would not fund the wall. The president reacted angrily, closing sections of the federal government. Approximately 800,000 employees took furloughs or worked without pay. Millions of Americans were not able to use important government services. When the lengthy shutdown surpassed all previous records, Trump capitulated. The Congressional Budget Office estimated that Trump’s counterproductive intervention cost the U.S. economy $11 billion. 

 

President Trump’s efforts to engage the United States in trade wars produced more costly problems. Trump referred to himself as “Tariff Man,” threatening big levies on Chinese imports. Talk of a trade war spooked the stock markets late in 2018. Investors worried that China would retaliate, inflating consumer prices and risking a global slowdown. Then Trump appeared to back away from confrontations. The president aided a market recovery by tweeting, “Deal is moving along very well . . .  Big progress being made!”

 

Donald Trump claimed trade wars are “easy to win,” but the market chaos of recent months suggested they are not. When trade talks deteriorated into threats and counter-threats, counter-punching intensified. In May 2019, China pulled away from negotiations, accusing the Americans of demanding unacceptable changes. Trump responded with demands for new tariffs on Chinese goods. President Trump also threatened to raise tariffs against the Europeans, Canadians, Japanese, Mexicans, and others. U.S. and global markets lost four trillion dollars during the battles over trade in May 2019. Wall Street’s decline wiped out the value of all gains American businesses and citizens realized from the huge tax cut of December 2017. 

 

President Trump’s confident language about the effectiveness of tariffs conceals their cost. Tariffs create a tax that U.S. businesses and the American people need to pay in one form or another. Tariffs raise the cost of consumer goods. They hurt American farmers and manufacturers through lost sales abroad. They harm the economies of China and other nations, too (giving the U.S. negotiators leverage when demanding fairer trade practices), but the financial hits created by trade wars produce far greater monetary losses than the value of trade concessions that can realistically be achieved currently. 

 

Agreements between trading partners are best secured through carefully studied and well-informed negotiations that consider both the short and long-term costs of conflict. The present “war” is creating turmoil in global markets. It is breaking up manufacturing chains, in which parts that go into automobiles and other products are fabricated in diverse countries. Many economists warn that the move toward protectionism, championed especially by President Trump, can precipitate a global recession.

 

An approach to trade like Trump’s had unfortunate effects early in the Great Depression. In 1930 the U.S. Congress passed the protectionist Smoot-Hawley Tariff Act, which placed tariffs on 20,000 imported goods. America’s trading partners responded with their own levies. Retaliatory actions in the early 1930s put a damper on world trade and intensified the Depression. After World War II, U.S. leaders acted on lessons learned. They promoted tariff reduction and “free trade.” Their strategy proved enormously successful. Integrated trade gave nations a stake in each other’s economic development. The new order fostered seventy years of global peace and prosperity. Now, thanks to a president who acts as though he is unaware of this history, the United States is promoting failed policies of the past. 

 

It is not clear how the current mess will be cleaned up. Perhaps the Chinese will bend under pressure. Maybe President Trump will agree to some face-saving measures, accepting cosmetic adjustments in trade policy and then declaring a victory. Perhaps Trump will remain inflexible in his demands and drag global markets down to a more dangerous level. Markets may recover, as they did after previous disruptions provoked by the president’s tweets and speeches. Stock markets gained recently when leaders at the Federal Reserve hinted of future rate cuts. It is clear, nevertheless, that battles over tariffs have already created substantial damage.

 

Pundits have been too generous in their commentaries on the president’s trade wars. Even journalists who question Trump’s actions frequently soften their critiques by saying the president’s tactics may be justified. American corporations find it difficult to do business in China, they note, and the Chinese often steal intellectual property from U.S. corporations. Pundits also speculate that short-term pain from tariff battles might be acceptable if China and nations accept more equitable trade terms. Some journalists are reluctant to deliver sharp public criticism of Trump’s policy. They do not want to undermine U.S. negotiators while trade talks are underway. 

 

American businesses need assistance in trade negotiations, but it is useful to recall that the expansion of global trade fostered an enormous business boom in the United States. For seven decades following World War II many economists and political leaders believed that tariff wars represented bad policy. Rejecting old-fashioned economic nationalism, they promoted freer trade. Their wisdom, drawn from a century of experience with wars, peace and prosperity, did not suddenly become irrelevant after Donald Trump’s inauguration. Unfortunately, when President Trump championed trade wars, many Americans, including most leaders in the Republican Party, stood silent or attempted to justify the radical policy shifts. 

 

Since the time Donald Trump was a young real estate developer, he has demonstrated little interest in adjusting his beliefs in the light of new evidence. Back in the 1980s, when Japan looked like America’s number one economic competitor, Donald Trump called for economic nationalism, much as he does today. “America is being ripped off” by unfair Japanese trade practices, Trump protested in the Eighties. He recommended strong tariffs on Japanese imports. If U.S. leaders had followed Donald Trump’s advice in the Eighties, they would have limited decades of fruitful trade relations between the two countries.

 

America’s and the world’s current difficulties with trade policy are related, above all, to a single individual’s fundamental misunderstanding of how tariffs work. Anita Kumar, Politico’s White House Correspondent and Associate Editor, identified Trump’s mistaken impressions in an article published May 31, 2019. She wrote, “Trump has said that he thinks tariffs are paid by the U.S.’s trading partners but economists say that Americans are actually paying for them.” Kumar is correct: Americans are, indeed, paying for that tax on imports. This observation about Trump’s misunderstanding is not just the judgment of one journalist. Many commentators have remarked on the president’s confusion regarding who pays for tariffs and how various trading partners suffer from them. 

 

The United States’ economy proved dynamic in the decade since the Great Recession thanks in large part to the dedication and hard work of enterprising Americans. But in recent months the American people’s impressive achievements have been undermined by the president’s clumsy interventions. It is high time that leaders in Washington acknowledge the risks associated with the president’s trade wars and demand a more effective policy course. 

]]>
Thu, 27 Jun 2019 02:03:42 +0000 https://historynewsnetwork.org/article/172182 https://historynewsnetwork.org/article/172182 0
Political Corruption Underwrites America’s Gun-Control Nightmare Reprinted from The Hidden History of Guns and the Second Amendment with the permission of Berrett-Koehler Publishers. Copyright © 2019 by Thom Hartmann. 

At bottom, the Court’s opinion is thus a rejection of the common sense of the American people, who have recognized a need to prevent corporations from undermining self-government since the founding, and who have fought against the distinctive corrupting potential of corporate electioneering since the days of Theodore Roosevelt. It is a strange time to repudiate that common sense. While American democracy is imperfect, few outside the majority of this court would have thought its flaws included a dearth of corporate money in politics. 

—Justice John Paul Stevens’s dissent in Citizens United 

 

Parkland shooting survivor and activist David Hogg once asked, when Sen. John McCain, R-Ariz., was still alive, why McCain had taken more than $7 million from the NRA (not to mention other millions that they and other “gun rights” groups spent supporting him indirectly). 

 

McCain’s answer, no doubt, would be the standard politician-speak these days: “They support me because they like my positions; I don’t change my positions just to get their money.” It’s essentially what Sen. Marco Rubio, R-Fla., told the Parkland kids when he was confronted with a similar question. 

 

And it’s a nonsense answer, as everybody knows. 

 

America has had an on-again, off-again relationship with political corruption that goes all the way back to the early years of this republic. Perhaps the highest level of corruption, outside of today, happened in the late 1800s, the tail end of the Gilded Age. (“Gilded,” of course, refers to “gold coated or gold colored,” an era that Donald Trump has tried so hard to bring back that he even replaced the curtains in the Oval Office with gold ones.) 

 

One of the iconic stories from that era was that of William Clark, who died in 1925 with a net worth in excess, in today’s money, of $4 billion. He was one of the richest men of his day, perhaps second only to John D. Rockefeller. And in 1899, Clark’s story helped propel an era of political cleanup that reached its zenith with the presidency of progressive Republicans (that species no longer exists) Teddy Roosevelt and William Howard Taft. 

 

Clark’s scandal even led to the passage of the 17th Amendment, which let the people of the various states decide who would be their U.S. senators, instead of the state legislatures deciding, which was the case from 1789 until 1913, when that amendment was ratified. 

 

By 1899, Clark owned pretty much every legislator of any consequence in Montana, as well as all but one newspaper in the state. Controlling both the news and the politicians, he figured they’d easily elect him to be the next U.S. senator from Montana. Congress later learned that he not only owned the legislators but in all probability stood outside the statehouse with a pocket full of $1,000 bills (literally: they weren’t taken out of circulation until 1969 by Richard Nixon), each in a plain white envelope to hand out to every member who’d voted for him.

 

When word reached Washington, DC, about the envelopes and the cash, the US Senate began an investigation into Clark, who told friends and aides, “I never bought a man who wasn’t for sale.” 

 

Mark Twain wrote of Clark, “He is as rotten a human being as can be found anywhere under the flag; he is a shame to the American nation, and no one has helped to send him to the Senate who did not know that his proper place was the penitentiary, with a chain and ball on his legs.” 

 

State Senator Fred Whiteside, who owned the only non-Clark-owned newspaper in the state, the Kalispell Bee, led the big exposé of Clark’s bribery. The rest of the Montana senators, however, ignored Whiteside and took Clark’s money.

 

The US Senate launched an investigation in 1899 and, sure enough, found out about the envelopes and numerous other bribes and emoluments offered to state legislators, and refused to seat him. The next year, Montana’s corrupt governor appointed Clark to the Senate, and he went on to serve a full six-year term. 

 

Clark’s story went national and became a rallying cry for clean-government advocates. In 1912, President Taft, after doubling the number of corporations being broken up by the Sherman Anti-Trust Act over what Roosevelt had done, championed the 17th Amendment (direct election of senators, something some Republicans today want to repeal) to prevent the kind of corruption that Clark represented from happening again. 

 

Meanwhile, in Montana, while the state legislature was fighting reforms, the citizens put a measure on the state ballot of 1912 that would outlaw corporations from giving any money of any sort to politicians. That same year, Texas and other states passed similar legislation (the corrupt House majority leader Tom DeLay, R-Texas, was later prosecuted under that law). 

 

Montana’s anticorruption law, along with those of numerous other states, persisted until 2010, when Justice Anthony Kennedy, writing for the five-vote majority on the U.S. Supreme Court, declared in the Citizens United decision that in examining more than 100,000 pages of legal opinions, he could not find “. . . any direct examples of votes being exchanged for . . . expenditures. This confirms Buckley’s reasoning that independent expenditures do not lead to, or create the appearance of, quid pro quo corruption [Buckley is the 1976 decision that money equals free speech]. In fact, there is only scant evidence that independent expenditures even ingratiate. Ingratiation and access, in any event, are not corruption.”

 

The US Supreme Court, following on the 1976 Buckley case that grew straight out of the Powell Memo and was written in part by Justice Lewis Powell, turned the definitions of corruption upside down.

 

Two years later, in American Tradition Partnership, Inc. v. Bullock (2012), the Court overturned the Montana law, essentially saying that money doesn’t corrupt politicians, particularly if that money comes from corporations that can “inform” us about current issues (the basis of the Citizens United decision) or billionaires (who, apparently, the right-wingers on the Court believe, obviously know what’s best for the rest of us). 

 

Thus, the reason the NRA can buy and own senators like McCain and Rubio (and Thom Tillis, R-N.C./$4 million; Cory Gardner, R-Colo./$3.8 million; Joni Ernst, R-Iowa/$3 million; and Rob Portman, R-Ohio/$3 million, who all presumably took money much faster and much more recently than even McCain) is that the Supreme Court has repeatedly said that corporate and billionaire money never corrupts politicians. (The dissent in the Citizens United case is a must-read: it’s truly mind-boggling and demonstrates beyond refutation how corrupted the right-wingers on the Court, particularly Scalia and Thomas—who regularly attended events put on by the Kochs—were by billionaire and corporate money.)

 

So here America stands. The Supreme Court has ruled, essentially, that the NRA can own all the politicians they want and can dump unlimited amounts of poison into this nation’s political bloodstream. 

 

Meanwhile, angry white men who want to commit mass murder are free to buy and carry all the weaponry they can afford. 

]]>
Thu, 27 Jun 2019 02:03:42 +0000 https://historynewsnetwork.org/article/172188 https://historynewsnetwork.org/article/172188 0
What We Can't Forget As We Remember Anne Frank

 

On grocery store checkout lines around the country this month, amidst the candy bars and zany tabloids, shoppers will find a glossy 96-page magazine called “Anne Frank: Her Life and Legacy.” Unfortunately, it fails to explain one of the most important but little-known aspects of the Anne Frank story—how her life could have been saved by President Franklin D. Roosevelt. 

 

The new Anne Frank publication, compiled by the staff of Life magazine, is filled with photographs of Anne and her family, and a breezy overview of her childhood, tragically cut short by the Nazi Holocaust. Today, June 9, would have been her 90th birthday. 

 

Little Anne, “thin as a wisp, curious, mercurial, and a know-it-all,” at first enjoyed an idyllic life, “but outside the family circle, the world was changing,” Life recounts. Economic and social crises in Germany propelled Adolf Hitler to power in 1933, and Anne’s father, Otto, quickly moved the family to neighboring Holland for safety.

 

When World War II erupted in 1939, Life reports, Otto “frantically searched for ways to get his family away from the growing conflict” and “he hoped to emigrate to the United States.”

 

That’s all. No accounting of what happened when the Franks sought to emigrate to the United States. No explanation as to why the Roosevelt administration refused to open America’s doors to Anne Frank or countless other Jewish children. 

 

Just the one vague allusion to Otto’s “hope,” and then quickly back to the famous story of Anne hiding in the Amsterdam attic and writing entries in her diary.

 

Here’s the part of the story that Life left out.

 

Laws enacted by the U.S. Congress in the 1920s created a quota system to severely restrict immigration. Roosevelt wrote at the time that immigration should be sharply restricted for “a good many years to come” so there would be time to “digest” those who had already been admitted. He argued that future immigration should be limited to those who had “blood of the right sort”—they were the ones who could be most quickly and easily assimilated, he contended.  

 

As president (beginning in 1933), Roosevelt took a harsh immigration system and made it much worse. His administration went above and beyond the existing law, to ensure that even those meager quota allotments were almost always under-filled. American consular officials abroad made sure to “postpone and postpone and postpone the granting of the visas” to refugees, as one senior U.S. official put it in a memo to his colleagues. They piled on extra requirements and created a bureaucratic maze to keep refugees like the Franks far from America’s shores.

 

The quotas for immigrants from Germany and (later) Axis-occupied countries were filled in only one of Roosevelt’s 12 years in office. In most of those years, the quotas were less than 25% full. A total of 190,000 quota places that could have saved lives were never used at all.

 

Otto Frank, Anne's father, filled out the small mountain of required application forms and obtained the necessary supporting affidavits from the Franks’ relatives in Massachusetts. But that was not enough for those who zealously guarded America's gates against refugees. 

 

Anne’s mother, Edith, wrote to a friend in 1939: "I believe that all Germany's Jews are looking around the world, but can find nowhere to go."

 

That same year, refugee advocates in Congress introduced the Wagner-Rogers bill, which would have admitted 20,000 refugee children from Germany outside the quota system. Anne Frank and her sister Margot were German citizens, so they could have been among those children.

 

Supporters of the bill assembled a broad, ecumenical coalition--including His Eminence George Cardinal Mundelein, one of the country’s most important Catholic leaders; New York City Mayor Fiorello La Guardia; Hollywood celebrities such as Henry Fonda and Helen Hayes; and 1936 Republican presidential nominee Alf Landon and his running mate, Frank Knox. Former First Lady Grace Coolidge announced that she and her neighbors in Northampton, Massachusetts, would personally care for twenty-five of the children.

 

Even though there was no danger that the children would take jobs away from American citizens, anti-immigration activists lobbied hard against the Wagner-Rogers bill. President Roosevelt’s cousin, Laura Delano Houghteling, who was the wife of the U.S. Commissioner of Immigration, articulated the sentiment of many opponents when she remarked at a dinner party that “20,000 charming children would all too soon grow up into 20,000 ugly adults.” FDR himself refused to support the bill. By the spring of 1939, Wagner-Rogers was dead.

 

But Wagner-Rogers was not the only way to help Jewish refugees. Just a few months earlier, in the wake of Germany’s Kristallnacht pogrom, the governor and legislative assembly of the U.S. Virgin Islands offered to open their territory to Jews fleeing Hitler. Treasury Secretary Henry Morgenthau, Jr. endorsed the proposal. 

 

That one tiny gesture by President Roosevelt—accepting the Virgin Islands leaders’ offer—could have saved a significant number of Jews. But FDR rejected the plan. He and his aides feared that refugees would be able to use the islands as a jumping-off point to enter the United States itself.

 

At a press conference on June 5, 1940, the president warned of the “horrible” danger that Jewish refugees coming to America might actually serve the Nazis. They might begin “spying under compulsion” for Hitler, he said, out of fear that if they refused, their elderly relatives back in Europe “might be taken out and shot.” 

 

That's right: Anne Frank, Nazi spy.

 

In fact, not a single instance was ever discovered of a Jewish refugee entering the United States and spying for the Nazis. But President Roosevelt did not shy away from using such fear-mongering in order to justify slamming shut America’s doors.

 

The following year, the administration officially decreed that no refugee with close relatives in Europe could come to the United States.

 

Anne and Margot Frank, and countless other German Jewish refugee children, were kept out because they were considered undesirable. They didn’t have what FDR once called “blood of the right sort.” One year after the defeat of Wagner-Rogers, Roosevelt opened America’s doors to British children to keep them safe from the German blitz. Those were the kind of foreigners he preferred.

 

Life magazine’s tribute to Anne Frank is touching. The photos fill our hearts with pity. But by failing to acknowledge what the Roosevelt administration did to keep the Jews out, Life’s version of history misses a point that future generations need to remember: pity is not enough to help people who are trying to escape genocide.

]]>
Thu, 27 Jun 2019 02:03:42 +0000 https://historynewsnetwork.org/article/172187 https://historynewsnetwork.org/article/172187 0
What the Feud and Reconciliation between John Adams and Thomas Jefferson Teaches Us About Civility

 

Donald Trump did not invent the art of the political insult, but he has driven vitriolic public discourse and incivility to a new low unmatched by other presidents. Still, in a tainted tradition that has permeated our history, other presidents have not been immune to dishing out acerbic insults against one another.

 

John Quincy Adams was livid that Harvard University planned to award President Andrew Jackson with an honorary degree. He wrote in his diary that Jackson was “a barbarian who could not write a sentence of grammar and hardly could spell his own name.”

 

Franklin Pierce was not as impressed with Abraham Lincoln as history has been, declaring the day after Lincoln issued the Emancipation Proclamation that the president had “limited ability and narrow intelligence.” 

 

The list of spicy presidential insults goes on and on. While such statements are often laugh-aloud funny, they are also shocking and sobering. How can these men who have reached the pinnacle of political power be so crude and demeaning? We can learn a valuable lesson from the friendship and feud between John Adams and Thomas Jefferson, and their ultimate reconciliation.

 

In 1775, the 32-year-old Virginia born-and-bred Jefferson traveled from his mountain-top Monticello mansion to the bustling city of Philadelphia to serve as a delegate to the Second Continental Congress.

 

Sometime in June that year after Jefferson arrived in the City of Brotherly Love, he met for the first time one of the most prominent and outspoken leaders of the resistance to British domination – John Adams. The Massachusetts attorney was the soft-spoken Jefferson’s senior by seven years. But neither their opposite personalities, age differences, or geographical distance separating their homes stood in the way of the start of a remarkable relationship that would span more than a half-century. 

 

They forged a unique and warm partnership, both serving on the committee to draft a declaration of independence from British rule. According to Adams, Jefferson had “the reputation of a masterly pen,” and was therefore tasked with using his writing skills to draft the document. Jefferson was impressed with how Adams so powerfully defended the draft of the document on the floor of the congress, even though he thought Adams was “not graceful, not elegant, not always fluent in his public addresses.”

 

In the 1780s, they found themselves thrown together once again as diplomats in Europe representing the newly minted United States. These collaborators and their families were friends.

 

But by 1796, their friendship was obliterated by the rise of political parties with starkly different visions of the new American experiment. With his election that year as the nation’s second president, the Federalist Adams found himself saddled with Jefferson as his vice president representing the Democratic-Republican Party. Tensions were high between the two men. 

 

Just three months after their inauguration as the embryonic nation’s top two elected officials, Jefferson privately groused to a French diplomat that President Adams was “distrustful, obstinate, excessively vain, and takes no counsel from anyone.” Weeks later, Adams spewed out his frustration, writing in a private letter that his vice president had “a mind soured, yet seeking for popularity, and eaten to a honeycomb with ambition, yet weak, confused, uninformed, and ignorant.” 

 

When Jefferson ousted Adams from the presidency in the election of 1800, Adams was forced to pack his bags and vacate the newly constructed Executive Mansion after just a few months. At four o’clock in the morning on March 4, 1801, Jefferson’s inauguration day, the sullen Adams slipped out of the Executive Mansion without fanfare, boarded a public stage, and left Washington. The streets were quiet as the president left the capital under the cover of darkness on his journey back home. He wanted nothing to do with the man who had publicly humiliated him by denying him a second term as president, nor did he want to witness Jefferson’s inauguration and moment of triumph.

 

For the next dozen years these two giants of the American revolution largely avoided one another, still nursing wounds inflicted by the poisonous partisan politics of their era. But on July 15, 1813, Adams made an overture, reaching out to his former friend and foe, writing that “you and I ought not to die until we have explained ourselves to each other.” That letter broke the dam and began a series of remarkable letters between the two men that lasted for more than a dozen years, until death claimed them both on July 4, 1826 – the 50th anniversary of the Declaration of Independence.

 

Not all such presidential feuds have resulted in such heart-warming reconciliations. But the story of Adams and Jefferson serves as a model of what can happen when respect replaces rancor, friendship triumphs over political dogma, and reconciliation is allowed to emerge from the ashes of a fractured relationship.

 

Adams and Jefferson ultimately listened to one another, explaining themselves. Listening to someone who thinks differently than we do can feel threatening and scary – almost as if by listening to their thoughts we might become infected by their opinions. So we hunker down and lob snarky tweets to attack the humanity and patriotism of others, foolishly hoping such tactics will convince them to change.

 

But what would it look like if we could agree on the core values we share? Patriotism, a safe country, a stable society, economic well-being that promotes health, education, food, and housing, and a commitment to treating people with dignity and respect.

 

We could then have vigorous and civil debates about the best policies to implement our values. We won’t always agree with everyone. There will be a wide diversity of opinions. But if we could “explain ourselves” to one another, listen deeply, forge friendships, and understand the hopes and fears and humanity of others, we might actually solve some of the problems that seem so intractable in our polarized society – a society that seems to thrive on extremism on both ends of the political spectrum.

 

Adams and Jefferson ultimately allowed their humanity and deep friendship to triumph over their politics. We can thank them, along with the candid and often irreverent barbs our presidents have aimed at one another, because these insults cause us to reflect on how we should treat one another – not only in the public square, but around the family dinner table, in our marriages, and in the workplace.

 

Our survival as a nation depends on our ability to listen to those with very different political philosophies, to “explain ourselves” to one another, to search for broad areas of agreement, and to reject the acidic politics of personal demonization in which we attack the humanity or patriotism of others.

Whatever Happened to an Affordable College Education?

Image: Pixabay

 

As U.S. college students―and their families―know all too well, the cost of a higher education in the United States has skyrocketed in recent decades.  According to the Center on Budget and Policy Priorities, between 2008 and 2017 the average cost of attending a four-year public college, adjusted for inflation, increased in every state in the nation.  In Arizona, tuition soared by 90 percent.  Over the past 40 years, the average cost of attending a four-year college increased by over 150 percent for both public and private institutions.  

 

By the 2017-2018 school year, the average annual cost at public colleges stood at $25,290 for in-state students and $40,940 for out-of-state students, while the average annual cost for students at private colleges reached $50,900.

 

In the past, many public colleges had been tuition-free or charged minimal fees for attendance, thanks in part to the federal Land Grant College Act of 1862.  But now that’s “just history.”  The University of California, founded in 1868, was tuition-free until the 1980s.  Today, that university estimates that an in-state student’s annual cost for tuition, room, board, books, and related items is $35,300; for an out-of-state student, it’s $64,300.

 

Not surprisingly, far fewer students now attend college.  Between the fall of 2010 and the fall of 2018, college and university enrollment in the United States plummeted by two million students.  According to the Organization for Economic Cooperation and Development, the United States ranks thirteenth in its percentage of 25- to 34-year-olds who have some kind of college or university credentials, lagging behind South Korea, Russia, Lithuania, and other nations.

 

Furthermore, among those American students who do manage to attend college, the soaring cost of higher education is channeling them away from their studies and into jobs that will help cover their expenses.  As a Georgetown University report has revealed, more than 70 percent of American college students hold jobs while attending school. Indeed, 40 percent of U.S. undergraduates work at least 30 hours a week at these jobs, and 25 percent of employed students work full-time.

 

Such employment, of course, covers no more than a fraction of the enormous cost of a college education and, therefore, students are forced to take out loans and incur very substantial debt to banks and other lending institutions.  In 2017, roughly 70 percent of students reportedly graduated college with significant debt.  According to published reports, in 2018 over 44 million Americans collectively held nearly $1.5 trillion in student debt.  The average student loan borrower had $37,172 in student loans―a $20,000 increase from 13 years before.

 

Why are students facing these barriers to a college education?  Are the expenses for maintaining a modern college or university that much greater now than in the past?

 

Certainly not when it comes to faculty.  After all, tenured faculty and faculty in positions that can lead to tenure have increasingly been replaced by miserably-paid adjunct and contingent instructors―migrant laborers who now constitute about three-quarters of the instructional faculty at U.S. colleges and universities.  Adjunct faculty, paid a few thousand dollars per course, often fall below the official federal poverty line.  As a result, about a quarter of them receive public assistance, including food stamps.

 

By contrast, higher education’s administrative costs are substantially greater than in the past, because of both the vast multiplication of administrators and their soaring incomes.  According to the Chronicle of Higher Education, in 2016 (the last year for which figures are available), there were 73 private and public college administrators with annual compensation packages that ran from $1 million to nearly $5 million each.

 

Even so, the major factor behind the disastrous financial squeeze upon students and their families is the cutback in government funding for higher education. According to a study by the Center on Budget and Policy Priorities, between 2008 and 2017 states cut their annual funding for public colleges by nearly $9 billion (after adjusting for inflation).  Of the 49 states studied, 44 spent less per student in the 2017 school year than in 2008.  Given the fact that states―and to a lesser extent localities―covered most of the costs of teaching and instruction at these public colleges, the schools made up the difference with tuition increases, cuts to educational or other services, or both.

 

SUNY, New York State’s large public university system, remained tuition-free until 1963, but thereafter, students and their parents were forced to shoulder an increasing percentage of the costs. This process accelerated from 2007-08 to 2018-19, when annual state funding plummeted from $1.36 billion to $700 million.  As a result, student tuition now covers nearly 75 percent of the operating costs of the state’s four-year public colleges and university centers.

 

This government disinvestment in public higher education reflects the usual pressure from the wealthy and their conservative allies to slash taxes for the rich and reduce public services.  “We used to tax the rich and invest in public goods like affordable higher education,” one observer remarked.  “Today, we cut taxes on the rich and then borrow from them.”     

 

Of course, it’s quite possible to make college affordable once again.  The United States is far wealthier now than in the past, with a bumper crop of excessively rich people who could be taxed for this purpose.  Beginning with his 2016 presidential campaign, Bernie Sanders has called for the elimination of undergraduate tuition and fees at public colleges, plus student loan reforms, funded by a tax on Wall Street speculation.  More recently, Elizabeth Warren has championed a plan to eliminate the cost of tuition and fees at public colleges, as well as to reduce student debt, by establishing a small annual federal wealth tax on households with fortunes of over $50 million.

 

Certainly, something should be done to restore Americans’ right to an affordable college education.

The Challenges of Writing Histories of Autism

Image: Julia is a character on Sesame Street who has autism. 

 

This is a version of an article first published in the May 2019 issue of Participations.  It is reproduced here with the kind permission of the editors.

 

Autism is a relatively new (and increasingly common) disability, and we don’t yet fully understand it.  The symptoms vary enormously from individual to individual. Severity can range from barely noticeable to totally debilitating. The condition often impairs the ability to read but can also result in “hyperlexia”, a syndrome which involves precocious reading at a very early age but also difficulties in reading comprehension. 

 

We have just begun to write the history of autism. Frankly, some of the first attempts stumbled badly, especially over the question of whether “It was there before” – that is, before the twentieth century.  That mantra was repeated several times by John Donvan and Caren Zucker in In a Different Key: The Story of Autism (2016). But they and others have found precious few halfway plausible cases in history, nothing remotely like the one in 40 children afflicted with autism reported by the 2016 National Survey of Children's Health. Donvan and Zucker claimed that the “Wild Boy of Aveyron”, the feral child discovered in France in 1800, “had almost certainly been a person with autism.” But autism impairs the ability to perceive danger, and frequently results in early deaths from drowning and other accidents, so it’s not likely that an autistic child could survive long in the wild. And there are barely a dozen examples of feral children in history, so even if they were all autistic, the condition was vanishingly rare.

 

In Neurotribes (2015) Steve Silberman also argued that autism had been a common part of the human condition throughout history.  His book celebrated Dr. Hans Asperger as a friend and protector of autistic children, even placing his benevolent image on the frontispiece. Critics hailed that version of history as “definitive”. But recently Edith Sheffer, in Asperger’s Children: The Origins of Autism in Nazi Vienna (2018), confirmed that Asperger had been deeply implicated in the Nazi program to exterminate the neurologically handicapped. 

 

Surely if we want to write a full and honest account of the recent history of the autism epidemic, we should interview members of the autism community, defined as including both autistic individuals and their family members. This, however, presents  a number of special obstacles that I encountered when I conducted research for an article that was eventually published as “The Autism Literary Underground." Here I want to explain how we as historians might work around these barriers.

 

For starters, about a third of autistic individuals are nonspeaking, and many others experience lesser but still serious forms of verbal impairment.  But at least some nonspeakers can communicate via a keyboard, and can therefore be reached via email interviews. Email interviews have a number of other advantages: they save the trouble and expense of travel and transcription, they avoid transcription errors and indistinct recordings, and they allow the interviewer to go back and ask follow-up and clarification questions at any time.  This is not to rule out oral interviews, which are indispensable for the nonliterate. But email interviews are generally easier for autism parents, who are preoccupied with the demands of raising disabled children, many of whom will never be able to live independently. These parents simply cannot schedule a large block of time for a leisurely conversation.  When I conducted my interviews, the interviewees often had to interrupt the dialogue to attend to their children.  Perhaps the most frequent response to my questions was, “I’ll get back to you….” (One potential interviewee was never able to get back to me, and had to be dropped from the project.) Ultimately these interviews addressed all the questions I wanted to address and allowed interviewees to say everything they had to say, but in email threads stretching over several days or weeks.

 

Recent decades have seen a movement to enable the disabled to “write their own history”. In 1995 Karen Hirsch published an article advocating as much in Oral History Review, in which she discussed many admirable initiatives focusing on a wide range of specific disabilities – but she never mentioned autism. Granted, autism was considerably less prevalent then than it is today, but the omission may reflect the fact that autism presents special problems to the researcher.  In 2004 the Carlisle People First Research Team, a self-governing group for those with “learning difficulties”, won a grant to explore “advocacy and autism” but soon concluded that their model for self-advocacy did not work well for autistic individuals. Though the Research Team members were themselves disabled, they admitted that they knew little about autism, and “there was an obvious lack of members labelled with autism or Asperger’s syndrome” in disability self-advocacy groups throughout the United Kingdom.  The Research Team concluded that, because autism impairs executive functioning as well as the ability to socialize and communicate, it was exceptionally difficult for autistic individuals to organize their own collective research projects, and difficult even for nonautistic researchers to set up individual interviews with autistic subjects.

 

Self-advocacy groups do exist in the autism community, but they inevitably represent a small proportion at the highest-performing end of the autism spectrum: they cannot speak for those who cannot speak.  We can only communicate with the noncommunicative by interviewing their families, who know and understand them best. 

 

One also has to be mindful that the autism community is riven by ideological divisions, and the unwary researcher may be caught in the crossfire.  For instance, if you invite an autistic individual to tell their own story, they might say something like this:

As a child, I went to special education schools for eight years and I do a self-stimulatory behavior during the day which prevents me from getting much done. I’ve never had a girlfriend. I have bad motor coordination problems which greatly impair my ability to handwrite and do other tasks. I also have social skills problems, and I sometimes say and do inappropriate things that cause offense. I was fired from more than 20 jobs for making excessive mistakes and for behavioural problems before I retired at the age of 51.

Others with autism spectrum disorder have it worse than I do.  People on the more severe end sometimes can’t speak. They soil themselves, wreak havoc and break things. I have known them to chew up furniture and self-mutilate. They need lifelong care.[7]

 

This is an actual self-portrait by Jonathan Mitchell, who is autistic. So you might conclude that this is an excellent example of the disabled writing their own history, unflinchingly honest and compassionate toward the still less fortunate, something that everyone in the autism community would applaud. And yet, as Mitchell goes on to explain, he has been furiously attacked by “neurodiversity” activists, who militantly deny that autism is a disorder at all. They insist that it is simply a form of cognitive difference, perhaps even a source of “genius”, and they generally don’t tolerate any discussion of curing autism or preventing its onset.  When Mitchell and other autistic self-advocates call for a cure, the epithets “self-haters” and “genocide” are often hurled at them. So who speaks for autism?  An interviewer who describes autism as a “disorder”, or who even raises the issues that Mitchell freely discussed, might well alienate a neurodiversity interviewee. But can we avoid those sensitive issues? And even if we could, should we avoid them?  

 

Mitchell raises a still more unsettling question: Who is autistic? The blind, the deaf, and the wheelchair-bound are relatively easy to identify, but autism is defined by a complex constellation of symptoms across a wide spectrum – and where does a spectrum begin and end? You could argue that those with a formal medical diagnosis would qualify, but what about those who are misdiagnosed, or mistakenly self-diagnosed? What if their symptoms are real but extremely mild: would an oral historian researching deafness interview individuals with a 10 percent hearing loss? Mitchell contends that neurodiversity advocates cluster at the very high-functioning end of the spectrum, and suspects that some aren’t actually autistic:

Many of them have no overt disability at all.  Some of them are lawyers who have graduated from the best law schools in the United States. Others are college professors. Many of them never went through special education, as I did. A good number of them are married and have children. No wonder they don’t feel they need treatment.

 

Precisely because neurodiversity advocates tend to be highly articulate, they increasingly dominate the public conversation about autism, to the exclusion of other voices. Mitchell points to the Interagency Autism Coordinating Committee, an official panel that advises the US government on the direction of autism research: seven autistic individuals have served on this body, all of whom promote neurodiversity, and none favor finding a cure.  The most seriously afflicted, who desperately need treatment, are not represented, and they “can’t argue against ‘neurodiversity’ because they can’t articulate their position. They’re too disabled, you might say.”

 

The severely disabled could easily be excluded from histories of autism, unless the researcher makes a deliberate effort to include them, and in many cases we can only include them by interviewing their families. My own research relied on email interviews with self-selected respondents to a call for participants I had posted on autism websites. Though I made clear that I wanted to communicate with autistic individuals as well as with other members of their families, only the latter responded. As Jan Walmsley has rightly pointed out, consent is a thorny issue when we interview the learning disabled. I specified that I would only interview responsible adults -- that is, those who were not under legal guardianship -- but that proviso effectively excluded a large fraction of the autism community. For researchers, that may present an insurmountable difficulty.

 

Yet another ideological landmine involves the causes of autism, for many in the autism community believe it is a disorder that results from adverse reaction to vaccination.  In my own research, this was the group I chose to focus on.  The mainstream media generally treat them as pariahs and dangerous subversives, denounce them repetitively, and almost never allow them to present their views.  But that kind of marginalization inevitably raise troubling questions: Are these people being misrepresented?  What is their version of events?  And since they obviously aren’t getting their ideas from the newspapers or television networks, what exactly are they reading, and how did that reading shape their understanding of what has been inflicted on them? 

 

So I started with a simple question: What do you read? Unsurprisingly, many of my subjects had read the bestselling book Louder Than Words (2007) by actress Jenny McCarthy, where she describes her son’s descent into autism and argues that vaccination was the cause. Doctors have expressed horror that any parent would follow medical advice offered by a Playboy centerfold, but a historian of reading might wonder whether the reader response here is more complicated.  Are readers “converted” by books, or do they choose authors that they already sympathize with?  My interviewees reported that, well before they read Louder Than Words, they had seen their children regress into autism immediately following vaccination.  They later read Jenny McCarthy out of empathy, because she was a fellow autism parent struggling with the same battles that they had to confront every day.

 

Granted, my sample was quite small, essentially a focus group of just six self-selected parents.  Occasionally oral historians can (through quota sampling) construct large and representative surveys, for instance Paul Thompson’s landmark 1975 study of Edwardian Britain, but it would be practically impossible to do the same for the fissured and largely nonspeaking autism community. What oral historians can sometimes do is to crosscheck their findings against large statistical surveys. For instance, my respondents said that they read Jenny McCarthy not because she was a celebrity, but because she was an autism mom. They were corroborated by a poll of 1,552 parents who were asked whom they relied on for vaccine safety information: just 26 percent said celebrities, but 73 percent trusted parents who reported vaccine injuries in their own children. To offer another illustration: vaccine skeptics are often accused of being “anti-science”, but my interviewees produced lengthy bibliographies of scientific journal articles that had shaped their views. They were supported by a survey of 480 vaccine-skeptic websites, of which 64.7 percent cited scientific papers (as opposed to anecdotes or religious principles).

 

I often describe autism as an “epidemic”. This is yet another flashpoint of controversy. Public health officials generally avoid the word, and many journalists and neurodiversity activists fiercely argue that autism has always been with us. As a historian who has investigated the question, I have concluded (beyond a reasonable doubt) that autism scarcely existed before the twentieth century, and that it is now an ever-spreading pandemic. To explain the evidence behind this conclusion would require a very long digression, though I can refer the reader to a robust demonstration. The essential point here is that any interviewer who refers to autism as an “epidemic” may alienate some of his or her interviewees.

 

So how do we handle this situation – or, for that matter, any other divisive issue?  All oral historians have opinions: we can’t pretend that we don’t. But we can follow the ethic of an objective reporter.  A journalist is (or used to be) obligated to report all sides of an issue with fairness, accuracy, and balance. Reporters may personally believe that one side is obviously correct and the other is talking nonsense, but in their professional capacity they keep those opinions to themselves and assure their interviewees that they are free to express themselves.  One has to accept that not everyone will be reassured.  I found myself variously accused of being (on the one hand) an agent of the pharmaceutical companies or (on the other) an antivaccinationist. (I am neither.) But most of my subjects were quite forthcoming, once I made clear that the article I was writing would neither endorse nor condemn their views.

 

Of course, if any of the voices of autism are stifled, then the true and full story of the epidemic will be lost.  Some honest and well-researched histories of autism have been produced, notably Chloe Silverman’s Understanding Autism and Edith Sheffer’s Asperger’s Children. Although Silverman only employs a few interviews, her work is distinguished by a willingness to listen closely to autism parents.  And in her chilling account of the Nazi program to eliminate the mentally handicapped, Sheffer uncovered the voices of some of its autistic victims in psychiatric records. What both these books suggest is that we could learn much more about autism as it was experienced by ordinary people simply by talking to them.  Many of them protest that the media only reports “happy news” about autism (e.g., fundraisers, job training programs) and prefers not to dwell on the dark side (neurological damage, unemployment, violent outbursts, suicide), and these individuals are usually eager to tell their stories. To take one striking example, the New York Times dismissed the theory that thimerosal (a mercury-containing preservative in some vaccines) might cause autism in a 2005 front-page story headlined “On Autism’s Cause, It’s Parents vs. Research” (suggesting that parents did no research). One of my interviewees had herself been interviewed by Gardiner Harris, one of the reporters who filed the Times story, and she offered a very different version of events:

Harris misidentified one of the two women in his opening anecdote. He described an autistic child’s nutritional supplements as “dangerous,” though they had been prescribed by the Mayo Clinic for the child’s mitochondrial disorder—facts he did not disclose. Three times Harris asked me, “How do you feel?” rather than, “What scientific studies led you to believe thimerosal is harmful to infants?"

 

Rather than rely solely on “the newspaper of record” (or any other newspaper), historians can find correctives and alternative narratives in oral interviews. Oral history has made an enormous contribution to reconstructing the history of the AIDS epidemic and the opioid epidemic, and it will be no less essential to understanding the autism epidemic.

 

On the Eve of Pride 2019, D.C. LGBT Community Reflects on Its Own History with the Lavender Scare

 

“I really think it is so important to remember that there were people who were taking a stand in the years before Stonewall and people who really had the courage to get the movement rolling in the 1960s. Their efforts should be recognized.”

 

As the question-and-answer session after Wednesday night’s screening of The Lavender Scare was wrapping up, director Josh Howard reminded the audience of the focus of his documentary: the systematic firing of and discrimination against LGBT people under the Eisenhower administration, told from their perspective. The Q&A that followed featured Howard; David Johnson, the historian whose book inspired the film; and Jamie Shoemaker, who is featured in the film as the first person to successfully resist the law. The screening was timely, as D.C.’s Pride parade is Saturday, June 8, and the 50th anniversary of the Stonewall riots is Friday, June 28. The Lavender Scare will premiere on PBS on June 18.

 

Most of the seats in the Avalon Theatre were filled. After the film and applause ended, Howard asked a question he likes to ask every audience at a screening: how many of you were personally affected or knew someone who was affected by the Lavender Scare? Almost everyone in the audience raised their hands. 

 

The Q&A was an open dialogue, with several people standing and telling stories of how they were personally tied to the events of the film and the movement in general. Several were connected to the central figure of the documentary, former prominent activist Frank Kameny. One man who had grown up with another prominent activist, Jack Nichols, explained, “when Jack was picketing in front of the White House, I was quite aware. In fact, Frank and Jack did some of the planning in my apartment at the time; but because I was a teacher, I couldn’t have anything to do with it, because if my picture was in the paper, then my career would’ve been over.”

 

The policy harmed the careers of some in the audience, though. “I had gone to Frank for guidance before my interview at NSA,” one gentleman recalled, “and he told me ‘don’t say anything, don’t answer anything that you’re not asked,’ and so forth. Anyway, I was not hired and I’m frankly very glad now that I was not hired.” Experiences such as these reflect just how wide-reaching the policy was: it not only pushed gay people out of government jobs, but also discouraged them from applying for positions in the first place.

 

Frank Kameny’s impact on the D.C. community was evident. In attendance was his former campaign manager from 1971, who recalled that the day after they announced the campaign, “we received a check in the mail for $500 from actors Paul Newman and Joanne Woodward. We used that money to travel to New York to meet with Gay Activist Alliance of New York.” Similarly, one of his former colleagues on the board of the ACLU in Washington recounted that as they defended their license to meet, “the issue was whether names [of gay members] would be revealed, and while Frank was very happy and very brave to have his name revealed, he didn’t feel that he could just turn over names of other people. That’s what he was fighting against in the agencies.” 

 

While the film successfully showed the struggle faced by the LGBT community, the conversation afterwards reflected the hope that many in the community feel today. Jamie Shoemaker, who was once almost fired from the NSA, testified to the progress that he’s seen. “All of the security clearance agencies now have LGBT groups that are very active, including the NSA. One year after I retired, they paid me to come out to give a speech about my experiences… they (the groups) are very active and it’s really a good scene in these agencies now. What a difference,” he said. The theatre was immediately filled with applause.

 

Many expressed a desire for reparations in some form or another. David Johnson, who authored The Lavender Scare: The Cold War Persecution of Gays and Lesbians in the Federal Government, highlighted the LOVE Act, a bill introduced in the Senate that would “mandate that the State Department investigate all of its firings since 1950. They would collect information from either fired employees or their families, and I think most importantly, though, it would mandate that their museum, the US Diplomacy Center, actually have a permanent exhibit on the Lavender Scare.” Once again, the room broke into applause.

 

The Capital Pride Parade will take place on Saturday, June 8, across multiple locations in Washington. The 50th anniversary of the Stonewall riots is Friday, June 28. The Lavender Scare will premiere on PBS on June 18.

Arbella Bet-Shlimon Got Into History to Counter False Perceptions of Middle East Region

 

Arbella Bet-Shlimon is Assistant Professor in the Department of History at the University of Washington, a historian of the modern Middle East, an adjunct faculty member in the Department of Near Eastern Languages and Civilization, and an affiliate of the Jackson School’s Middle East Center. Her research and teaching focus on the politics, society, and economy of twentieth-century Iraq and the broader Persian Gulf region, as well as Middle Eastern urban history. Her first book, City of Black Gold: Oil, Ethnicity, and the Making of Modern Kirkuk (Stanford University Press, 2019), explores how oil and urbanization made ethnicity into a political practice in Kirkuk, a multilingual city that was the original hub of Iraq’s oil industry. She received her PhD from Harvard University in 2012.

 

What books are you reading now?

 

I just wrote an obituary for my Ph.D. advisor, Roger Owen, in the latest issue of Middle East Report, and I read his memoir A Life in Middle East Studies prior to writing it. It proved to be a fascinating retrospective on the development of our field over the twentieth century. At the moment, I am digging into the work of the multilingual Kirkuki poet Sargon Boulus, and scholarship about him, as I write an article about the idea of the city of Kirkuk as a paragon of pluralism in northern Iraq. This is a topic I became interested in when I was researching my book on Kirkuk’s twentieth-century history, City of Black Gold, just published by Stanford University Press.

 

Why did you choose history as your career?

 

I decided to make history a career after I was already in graduate school in an interdisciplinary program. I started that program with a goal: to counter inaccurate and stereotyped perceptions of the Middle East among Americans. These spurious ideas were fostering cruelty to Middle Easterners at home and prolonging destructive foreign policy abroad. I concluded that researching, writing, and teaching the modern history of the region would be the best way to meet that goal. The way I stumbled into this conclusion was essentially accidental, but I’ve never looked back.

 

It was an unexpected change of direction, because I hadn’t taken a single history class in college. And history, according to most college students who haven’t taken a history class, is boring. We have an image problem. Just look at the most famous secondary school in the world: Hogwarts (from the Harry Potter universe). This is a school where one of the tenure lines has a jinx on it that leaves professors fired, incapacitated, or dead after one year, but its worst course isn’t that one. Instead, its worst course is a plain old history class, taught by a droning ghost professor who bores even himself so thoroughly that he doesn’t realize he died a long time ago. High school students (real-life ones, I mean) will frequently tell you that they hate history because it’s just memorizing lists of things, or their teacher just makes them watch videos. That’s not what history is beyond the K-12 realm, of course—neither college history nor popular history is anything like that—and there are some great K-12 history teachers who don’t teach that way. But it’s a widespread stereotype rooted in some truth. I didn’t actively dislike history prior to pursuing it full time, but it hadn’t even occurred to me to consider it a possible career.

 

What qualities do you need to be a historian?

 

Qualities that are central to any research career. For instance, a high tolerance for delayed gratification, because you can knock your head against a research question for years before the answers start to come to fruition in any publishable form. And you need to be willing to be proven wrong by the evidence you find.

 

Who was your favorite history teacher?

 

My dad was my first history teacher. I learned a lot about the history of the Middle East riding in the car as a kid.

 

What is your most memorable or rewarding teaching experience?

 

Once, at a graduation event, a graduating student told me that a conversation he’d had with me during office hours was one of the main reasons he did not drop out of college. I had no idea my words had had that impact at the time. I think we professors are often not aware of the small moments that don’t mean much to us but change a student’s life (both for the worse and for the better).

 

What are your hopes for history as a discipline?

 

Institutional support; support from the parents or other tuition funders of students who want to pursue history as their major; and stable, contracted teaching positions with academic freedom protections for those who have advanced degrees in history and wish to work in academia.

 

Do you own any rare history or collectible books? Do you collect artifacts related to history?

 

I don’t collect artifacts, but I’ve used my research funds to acquire a few things that are hard to find and have been indispensable to my work. For instance, I have copies of the full runs of a couple of rare periodicals from Kirkuk that I acquired while writing my book. They’re almost impossible to find even in global databases—and when you come across something like that in someone’s private collection, you have to get a copy somehow.

 

What have you found most rewarding and most frustrating about your career? 

 

The most rewarding thing about being a historian is when a student tells me that their perspective on the world has been transformed by taking my class. The most frustrating thing is the pressure from so many different directions to downsize humanities and social science programs.

 

How has the study of history changed in the course of your career?

 

That’s a very broad question, but I can speak specifically about my own field of Middle Eastern history. When I set out to write a PhD dissertation on Iraq, some colleagues in my cohort reacted with surprise because, they pointed out, it would be extremely difficult to conduct research there. One fellow student told me that he’d started grad school interested in Iraq but realized after the 2003 US invasion that he wouldn’t be able to go there, so he switched his focus to Egypt. Since then, though, many more conflicts have developed and brutal authoritarian rulers have become more deeply entrenched. Nobody researching the history of the Middle East today can assume that the places they are interested in will be freely accessible or that any country’s archives are intact and in situ. And even if we can visit a place, it may not be ethical to talk to people there about certain sensitive topics. At the same time, we know that we can’t just sit in the colonial metropolis and write from the colonial archives, as so many historians of a previous generation did. So I think many Middle East historians have become more methodologically creative in the past decade, asking new sorts of questions and tapping into previously underappreciated sources.

 

What are you doing next?

 

Right now, I’m trying to understand Iraq’s position in the Persian Gulf, shifting my focus toward Baghdad and its south. Historians of Iraq have written extensively about its experience as a colonized, disempowered country, but have less often examined how expansionist ideas were key to its nation-building processes throughout the twentieth century. This becomes clear from the perspective of Kirkuk. It’s also clear when looking at Iraq’s relationship with Kuwait, which Iraq has claimed as part of its territory at several points. I’m in the early stages of gathering sources on this topic.

Here Comes the D-Day Myth Again

 

Last Friday (May 31, 2019), the NPR radio program “On Point” conducted a special live broadcast from the National WWII Museum in New Orleans entitled “75th Anniversary Of D-Day: Preserving The Stories Of WWII Veterans.” The host was NPR’s media correspondent David Folkenflik, and the segment featured Walter Isaacson, professor of history at Tulane University, and Gemma Birnbaum, associate vice president of the World War II Media and Education Center at The National WWII Museum, as guests. This writer was not only looking forward to an engaging discussion of the successful Allied landings at the Normandy beaches on June 6, 1944, but also hoping that the guests would present the contemporary state of military history research on the significance of D-Day.

 

I was sorely disappointed. Instead of placing the invasion within the wider context of the war against Nazi Germany, Folkenflik and his guests revived the “Myth of D-Day,” that is, they reinforced the erroneous belief that D-Day was the decisive battle of the Second World War in Europe, that it marked “the turning of the tide,” and that it sealed the doom of the German Army, the Wehrmacht. Had D-Day failed, so the argument goes, Germany could still have won the war, with nightmarish consequences for Europe, the United States and the world as a whole. This myth is a legacy of the Cold War, when each side accentuated what it did to defeat Nazi Germany, the most monstrous regime in human history, and played down the contributions of the other side. Russian students today, for example, are taught the “Great Patriotic War,” which the Soviet Union won practically single-handedly, without having previously cooperated with Nazi Germany and without having committed any atrocities – which is to take a creative approach to interpreting the history of World War II, to say the least. But it also remains the case that far too many American, British and Canadian students are taught that the victory over Nazi Germany was mostly the work of the Anglo-American forces, which is also a distortion of the truth.

 

This “Allied scheme of history,” as the Oxford historian Norman Davies calls it, was most consistently presented by Gemma Birnbaum on the On Point broadcast. She not only reiterated the belief that D-Day was necessary to defeat Nazi Germany, but her words also suggested that, until then, Germany was somehow winning the war. Before the Allies invaded France, she said, the Wehrmacht “was moving all over the place.” According to her, it was only after the German defeat in Normandy that “fatigue began to set in” among German soldiers. But “fatigue” had already begun to spread throughout the Wehrmacht in the late fall of 1941, when the Red Army stopped the Germans at the gates of Moscow. It is true that the Germans continued to “move all over” Europe afterwards, but they increasingly began doing so in a backwards motion. It is depressing to consider that Birnbaum co-leads the educational department of the World War II museum in New Orleans, where she has the opportunity to pass on her myopic views of the war onto countless young people, thus ensuring the perpetuation of the D-Day myth. Not much has changed in the museum, it would seem, since 2006, when  Norman Davies commented: “Yet, once again, the museum does not encourage a view of the war as a whole. Few visitors are likely to come away with the knowledge that D-Day does not figure among the top ten battles of the war.”

 

Many military historians would now contend that, if there was indeed any “turning point” in the European war, it took place before Moscow in December 1941. For it was then that Germany lost the opportunity to win the war that it had been hoping to win. It was also at that point that the Soviets forced upon the Germans a war of attrition. As the Stanford historian James Sheehan points out, there are no decisive battles in wars of attrition, but rather milestones along the way to victory, as the enemy is slowly but surely reduced to a condition of weakness where they can no longer continue the fight. In that sense, the other important milestones were Stalingrad (February 1943), after which it became increasingly clear that Germany was going to lose the war, and Kursk (July 1943), after which it became increasingly clear that the Russians were coming to Berlin, with or without the help of the Western Allies.

 

Any objective look at the human and material resources available to Nazi Germany by the spring of 1944, especially compared to those available to the Allies, makes the claim that D-Day saved the world from a Nazi-dominated Europe preposterous. Such arguments are not history but science fiction. We need only consider that in May 1944, the German field army had a total strength of 3.9 million soldiers (2.4 million of whom were on the Eastern front), while the Soviet Red Army alone had 6.4 million troops. Moreover, while the Wehrmacht had used up most of its reserve troops by 1942, Joseph Stalin could still call up millions more men to fight. While Germany was rapidly running out of the food, fuel, and raw materials an army needs to fight a protracted war, the stupendous productive capacities of the United States, through the Lend-Lease program, made sure that the Soviet soldiers were well-fed and equipped for their final assault on Germany. Add to this the continual pounding that German industry and infrastructure were taking from the Anglo-American air war, which also forced the German military to bring back invaluable fighters, anti-aircraft artillery, and service personnel to the home front, and it becomes obvious that Germany was fated to lose the war long before any Allied soldiers reached the beaches of Normandy. The German army was defeated on the Western front, to be sure, but it was annihilated in the East. Until almost the very end of the war, somewhere between 60 and 80 per cent of the German divisions were stationed in the East, and that was where they were wiped out. But the Soviets paid a horrific price for their victory. According to the Military Research Office of the Federal German Army, 13,500,000 Soviet soldiers lost their lives in the fight against Nazi Germany. The United Kingdom lost some 326,000 soldiers. The United States lost 43,000 men in Europe.

 

In light of such statistics, one can only imagine how offended many Russians, Ukrainians and Byelorussians must feel today when they hear Americans congratulating themselves for having been the ones who defeated the Nazis. Nevertheless, the host of the On Point broadcast, David Folkenflik, introduced one segment with the claim that the United States had played the “dominant” role in achieving victory in World War II. Regarding the Pacific theater, there is no doubt about this. But after considering the scale of the fighting on the Eastern front of the European war, Folkenflik’s contention becomes absurd. Unfortunately, such comments are still all too common. The English historian Giles Milton, for instance, has recently published a book entitled “D-Day: The Soldiers’ Story,” in which he writes that the tide against Nazi Germany “had begun to turn” by the winter of 1942, but he still reserves the final turning for D-Day.  So it is no wonder that many Russians today feel that people in the West fail to give them the credit they deserve for achieving victory in World War II.

 

This is important to contemporary politics: if the tensions between Russia and the United States are ever to be overcome, then there will have to be more American recognition and appreciation of the sacrifices of the Soviet peoples in World War II. Otherwise Americans will continue to make it easier for Vladimir Putin to engage in his own historical myth-making to help legitimize his increasingly authoritarian rule. To be fair, had David Folkenflik decided to include a discussion of the Eastern Front in his broadcast, the program would have run too long and lost focus. Moreover, it is only to be expected that, when a nation reflects on the past, it concentrates on its own historical achievements. But that cannot be a license for spreading false historical beliefs. At least a brief mention of the Eastern front would have been merited.

 

To acknowledge that D-Day was no “turning of the tide” in no way implies that it was not an important, or even a crucial, battle of the Second World War. Had the landings failed, as the American Allied Supreme Commander Dwight D. Eisenhower feared they might, the war could have dragged on for several more years. In that case, the Nazis would have come much closer to their goal of exterminating every last Jewish man, woman and child in Europe. Not to mention the hundreds of thousands, perhaps millions more military and civilian casualties that would have ensued. Victory in Normandy saved countless lives. In the final analysis, however, the greatest strategic consequence of the battle lies elsewhere.

 

This true significance of D-Day was briefly mentioned during the On Point episode by Walter Isaacson. (He was also the only participant who did not engage in overt exaggeration of D-Day’s importance for defeating Nazi Germany.) Isaacson made the most sensible comment of the entire program when he pointed out that, had D-Day failed, a lot more of Europe would have fallen under the control of the Soviet Union than actually did. In truth, without D-Day, the Soviet T-34 tanks would not only definitely have crossed the river Rhine, but most likely would also have reached the French Atlantic coast. As the English military historian Anthony Beevor has discovered, “a meeting of the Politburo in 1944 had decided to order the Stavka [Soviet High Command] to plan for the invasion of France and Italy. . .  The Red Army offensive was to be combined with a seizure of power by the local Communist Parties.” D-Day may not have been necessary to defeat Nazi Germany, but it was needed to save western Europe from the Soviet Union. As Beevor observes, “The postwar map and the history of Europe would have been very different indeed” if “the extraordinary undertaking of D-Day had failed.”

 

By all means, then, we should commemorate the heroism and sacrifices of the Anglo-American soldiers who fought and died on D-Day. They all made important contributions to liberating western Europe and achieving victory over Nazi Germany. But national pride must never be allowed to distort historical reality. The successful Allied landings in Normandy accelerated Germany’s defeat, but they didn’t bring it about. The German military historian Jörg Echternkamp puts it well: “From the beginning of the two-front war leads a straight path to the liberation of Europe from Nazi domination roughly one year later. Nevertheless the German defeat had already at this time long since been sealed on the eastern European battlefields by the Red Army. This is all-too easily concealed by strong media presence of D-Day today." The credit for vanquishing Adolf Hitler’s armies should go first and foremost to the Soviet Red Army. Again, Norman Davies is correct when he writes: “All one can say is that someday, somehow, the present fact of American supremacy will be challenged, and with it the American interpretation of history.” For now, however, as the On Point broadcast has shown, popular understanding of D-Day in the United States continues to be more informed by myth than reality.

 

How Should Historians Respond to David Garrow's Article on Martin Luther King, Jr.?

 

 

Pulitzer Prize winner and noted historian David Garrow made headlines last week after Standpoint published his article on the FBI’s investigation of Martin Luther King, Jr. The FBI documents allege King’s involvement in numerous extra-marital affairs, relations with prostitutes, and presence during a rape. In response, scholars have questioned the reliability of the FBI records Garrow used to make such claims. These documents, and the resulting controversy, should also lead scholars to ask questions about the ways in which historians can and should address the history of gender and sexuality when it intersects with the histories of the civil rights movement, American religion, and the development of the surveillance state.

 

First, King and many of the clergy involved in the civil rights movement took a different approach toward interacting with women than some other well-known preachers, particularly Billy Graham. In 1948, evangelist Billy Graham and his staff agreed to a compact known as the Modesto Manifesto. This informal compact dealt with a number of issues from distributions of revival offerings to relations with local churches to what would become more colloquially known as the Billy Graham rule: men on the team would never be alone with a woman who was not their wife. While the rule may have kept the evangelist, who was noted in the press for his fair looks, and much of his team on the straight and narrow, it no doubt limited the opportunities of women within his organization and marked women as dangerous to men, particularly preachers.

 

The Billy Graham rule would have been impractical for the civil rights movement. The work of women was essential to the growth and success of the movement, and it would have been nearly impossible for civil rights leaders such as King to avoid being in contact with women and still have had a thriving movement.  Sociology professor Belinda Robnett established that, in the civil rights movement, it was very often women who linked leaders like King to supporters of the movement on the local level. These bridge leaders recruited more activists to the cause and ensured the general running of civil rights organizations. Some of the women named in Garrow’s essay served as bridge leaders, and as a consequence were especially vulnerable to such charges in an era when Graham’s rule was influential.

 

Those with traditional moral values reading David Garrow’s recent article on the alleged sexual proclivities of Martin Luther King Jr. might come to the conclusion that if King had instituted the Billy Graham rule, he never would have had the opportunity for extramarital affairs. They might imagine that there would have been no possibility that the FBI could have made such allegations, true or false. That, however, is unlikely to have been the case. While King’s moral failings are perhaps best left for him and his creator to resolve, it is certain that, given the climate at the FBI at the time, and given J. Edgar Hoover’s special animus toward King, as Garrow describes in this work, there would have been continual attempts to establish some kind of moral failing with which to undermine one of America’s two most famous preachers.

 

The most controversial claim in these documents is a reference to an oddly edited document purporting to be a summary of electronic surveillance in which an unnamed Baptist minister forcibly raped a female parishioner while King looked on. While Garrow questions some documents, according to a Washington Post article, he seems to have fewer questions about the authenticity of this summary. King advisor Clarence Jones points out that while this rape, if true, should be condemned, a question remains: if it did occur, why did Hoover not turn over the evidence to other officials? It would have provided Hoover with the opportunities he had been seeking to undermine one of America’s most recognized preachers.

 

Jones of course is asking a question that all civil rights historians should ask, but we should also ask other questions.  How do these documents reflect a callous disregard for women? If this incident was true, why did the FBI not seek justice for this unnamed woman? And, if it is not true, how little did Hoover’s men value women that they thought an incident like this could be easily invented and the duplicity go unnoticed, and how did that impact their investigation of King? We should also ask whether the Billy Graham rule set American expectations for the private behavior of very public clergy.

 

Women’s bodies are often sexualized, and black women’s bodies even more so. In these documents, it is clear that the FBI placed an emphasis on what it deemed these women’s immoral or abnormal sexual choices, ranging from oral sex to adultery to prostitution to lesbian partners. Even when they perhaps should have, the agents express little to no concern for the women; their concern is for the state. These women’s bodies mattered to the FBI only when they might be in a position to play a part in compromising America’s foremost black preacher and making him susceptible to communist influence, or when those same bodies offered the FBI an opportunity to expose that preacher’s failings.

 

For some of the women named in the documents used in the Garrow article, the evidence of sexual activity is scant, merely referring to them as girlfriends or as women who wanted to know why King hadn’t come by when he was in their area. In another instance, an orgy involving King, a prostitute, and a well-known female gospel singer is described. For historians to focus on these instances now, with so much of the evidence coming from biased sources and some of it still under seal, feels a bit like participating in historical slut-shaming. For these women, whatever their sexual choices were over fifty years ago, there is no escape. Salacious details, real or fictional, lie forever in the National Archives.

 

In this case, much of what we would like to know about these controversies will not be revealed until the court order sealing the records expires in 2027, and it may not be resolved even then. T. E. Lawrence once wrote that “the documents are liars.” It is the task of every historian to determine to what extent that is true, but it is also the task of every historian to examine the ways in which the documents may tell unplanned truths about our past, even if that makes us uncomfortable.

Roundup Top 10!  

A Black Feminist’s Response to Attacks on Martin Luther King Jr.’s Legacy

by Barbara Ransby

We should not become historical peeping Toms by trafficking in what amounts to rumor and innuendo.

 

About the FBI’s Spying

by William McGurn

What’s the difference between surveillance of Carter Page and Martin Luther King?

 

 

What D-Day teaches us about the difficulty — and importance — of resistance

by Sonia Purnell

For four years, a few French citizens fought a losing battle. Then they won.

 

 

After Tiananmen, China Conquers History Itself

by Louisa Lim

Young people question the value of knowledge, a victory for Beijing 30 years after the crackdown on student protests.

 

 

How True-Crime Stories Reveal the Overlooked History of Pre-Stonewall Violence Against Queer People

by James Polchin

The history of such crimes tends to be lost.

 

 

Hitler told the world the Third Reich was invincible. My German grandfather knew better

by Robert Scott Kellner

As a political organizer for the Social Democrats, Kellner had opposed the Nazis from the beginning, campaigning against them throughout the ill-fated Weimar Republic.

 

 

How racism almost killed women’s right to vote

by Kimberly A. Hamlin

Women’s suffrage required two constitutional amendments, not one.

 

 

Who Will Survive the Trade War?

by Margaret O’Mara

History shows that big businesses profit most when tariffs reign.

 

 

Of Crimes and Pardons

by Rebecca Gordon

The United States was not always so reluctant to put national leaders on trial for their war crimes.

 

 

Trump Is Making The Same Trade Mistake That Started The Great Depression

by John Mauldin

Like today, the Roaring Twenties saw rapid technological change, namely automobiles and electricity.

 

 

 

The Making of the Military-Intellectual Complex

by Daniel Bessner and Michael Brenes

Why is U.S. foreign policy dominated by an unelected, often reckless cohort of “the best and the brightest”?

Why 2019 Marks the Beginning of the Next Cycle of American History

 

A century ago, historian Arthur Schlesinger, Sr. argued that history occurs in cycles. His son, Arthur Schlesinger, Jr., furthered this theory in his own scholarship. As I reflect on Schlesinger’s work and the history of the United States, it seems clear to me that American history has three 74-year-long cycles. America has had four major crisis turning points, each 74 years apart, from the time of the Constitutional Convention of 1787 to today.

 

The first such crisis occurred when the Founding Fathers met in Philadelphia in 1787 to face the reality that the government created by the Articles of Confederation was failing. There was a dire need for a new Constitution and a guarantee of a Bill of Rights to save the American Republic. The Founding Fathers, under the leadership of George Washington, were equal to the task, and the American experiment survived the crisis.

 

For the next 74 years, the Union survived despite repeated disputes over American slavery. Then, in 1861, the South seceded after the election of Abraham Lincoln, and the Union’s refusal to accept this secession led to the outbreak of the Civil War. In this second crisis, exactly 74 years after the Constitutional crisis of 1787, two-thirds of a million people lost their lives, and, in the end, the Union survived.

 

The war was followed by the tumultuous period of Reconstruction, and the regional sectionalism that had led to the Civil War continued. As time passed, with the growth of the industrial economy, the commitment to overseas expansion, and widespread immigration, the United States prospered over the next three-quarters of a century, until the Great Crash on Wall Street and the onset of the Great Depression under President Herbert Hoover in 1929. The economy was at its lowest point as Franklin D. Roosevelt took the oath of office in 1933.

  

World War II broke out in 1939, exactly 74 years after the end of the Civil War in 1865. While America did not officially enter the war for another two years, it is clear that the danger of the Axis Powers (Nazi Germany, Fascist Italy, Imperial Japan), on top of the struggles of the Great Depression, marked a clear crisis in American history. Fortunately, America had Franklin D. Roosevelt to lead it through the throes of the Great Depression and World War II.

 

Once the Second World War ended in 1945, America entered a new period that included the Cold War with the Soviet Union and domestic tumult over the Civil Rights Movement and opposition to American intervention in wars in Korea, Vietnam, and the Middle East. The saga of Richard Nixon and Watergate seemed to many the most crisis-ridden moment of the post-World War II era. But the constitutional system worked, and the President’s party displayed courage and principle, accepting that Nixon’s corruption and obstruction of justice meant he had to go. Certainly, Watergate was a moment of reckoning, but the nation moved on through further internal and external challenges.

 

2019 is exactly 74 years after 1945, and it is clear that America is once again in a moment of crisis. As I have written before, I believe that today’s constitutional crisis is far more serious and dangerous than Watergate. Donald Trump promotes disarray and turmoil on a daily basis, undermines our foreign and domestic policy, and is working to reverse the great progress and accomplishments of many of his predecessors going back to the early 20th century. The past 74 years have produced a framework of international engagement: the World Trade Organization and free trade agreements, the United Nations and conflict resolution, and a series of treaties like the Non-Proliferation Treaty and the Paris Climate Agreement. Nearly all of these accomplishments of the past 74-year cycle are now under threat.

 

The rise of Donald Trump is not an isolated phenomenon; similar leaders have come to power across much of the world in the past few years. This has occurred due to the technological revolution and the climate change crisis. Both trends have convinced many that the post-1945 liberal world order is no longer the solution to global problems and that authoritarian leadership is required to meet the economic and security challenges the world faces. Charismatic figures claim to have the solutions to constant crisis by stirring up racism, nativism, anti-Semitism, Islamophobia, misogyny, and xenophobia.

 

In some ways, this is a repeat of what the world faced in the late 1930s, but because this is the present rather than the past, we have no certainty that the major Western democracies can withstand the crises and preserve democratic forms of government. Just as America was fortunate to have George Washington, Abraham Lincoln, and Franklin D. Roosevelt in earlier moments of turmoil and crisis, the question now is who can rise to the occasion and save American prosperity and the Constitution from the authoritarian challenge presented by Donald Trump.
