Who’s to Blame for Political Violence?

LA Times: Who’s to blame for political violence? My take on the Steve Scalise shooting.

The more we blame speech for violence, the more likely we are to use violence to stop speech.

On Wednesday morning, a Trump-hating Bernie Sanders volunteer shot five people at a Republican practice for the annual congressional baseball game. One of them was the third-ranking House Republican, Louisiana Congressman Steve Scalise. We could blame Democrats and Sanders supporters for this crime, if we wanted to imitate past liberal tactics. But the rush to score partisan points by using incidents of violence to discredit your political opponents is not only all too common but also cheap and dishonest.

The blame for violent acts lies with the people who commit them, and with those who explicitly and seriously call for violence. People who just use overheated political rhetoric, or who happen to share the gunman’s opinions, should be nowhere on the list.

In 1995, Bill Clinton famously used Timothy McVeigh’s bombing of a federal building in Oklahoma City to tar Newt Gingrich and Rush Limbaugh and turn the public against small-government Republicans. The 2011 shooting of Congresswoman Gabrielle Giffords led to an orgy of Republican-blaming, mostly based on the fact that Sarah Palin had released a map of 20 vulnerable Democratic districts with a set of crosshairs to mark each target. Never mind that the shooter had never seen the map and turned out to have no Republican connections and few conservative-sounding ideas. (Scalise’s shooter, by contrast, used his social media account to endorse and spread partisan arguments.)

Since President Trump’s inauguration, several House Republicans have been targets of violence. A woman was arrested for trying to run Tennessee Congressman David Kustoff off the road after a healthcare town hall; a man was arrested for grabbing North Dakota Congressman Kevin Cramer at a town hall; a 71-year-old female staffer for California Congressman Dana Rohrabacher was knocked out at a protest; and the FBI arrested a man for making death threats against Arizona Congresswoman Martha McSally.

Everyone can see that the political climate has gotten a lot nastier lately. Americans used to despise politicians they disagreed with; now they hate the people who vote for them. Fewer and fewer people can tolerate friendships with political adversaries, and polls show more and more Americans — yes, especially Democrats — have trouble respecting anyone who voted for the other candidate. Donating to the wrong cause can get your business boycotted, and a stray tweet can bring down the online rage mobs.

All the talk of “resistance” and “treason,” plus the apocalyptic rhetoric about the climate and healthcare, certainly doesn’t lower the country’s temperature. But drawing a line from rhetoric to violence will only make matters worse. Each half of the country deciding that the other half is literally responsible for murder will only deepen that divide.

Every political and religious cause will inevitably attract some zealots who take strong words too far. It’s fair to blame a movement for the violence it inspires if — and only if — its leaders actually, seriously urge and celebrate and perpetrate violent acts, as the leaders of groups like Islamic State do.

But even at a time when American political figures call each other fascists and traitors and rant about resisting tyranny, there remains a world of difference between our political factions and Islamic State. If you hear someone shoot their mouth off, just remember it’s still only their mouth.

The more we blame speech for violence, the more likely we are to use violence to stop speech. Blurring the lines between bullets and tweets eventually will leave us with more bullets. Nobody forced Scalise’s shooter to pick up a gun over politics; he did that himself. It cheapens the moral consequences of that decision to credit angry words with an assist.

Democracy and free speech need room for people to exaggerate and vent. It wasn’t right when Democrats blamed Republicans instead of the Arizona shooter for the Giffords attack, and it wouldn’t be right for Republicans to return the favor just to get even. Keep the blame where it belongs.

No, Ronald Reagan Didn’t Launch His 1980 Campaign in Philadelphia, MS

RS: No, Ronald Reagan Didn’t Launch His 1980 Campaign in Philadelphia, MS

Republican presidential candidate Ronald Reagan, left, moves through the crowd shaking hands at the Neshoba County Fair in Philadelphia, Mississippi on Sunday, August 3, 1980. The crowd was estimated at 20,000. (AP Photo/Jack Thornell)

One of the wearying things about arguing with liberals/progressives is that they never stop trying to rewrite history; a bogus claim that is debunked only stays debunked if you keep at debunking it year after year after year. So it is with the hardy perennial effort to tar the reputation of Ronald Reagan by claiming that his 1980 presidential campaign and subsequent two-term presidency were tainted from the outset because he kicked off that campaign with a speech about “states’ rights” in Philadelphia, Mississippi – Nation editor Katrina Vanden Heuvel was retailing this one on ABC’s Sunday show This Week just two weeks ago, trying to compare Reagan to Donald Trump:

There’s all this nostalgia about Ronald Reagan. Ronald Reagan announced his candidacy in Philadelphia, Mississippi, the site for where three civil rights workers were killed by white supremacists.

There are many other sources that assert this as fact – see, for example, this Huffington Post column from April by Nicolaus Mills, Professor of American Studies, Sarah Lawrence College:

[I]n going to Patchogue, Long Island this coming Thursday to speak at a controversial Republican fundraiser, Trump is taking a page out of the Ronald Reagan playbook. He’s following the path that Reagan took in 1980 when he began his presidential campaign at the Neshoba County Fair in Philadelphia, Mississippi.

Long Island? Forget it, he’s rolling. More examples from one presidential cycle to the next can be found from David Greenberg at Slate, William Raspberry in the Washington Post, Paul Krugman and Bob Herbert in the New York Times, and so on. Wikipedia even has a page for “Reagan’s Neshoba County Fair “states’ rights” speech”.

Where to begin? This particular canard has so many things wrong with it, I feel obligated to set them all down in sequence. Hopefully, doing so here will – at least for a little while – collect the context in one place.

Continue reading No, Ronald Reagan Didn’t Launch His 1980 Campaign in Philadelphia, MS

No, Donald Trump Can’t “Burn It Down.” Washington Would Go On The Same.

RS: No, Donald Trump Can’t “Burn It Down.” Washington Would Go On The Same.

The wrong man for the job.

Of all the arguments made in favor of a vote for Donald Trump to be President of the United States, or at any rate the Republican nominee, probably the most seductive is the argument that Trump will “burn it down”: replace the business-as-usual Washington political establishment with a bull-in-a-china-shop outsider who will do something different. A great many Americans across the political spectrum are deeply frustrated with our system, for many reasons – some of them very good reasons, others understandable ones. Trump speaks to their frustrations, which is a major reason why he has won 37.1% of the popular vote so far in the Republican primaries; so does Bernie Sanders, which is a major reason why Sanders has won 41.1% of the popular vote so far in the Democratic primaries. But even setting aside the many reasons why Trump is highly unlikely to win a general election, anyone who understands the problems with how Washington works also knows that Trump is almost uniquely unsuited to actually change them.

Continue reading No, Donald Trump Can’t “Burn It Down.” Washington Would Go On The Same.

The Vindication of Rick Perry

RS: The Vindication of Rick Perry

The Court of Criminal Appeals of Texas, the state’s highest criminal court, today threw out the entirety of the bogus criminal indictment against former Governor Rick Perry. The indictment was always a farce, and worse. Farce, because it suggested that Democrats would go much further than Republicans ever would to destroy a political opponent; worse, because it actively sought to criminalize good government by charging Perry with a crime for attempting to use his power of the purse to compel Democrats to get rid of a corrupt, alcoholic District Attorney who tried to abuse her office to get out of a drunk driving rap. The entire episode is a vivid reminder of why Rick Perry has been one of this nation’s most admirable leaders over the course of his career, and a man who deserved better in his runs at national office.

A lower appeals court had thrown out half of the indictment, but the Court of Criminal Appeals opinion disposes of the whole abusive case, and is worth reading if you’re into the kinds of separation of powers issues that Justice Scalia championed for years on the U.S. Supreme Court, and which the Texas courts take more seriously as a result of explicit language in the Texas Constitution:

The powers of the Government of the State of Texas shall be divided into three distinct departments, each of which shall be confided to a separate body of magistracy, to wit: Those which are Legislative to one; those which are Executive to another, and those which are Judicial to another; and no person, or collection of persons, being of one of these departments, shall exercise any power properly attached to either of the others, except in the instances herein expressly permitted.

The court began by ruling that it would make an exception to its normal rules regarding “as applied” pretrial constitutional challenges to an indictment (i.e., arguments that the statute was unconstitutional only as it applies to this situation, not as to every possible set of facts) because of the importance of separation of powers to good government:

If a statute violates separation of powers by unconstitutionally infringing on a public official’s own power, then the mere prosecution of the public official is an undue infringement on his power. And given the disruptive effects of a criminal prosecution, pretrial resolution of this type of separation of powers claim is necessary to ensure that public officials can effectively perform their duties.

Turning to Count I of the indictment’s charge that Perry misused public money by vetoing the budget of the DA’s Public Integrity Unit in order to require it to show some public integrity of its own, the court emphasized that the public purposes to which a veto is put cannot be criminalized without destroying the veto power:

The Legislature cannot directly or indirectly limit the governor’s veto power. No law passed by the Legislature can constitutionally make the mere act of vetoing legislation a crime…the governor cannot by agreement, on his own or through legislation, limit his veto power in any manner that is not provided in the Texas Constitution…When the only act that is being prosecuted is a veto, then the prosecution itself violates separation of powers…A governor could be prosecuted for bribery if he accepted money, or agreed to accept money, in exchange for a promise to veto certain legislation, and a governor might be subject to prosecution for some other offense that involves a veto. But the illegal conduct is not the veto; it is the agreement to take money in exchange for the promise.

Count II charged Perry with “coercion of a public servant” for threatening the veto before he issued it, in order to pressure the DA to step down, as she should have. The lower appeals court had concluded that this statute, applied in this manner, would be massively overbroad in criminalizing completely legitimate politics:

The court of appeals recited a number of hypothetical situations offered by Governor Perry to illustrate the improper reach of the statute:

• A manager could not threaten to fire or demote a government employee for poor performance.
• A judge could not threaten to sanction an attorney for the State, to declare a mistrial if jurors did not avoid misconduct, or to deny warrants that failed to contain certain information.
• An inspector general could not threaten to investigate an agency’s financial dealings.
• A prosecutor could not threaten to bring charges against another public servant.
• A public university administrator could not threaten to withdraw funding from a professor’s research program.
• A public defender could not threaten to file a motion for suppression of evidence to secure a better plea bargain for his client.

The court agreed that the statute would indeed criminalize these acts. The court also offered its own hypotheticals: that the statute would appear to criminalize a justice’s threat to write a dissenting opinion unless another justice’s draft majority opinion were changed, and the court’s clerk’s threat, when a brief is late, to dismiss a government entity’s appeal unless it corrects the deficiency.

A cynic would note that these examples cut rather too close to home for the judges.

The Court of Criminal Appeals agreed that the First Amendment to the U.S. Constitution would be violated by the prosecutor’s broad view of what could be criminalized in a public official’s veto threats. The court noted that more specific situations of real misconduct like bribery were already covered by other statutes, and added its own list of real-world political give-and-take (which it linked to news reports of ordinary Texas politics) that would become crimes:

Th[e statute covers officials who] include[] the Governor, Attorney General, Comptroller, Secretary of State, Land Commissioner, tax-assessor collectors, and trial judges. Many threats that these public servants make as part of the normal functioning of government are criminalized:

• a threat by the governor to veto a bill unless it is amended,
• a threat by the governor to veto a bill unless a different bill he favors is also passed,
• a threat by the governor to use his veto power to wield “the budget hammer” over a state agency to force necessary improvements,
• a threat by the comptroller to refuse to certify the budget unless a budget shortfall is eliminated,
• a threat by the attorney general to file a lawsuit if a government official or entity proceeds with an undesired action or policy,
• a threat by a public defender to file, proceed with, or appeal a ruling on a motion to suppress unless a favorable plea agreement is reached,
• a threat by a trial judge to quash an indictment unless it is amended.

Of these, the only example involving anything unusual is the one in which the comptroller actually followed through with her threat not to certify the budget. At least some of these examples, involving the governor and the attorney general, involve logrolling, part of “the ‘usual course of business’ in politics.”

Another indication of the pervasive application that the statute has to protected expression is that the last example we listed above occurred in this very case. Concluding that quashing Count II would be premature, the trial court ordered the State to amend Count II of Governor Perry’s indictment. But a trial court has no authority to order the State to amend an indictment; the State has the right to stand on its indictment and appeal any dismissal that might result from refusing to amend. The trial court’s order that the State amend the indictment was, in practical terms, a threat to quash Count II if it were not amended. And the trial court’s exact words are of no moment because the statute refers to a threat “however communicated.”

The regular and frequent violation of the statute by conduct that is protected by the First Amendment suggests that the statute is substantially overbroad.

In theory, because the dismissal of Count II was on federal Constitutional grounds, the prosecutor could appeal that ruling to the 8-member U.S. Supreme Court, but it appears that this is the end of the line. Rick Perry stood his ground for honest government and was branded a criminal for doing so, long enough to help hobble his 2016 Presidential campaign. Everyone involved in that effort should be ashamed of themselves. But tonight, Governor Perry can hold his head high, as he has been completely vindicated.

Bomb Aladdin!

RS: PPP Poll Shows Why Issue Polling Is So Unreliable

One of the favorite shticks of Democrat pollster Public Policy Polling (PPP) is to ask questions designed to make Republican voters look bad. This kind of “troll polling” flatters all the usual sorts of people who love to laugh at what yokels the GOP’s supporters are, and as yet no Republican-leaning pollster has gotten into the regular business of giving Democrats a taste of the same medicine. If the last few years have taught us anything, it’s not to trust individual polls that can’t be checked against a polling average, but by definition these are all one-off polls. But there’s a deeper issue here that the latest PPP trolling question illustrates: that average Americans are far too trusting of pollsters, and the ability of pollsters to exploit that trust shows why polling on individual issues is untrustworthy.

Here’s the latest poll question that has PPP’s followers floating on a cloud of smug this morning:

Continue reading Bomb Aladdin!

How and Why Ronald Reagan Won

RS: How and Why Ronald Reagan Won

Fifty-one years ago today in Los Angeles, a 53-year-old political amateur, Ronald Reagan, gave a half-hour nationally televised speech, “A Time For Choosing,” on behalf of Barry Goldwater’s campaign in the following week’s presidential election. Sixteen years later, Reagan would win 44 states and an almost double-digit popular vote margin of victory, kicking off the most successful and conservative Republican presidency in U.S. history, leading to a 49-state landslide in 1984 and the election of his Vice President for a “third Reagan term” in 1988, the only time in the past 70 years that a party has held the White House for three consecutive terms.

Given the extent to which Reagan’s legacy still dominates internal debates within the GOP and the conservative movement, it’s worth asking ourselves: What did he accomplish? How did he do it? And what can we learn from him today?

Continue reading How and Why Ronald Reagan Won

Donald Trump Tries To Backtrack After Insulting Iowa Voters As Brain-Damaged Corn-Huffers

RS: Donald Trump Tries To Backtrack After Insulting Iowa Voters As Brain-Damaged Corn-Huffers

“Will this be the gaffe that finally sinks Donald Trump” has been a popular parlor game for some months now, and while polling shows that Trump is accumulating unfavorables and hard-core opponents in Iowa and seems to be stalling in persuading any additional supporters, his long-awaited collapse in the polls has yet to materialize and increasingly looks like it is more likely to be a gradual bleed than the implosion of a supernova. Thus, Trump has survived insulting Sen. John McCain (R-AZ)’s war service, badly misunderstanding Christianity, making crude remarks about Megyn Kelly, going full 9/11 Truther, and repeatedly embracing far-left-wing talking points and positions on a whole host of issues.

But in all Trump’s feuds, smack-talking and insults, the one thing he had not done previously was insult the voters. Until today:


“@mygreenhippo #BenCarson is now leading in the #polls in #Iowa. Too much #Monsanto in the #corn creates issues in the brain? #Trump #GOP”

Now, if you’re not real familiar with Twitter, these are not Trump’s own words – he is quoting, with apparent approval, for his audience of 4.6 million Twitter followers, a tweet by a Twitter user named @mygreenhippo whose Twitter bio links to the website http://www.sexaddiction.tv, a link I dare not click on to find out what it leads to. But leave aside who the original tweeter is (two weeks ago Trump retweeted a Dutch white supremacist, which is sadly characteristic of a small but extraordinarily vocal subset of his online fans) – I’m quite sure that Trump pays no attention to the identity of people tweeting at him, and while doing so would be the wiser course, it’s not really his responsibility to investigate.

But Trump rather clearly was endorsing the sentiment: that Carson pulling ahead of him 28-20 in one poll (the latest Quinnipiac Iowa poll, the first Iowa poll in three weeks) means that something is wrong with Iowa voters, who have therefore earned a vintage Trump put-down. If you think insulting the voters of Iowa is no big deal, ask former Iowa Congressman Bruce Braley, whose bid for the Senate last year was dramatically upended by video of him deriding Chuck Grassley as an Iowa farmer. Or ask Scott Walker, who in March fired strategist Liz Mair over tweets critical of Iowa’s caucus and voters that predated her hiring by Walker. Maybe Trump genuinely shares this level of scorn for Iowans – the man’s a Manhattan real estate mogul, after all, and his speaking style suggests a man who believes he is always putting one over on you – but just as likely, he was simply slipping into his typical pattern of handing out schoolyard insults to anyone who disrespects The Donald.

Moreover, the tweet in question showcases a second of Trump’s unsavory characteristics, his tendency to embrace any old conspiracy theory, in this case fear of Monsanto-produced GMOs, a popular bugaboo on the anti-science Left. Smearing the voters and diving into the left-wing fever swamps is an impressive twofer.

Perhaps recognizing that this was a disastrously poor decision, Trump – who is famous for never apologizing for anything – three hours later offered the closest thing to an apology he is likely to deliver in this campaign:

The young intern who accidentally did a Retweet apologizes.

Now, while many politicians do indeed have interns tweeting, Trump has rather clearly been doing his own tweeting, unfiltered and in his own distinctive voice, for years now, and has given every indication in interviews that he’s the man with his finger on the Tweet button. Maybe the tweet was an intern’s, maybe not, and maybe the “apologizes” is intentionally tongue-in-cheek a la Monty Python, but characteristically, Trump won’t take responsibility directly for anything.

Maybe the more interesting question is whether we will see more of this kind of reaction as more bad polling news arrives in the future. As Noah Rothman notes, Trump’s campaign message at this point is hugely dependent on bragging about the polls, such that bad polling news could feed on itself and undermine the whole basis of his appeal:

Much of Trump’s extemporaneous stump speeches focus on his roost at the top of polls of Republican primary voters. He contends ad nauseam that the United States is in decline and does not “win anymore.” You’re expected to accept the premise and choose not to ask for specifics about what has been lost in the ill-defined contest. “We’ll have so much winning, you’ll get bored with winning,” Trump adorably quipped…Tragically for Trump, however, he might soon be robbed of his claim to be the bearer of endless victories. This leads us to the big question: Can the Donald Trump campaign endure a loss?

Jeb strategist Mike Murphy – even discounting for his obvious self-interest – makes a similar point:

[N]othing changes like momentum from polling. I often joke that if I ever had the horrible, malicious job of being Head of the PRC’s Intelligence Service and they said, “All right, here’s $20 billion, screw around with the U.S.,” one of the first things I’d go do is bribe media pollsters, because you totally control the thinking of the D.C. press corps based on polls. Right now, if four polls had come out saying Trump at seven and Jeb at 29, all the media commentary—without either guy changing a thing they’re doing—would be the exact opposite. Well, Jeb’s low-key style is clearly resonating with voters, it’s exactly what people are looking for, I can just hear it now. Well, Trump’s bombastic style clearly has backfired, we could see… And by the way, the same people would be totally comfortable completely switching their opinions in a minute because most of them are lemmings to these, in my view, completely meaningless national polls.

Consider where the RCP Iowa polling average stood in late July:

Walker 18.8
Bush 9.0
Paul 8.5
Trump 8.5
Carson 8.3
Huckabee 7.5
Cruz 7.3
Rubio 6.8
Santorum 4.3

Today, Walker is out, and Jeb, Paul, Huckabee and Santorum are down to less than half their poll averages from late July. In 2012, when the Iowa Caucuses were a month earlier (January 3), Herman Cain was over 30 in the RCP average in late October, Newt Gingrich was at 31 on December 11, and on December 26, 2011, Rick Santorum stood in sixth place at 7.7% of the vote. Yet Santorum won. And if Iowa polls are dicey and volatile, national and later-state polls are even more useless, in part because they are affected by what happens in Iowa and New Hampshire. The Washington Post’s “Past Frontrunners” list notes where some past frontrunners who went on to lose stood in the national polls at this point in the race:

2004: Wesley Clark +5
2008 (D): Hillary Clinton +27.3
2008 (R): Rudy Giuliani +9.2
2012: Herman Cain +0.5

And look at what happened in the South Carolina polls in 2012 after Iowa and then New Hampshire, when Newt Gingrich surged late from well behind to win the state.

Here, there are reasons to think more bad news in Iowa may be headed Trump’s way, which could wash out the last 0.7 points of his lead in the RCP average in Iowa. Rothman notes that Iowa is the one place where Trump actually has negative ads running against him, a $1 million Club for Growth ad campaign (I noted last month that the Summer of Trump poll surge was partly dependent on the fact that nobody was in the field running ads yet). And organizing is key in Iowa, yet Trump is the only candidate in the race who hasn’t even bothered to purchase a voter file, and his “campaign has spent more on hats and T-shirts than on field staff members in Iowa, New Hampshire and South Carolina.” So likely-voter screens alone may sap his standing as the polling grows more rigorous. J. Ann Selzer’s Des Moines Register poll, the most respected gold-standard survey of Iowa, will be announcing the results of another Iowa poll Friday, and if it similarly shows Trump out of the lead, it will bear watching whether he has another spasm of lashing out at the voters.

Laura Ingraham Gets Punked By Donald Trump

RS: Laura Ingraham Gets Punked By Donald Trump

Donald Trump presents a threat – we can debate how big a threat, but a threat – to the conservative movement and the Republican Party. As conservatives and Republicans, we need a battle plan with an endgame that gives Trump a way out of the primaries without running a third-party campaign, and that brings his current supporters into the fold to not only support the Republican presidential nominee, but ideally help us select a more conservative nominee, party and platform. So there are limits to how much we should be attacking voters who are listening to Trump and like what they hear.

But there’s a difference between followers and leaders. If you’re in a position of leadership – elected officials, organizers, fundraisers, cable and talk radio hosts, pundits, columnists, bloggers – and you’re actively encouraging the Trump phenomenon, you are neither a conservative nor a Republican anymore and should not expect anyone to take you seriously again. Today’s case in point: Laura Ingraham.

Continue reading Laura Ingraham Gets Punked By Donald Trump

The Rise & Fall of the Confederate Flag in South Carolina

RS: The Rise & Fall of the Confederate Flag in South Carolina

South Carolina Governor Nikki Haley, in the aftermath of the Charleston church shooting, held a press conference Monday with Sen. Tim Scott (R-SC), Sen. Lindsey Graham (R-SC), and legislators from both parties calling for the removal of the Confederate flag from its place on the grounds of the state capitol. This is a good thing. Despite persistent efforts to use the flag as a partisan club, it is worth recalling some history on the matter.

Continue reading The Rise & Fall of the Confederate Flag in South Carolina

The Breakers Broke: A Look Back At The Fall 2014 Polls

RS: The Breakers Broke: A Look Back At The Fall 2014 Polls

As promised in my first cut after the election, here is a more detailed, by-the-numbers walk through the 2014 Senate and Governors race polling and my posts on the subject, illustrating that the election unfolded pretty much along the lines I projected on September 15, when I wrote that “[i]f…historical patterns hold in 2014, we would…expect Republicans to win all the races in which they currently lead plus two to four races in which they are currently behind, netting a gain of 8 to 10 Senate seats.”

This was not a consensus position of the models projecting the Senate races at the time; Sam Wang, Ph.D. wrote on September 9 that “the probability that Democrats and Independents will control 50 or more seats is 70%” and described a 9-seat GOP pickup as “a clear outlier event.” On September 16, the Huffington Post model had a 53% chance of the Democrats keeping the Senate, while the Daily Kos model on September 15 had the Democrats with a 54% chance of retaining their Senate majority. Nate Cohn at the New York Times on September 15 gave the GOP just a 53% chance of adding as many as 6 seats, with Republicans having just a 35% chance of winning in Iowa, 18% in Colorado and 18% in North Carolina, and a 56% chance of winning in Alaska. The Washington Post on September 14 had the Democrats favored in Alaska, with a 92% chance of winning Colorado and a 92% chance of winning North Carolina. Even Nate Silver and Harry Enten’s FiveThirtyEight Senate forecast, which was more optimistic than some of the others for Republicans at the time, gave the GOP just a 53% chance of making it to a 6-seat gain as of September 16.

In this case, at least, my reading of history was right, and was a better predictor of the trajectory of the race than the models or the contemporaneous polls they were based on. That won’t always be true; it wasn’t in the 2012 Presidential race. It may or may not prove true in the 2016 Presidential race, where the historical trends overwhelmingly favor Republicans. But after 2012 we were greeted with an onslaught of triumphalism for polls, poll averages and poll models, and what 2014 illustrates is not only that – as we already knew – the models are only as good as the polls, but also that there remains a place for analysis and historical perspective and not just putting blind faith in numbers and mathematical models without examining their assumptions (a point that some of the more cautious analysts, like Silver and Enten, tried, to their credit, to stress to their readers during the 2014 season).

It also validates my broader view that subjects like polling are best understood when you have an adversarial process of competing arguments rather than deference to a consensus of experts. Because poll analysis down the home stretch involves a high degree of emotional involvement in partisan wins and losses – and most people who get involved in arguments about polling have strong partisan preferences – it’s next to impossible to avoid the pull of confirmation bias, the tendency to credit arguments you want to see win and discredit those you want to see defeated. Certainly mathematical models and poll averages can offer a check against bias, but inevitably they also rest on assumptions that incorporate bias as well. It remains broadly true, as I pointed out repeatedly in and after the 2012 election, that liberal poll analysts and Democratic pollsters tend to do a better job in years when Democrats do well, and that conservative poll analysts and Republican pollsters tend to do a better job in years when Republicans do well, because in each case they are more likely to credit the assumptions that prove accurate. Nate Silver just published a fascinating post on how the 2014 pollsters tended to “herd” towards each other’s results, which tends to exacerbate the problem of being skewed in one or another direction in any given year – more proof of streiff’s view of the herd mentality of pollsters and Erick’s view of polls weighting towards 2012 models without an adequate baseline, and another strike against expert “consensus” thinking and in favor of the virtues of examining your assumptions. The best corrective for the reader to apply to these biases is to listen to both sides, examine the plausibility of their assumptions, and then go back later and evaluate their results.
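
To make the “herding” concept concrete, here is a minimal sketch (my illustration with made-up numbers, not Silver’s actual methodology): genuinely independent polls of the same race should scatter at least as much as raw sampling error predicts, so a spread well below that floor is the statistical fingerprint of herding.

```python
# Illustrative sketch of pollster "herding" (hypothetical numbers, not
# Silver's methodology): independent polls should scatter at least as
# much as sampling error alone predicts; a much tighter observed spread
# suggests pollsters are adjusting results toward each other.
import math
import random

def sampling_error_floor(p, n):
    # Standard deviation of a single poll's reported share, in points,
    # from pure sampling error with n respondents
    return 100 * math.sqrt(p * (1 - p) / n)

def simulate_independent_polls(p_true, n, num_polls):
    # Each poll samples n respondents independently and reports a percentage
    return [100 * sum(random.random() < p_true for _ in range(n)) / n
            for _ in range(num_polls)]

random.seed(0)
polls = simulate_independent_polls(p_true=0.48, n=600, num_polls=12)
mean = sum(polls) / len(polls)
spread = math.sqrt(sum((x - mean) ** 2 for x in polls) / (len(polls) - 1))

print(f"sampling-error floor: {sampling_error_floor(0.48, 600):.2f} points")
print(f"spread of simulated independent polls: {spread:.2f} points")
# If a real race's polls cluster far more tightly than the floor,
# herding is the likely culprit.
```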

Continue reading The Breakers Broke: A Look Back At The Fall 2014 Polls

A Sad and Desperate Attack on Chris Christie

Things Only Democrats Are Allowed To Say

The Online Left, in its customary fashion, has contrived to feign outrage at RGA Chairman Chris Christie for saying this at a Chamber of Commerce event in DC:

“Would you rather have Rick Scott in Florida overseeing the voting mechanism, or Charlie Crist? Would you rather have Scott Walker in Wisconsin overseeing the voting mechanism, or would you rather have Mary Burke? Who would you rather have in Ohio, John Kasich or Ed FitzGerald?”

Progressives claimed to be shocked, shocked by these remarks. Steven Benen of the Rachel Maddow Show Blog suggested that “Christie almost seemed to be endorsing corruption” and quoted the always credulous Norman Ornstein characterizing Christie as secretly meaning, “How can we cheat on vote counts if we don’t control the governorships?” Benen followed up with a second piece, quoting this Christie clarification:

“Everybody read much too much into that,” he said. “You know who gets to appoint people, who gets to decide in part what the rules are, I’d much rather have Republican governors counting those votes when we run in 2016 as Republicans than I would have Democrats. There was no specific reference to any laws.”

Benen’s hyperventilating conclusion?

[T]aking the two sets of Christie comments together, it’s difficult to think of a charitable interpretation… Christie…wants an elections process in which Republicans control the “voting mechanisms,” Republicans appoint the elections officials, Republicans help dictate “what the rules are” when it comes to Americans casting ballots, and Republicans are “in charge of the state when the votes are being counted.”

In other words, Christie doesn’t want a non-partisan elections process. The governor and likely presidential candidate wants the exact opposite…It might very well be the most controversial thing Christie has ever said in public. That he sees this as unimportant – his intended “clarification” only added insult to injury – speaks volumes about Christie’s cynical, partisan vision of how democracy is supposed to work.

A “non-partisan elections process,” as if the Republicans Christie named were running against…well, something other than Democrats.

Racing to outdo Benen, Brian Beutler of the New Republic wailed, “Chris Christie Just Exposed His Entire Party’s Deceitful Voter Suppression Plan,” and asserted the usual Democratic shibboleth that voter fraud or other improprieties by Democrats in the voting process are impossible and inconceivable.

The most ridiculous part of this garment-rending is the implicit suggestion that Democrats don’t say exactly the same sort of thing that Christie said – that they don’t trust the other side’s conduct of elections, and want their partisans to vote to give them a greater role in protecting their side in the process. No sentient adult could claim with a straight face that Democrats never say this – indeed, the whole point of both Benen’s and Beutler’s articles is to suggest that only Democrats should be trusted to govern the elections process. Nor does one need to look hard for evidence. Consider Bill Clinton, surely still a prominent Democrat, stumping in June for the hapless Ohio Democrat Ed FitzGerald:

“Would you rather have a governor who wants to shift the tax burden onto the middle class, has aggressively pushed this voter-suppression agenda, and done a variety of other things, or one who was an FBI agent, a mayor, a county executive …?” he asked, holding up a sheet of paper as if it were a resume.

…Bemoaning the low election turnout of 2010 that saw the narrow defeat of Democratic Gov. Ted Strickland, he said the party has to make the case that midterm elections matter just as much as presidential elections.

“[Republicans] want to make every presidential election look more like the mid-term election by restricting the electorate, and if at all possible, want to restrict the midterm elections even more,” he said. He was referring to Republican-passed legislation that, among other things, has reduced early and absentee voting days.

Democrats say this kind of thing all the time on the stump. But it goes much further than rhetoric. Democratic fundraisers have not been shy about bankrolling efforts to win control of state election offices for precisely the purpose of controlling the vote counting process. Between 2006 and 2011, the Soros-funded Secretary of State Project existed for that explicit purpose:

A small tax-exempt political group with ties to wealthy liberals like billionaire financier George Soros has quietly helped elect 11 reform-minded progressive Democrats as secretaries of state to oversee the election process in battleground states and keep Republican “political operatives from deciding who can vote and how those votes are counted.”

Known as the Secretary of State Project (SOSP), the organization was formed by liberal activists in 2006 to put Democrats in charge of state election offices, where key decisions often are made in close races on which ballots are counted and which are not.

The group’s website said it wants to stop Republicans from “manipulating” election results.

“Any serious commitment to wresting control of the country from the Republican Party must include removing their political operatives from deciding who can vote and whose votes will count,” the group said on its website, accusing some Republican secretaries of state of making “partisan decisions.”

Eventually, as is the way with such organizations, the SOSP faded after some sunlight was shed on it, but in 2014, its heirs live on:

[T]he Democratic group…iVote, [is] part of a highly partisan and increasingly expensive battle over an elected position…Thirty-nine states elect their secretary of State, and because the job includes overseeing the administration of elections, Republican and Democratic PACs have emerged to fight for control of the position. In addition to iVote, a second Democratic PAC called SOS for Democracy and a Republican group named SOS for SOS have also begun raising money for secretary of State races in November.

The election-year focus on secretaries of State results from the flood of outside political spending that began in earnest in 2012 and is now flowing to races further down the ballot. It also grows out of a wave of controversial GOP-led voter identification legislation, challenged in court by Democratic groups arguing that they are intended to disenfranchise poor and minority voters.

The PACs’ effort also is part of a growing political belief that no detail is too small to be ignored in gaining an edge on an opponent. If that means trying to elect your candidate as Ohio secretary of State so that he or she can set early voting hours in Cuyahoga County, that is worth the effort.

“It is the long game. And it’s really important. These are the kind of things that we need to do instead of sitting back and playing defense,” says Jeremy Bird, former field director for the 2012 Obama campaign and now one of the organizers of iVote.

Secretaries of State “have a pivotal role to play in how elections are run,” says Rick Hasen, a law professor at the University of California-Irvine and author of Election Law Blog. “It’s very inside baseball, it’s very esoteric, but for people who are inside it makes a big difference.”

Left-wing blog DailyKos has touted these efforts and sought to enlist its readers in funding them:

Want to make sure every vote counts? Get involved in these key races for secretary of state…While their powers vary considerably from state to state, secretaries of state have a good deal of influence over how voter ID laws are carried out, who gets to vote early, what areas may or may not have enough voting machines on election day, and who gets to stay on the voter rolls….

Secretary of state races were largely ignored for years, and they still tend to attract low voter interest. However, both parties have begun to understand how important these elections are. This cycle, the Republicans have a PAC called “SOS for SoS” that will spend millions to try and win these offices in critical states. Democrats have two main committees, “SoS for Democracy” and “iVote.” Both sides understand that it is essential to get involved now in secretary of state races now, before it is far too late. As then-Florida Secretary of State Kathrine Harris proved in 2000, these races can often resonate far beyond state lines.

For progressives looking to fight back in the War on Voting, this is the central battleground.

It seems that Benen and Beutler believe that it’s entirely legitimate for Democrats to want partisan Democratic control of the voting mechanism, and to organize, campaign and fundraise to ensure partisan Democratic control of the voting mechanism, but wholly illegitimate when Republicans do the same thing. This is premised not only on the idea that Democrats are trustworthy vote-counters and Republicans are not, but also, at the level of lawmaking, on their assumption that there can be no possible legitimate policy debate over the balance between ensuring the integrity of elections by preventing illegal votes from being cast that dilute the votes of legal voters, and ensuring the right of all legal voters to vote (once).

Which is a ridiculous position, in addition to being one that is out of step with public polls that consistently show things like voter ID to be overwhelmingly popular, even among every racial and ethnic segment of the population (voter ID was endorsed by a bipartisan national commission co-chaired by Jimmy Carter in 2005, and upheld by the Supreme Court in an opinion by the liberal Justice John Paul Stevens in 2008). American history is littered with cases of widespread fraud in the elections process; within living memory, we had the notorious 1982 Illinois governor’s race (decided by 0.14 points after the Republican candidate had led by 15 points in the polls), in which a federal investigation that resulted in 63 convictions found that at least 100,000 fraudulent votes had been cast in Chicago alone, some 10 percent of the city’s entire vote. Chicago was long so notorious for voter fraud that only yesterday, President Obama – who himself won his first election in Chicago by having his opponent thrown off the ballot through a signature-challenging process – joked to a Wisconsin audience that “You can only vote once — this isn’t Chicago, now.” Going further back, biographer Robert Caro has detailed how no less a figure than Lyndon Johnson won his first Senate election in 1948 through some fairly brazen forms of fraud:

Mr. Caro confirmed the charges made at the time by Stevenson supporters that county officials had cast the votes of absent voters and had changed the numbers on the tallies. For example, he said, Jim Wells County provided an extra 200 votes for Johnson merely by changing the 7 in “765” to a 9.

And in 1984, “Brooklyn’s Democratic district attorney, Elizabeth Holtzman, released a state grand-jury report on a successful 14-year conspiracy that cast thousands of fraudulent votes in local, state, and congressional elections….The grand jury recommended voter ID, a basic election-integrity measure that New York has steadfastly refused to implement.”

Every year, year in and year out, there are documented cases of fraud and corruption in the election process. The 2005 Baker-Carter Report noted “that the U.S. Department of Justice had conducted more than 180 investigations into election fraud since 2002. Federal prosecutors had charged 89 individuals and convicted 52 for election-fraud offenses, including falsifying voter-registration information and vote buying.”

Now, fraud on the scale of the 1982 Chicago case is not something we’re likely to see again, at least not on a regular basis. As Jim Geraghty notes, there’s an element of defeatism bordering on paranoia that surfaces among conservatives this time every year, convinced that Democrats are just going to steal the election anyway no matter what we do and no matter how many votes it takes, and that’s just not borne out by the facts. The reality is that election fraud matters only on the margins. But the margins do matter. Florida 2000 is the most famous case – the presidency turned on a 537-vote margin of victory, and Bush won the Election Day count, the automatic recount, and the legal challenges in the trial court, the intermediate appellate court and the U.S. Supreme Court, but not without the Florida Supreme Court trying to rewrite the state’s election laws to give Al Gore a shot at yet another different count, an experience that left many Republicans deeply suspicious of efforts to just keep counting the votes until a different result turned up. And that’s more or less what happened in the recounts in the 2004 Washington governor’s race and the 2008 Minnesota Senate race. You want consequential? Had Sen. Al Franken (D-MN) not won that Senate race, there would never have been a 60th vote in the Senate to pass Obamacare.

Recount mischief wasn’t the only problem in Florida in 2000:

One of the most comprehensive studies of the 2000 presidential election, “Democracy Held Hostage,” was conducted by the Miami Herald — it found that 400 votes were cast illegally in heavily Democratic Broward County when poll workers allowed voters to vote who were not on the precinct voting rolls. And another 452 were cast illegally by felons in Broward. In Volusia County — which supported Gore — 277 voters voted who were not registered, including 73 voters at predominately black Bethune-Cookman University, which voted heavily for Gore.

The Herald review of votes in 22 counties (with 2.3 million ballots) found that 1,241 ballots were cast illegally by felons who had not received clemency. Of these voters, 75% were registered Democrats. And the Herald study counted only those who had been sentenced to prison for more than a year.

The Washington race was an agony of recounts – there were three counts, and only when Democrat Christine Gregoire pulled ahead of Republican Dino Rossi after a bunch of extra ballots turned up in Democrat-controlled King County (Seattle) did they stop the count. Gregoire won by 129 votes, and subsequent investigation revealed that more convicted felons voted in that race than the margin of victory.

The 2008 Franken-Coleman race in Minnesota, again, saw the Republican Election Day leader lose to the recount-winning Democrat, and by a margin of 312 votes – but “a conservative watchdog group matched criminal records with the voting rolls and discovered that 1,099 felons had illegally cast ballots. State law mandates prosecutions in such cases; 177 have been convicted so far, with 66 more awaiting trial [as of 2012].” As Byron York noted, “that’s a total of 243 people either convicted of voter fraud or awaiting trial in an election that was decided by 312 votes.”

Then there’s the 2010 Connecticut governor’s race:

In the close governor’s race in Connecticut in 2010, a mysterious shortage of ballots in Bridgeport kept the polls open an extra two hours as allegedly blank ballots were photocopied and handed out in the heavily Democratic city. Dannel Malloy defeated Republican Tom Foley by nearly 7,000 votes statewide — but by almost 14,000 votes in Bridgeport.

Even aside from endless controversies over chicanery with poll-closing times and recount mechanisms, there’s plenty more evidence out there showing that the opportunity exists to game the system (both legally and illegally), and that people have a sufficient incentive to do so that some get caught every year – the prosecutions alone (which almost always end in convictions) illustrate that this is more than just partisan propaganda:

-Just last month, a Democratic legislator in – yes – Bridgeport was arrested and charged with 19 counts of voter fraud.

-This month, a former Democratic member of the LA City Council was convicted of voter fraud along with his wife.

-In 2013, a former Maryland Democratic congressional candidate pleaded guilty to voting illegally in Congressional elections in 2006 and 2010 while living in Florida.

-Also in 2013, a former Hamilton County, Ohio poll worker and Obama supporter pleaded guilty to “four counts of illegal voting – including voting three times for a relative who has been in a coma since 2003.” She “admitted she voted illegally in the 2008, 2011 and 2012 elections.” She was recently honored by Al Sharpton at a “voting rights” rally.

-Hans von Spakovsky summarized some of the other recent prosecutions in Monday’s Wall Street Journal:

In the past few months, a former police chief in Pennsylvania pleaded guilty to voter fraud in a town-council election. That fraud had flipped the outcome of a primary election….A Mississippi grand jury indicted seven individuals for voter fraud in the 2013 Hattiesburg mayoral contest, which featured voting by ineligible felons and impersonation fraud. A woman in Polk County, Tenn., was indicted on a charge of vote-buying—a practice that the local district attorney said had too long “been accepted as part of life” there.

-This Pocket Full of Liberty post from February rounds up other examples, including a Milwaukee man prosecuted for voting five times, 12 indictments of Georgia Democrats for absentee ballot fraud, a dozen arrests in New Jersey, and more than 80 referrals for prosecutions in Iowa.

-Soren Dayton has covered a number of these cases here at RedState, including multiple indictments and guilty pleas in a voter fraud scandal involving Democrats in Troy, New York in 2009, eight arrests by the FBI for absentee ballot fraud in Florida in 2011, a series of voter fraud convictions in Alabama, a 65-count indictment in Indiana, and these two classics (click through for the links):

My favorite example is the 2003 East Chicago (Indiana) Democratic mayoral primary. There were 32 convictions. The election results were also thrown out by the Indiana Supreme Court. Note that that last link is to a story in the Chicago Tribune, my home-town paper, that discusses the conviction of the “reform” candidate in that election, with the splendid sentence, “On Thursday, a federal judge sentenced former Mayor George Pabey to five years in prison, the third consecutive East Chicago mayor to come to grief in a federal courtroom.” This case galvanized support for a voter ID law in Indiana that was eventually argued in the US Supreme Court, where the opinion upholding the law was written by former Justice Stevens. Some noted at the time that Justice Stevens, who was normally a reliable liberal vote, grew up in Chicago.

Then there’s another favorite case, that of Ophelia Ford. Mrs. Ford is the sister of former Democratic Congressman Harold Ford, Sr., sister of former State Sen. John Ford, now serving time in federal prison for bribery, and the aunt of former Democratic Congressman Harold Ford, Jr….In this case, Mrs. Ford, a Democrat, defeated an incumbent Republican by 13 votes. The local newspaper, the Commercial Appeal, smelled something and dug. In the end, the State Senate vacated the election on a vote of 26-6, and three people pleaded guilty to felonies. In that case, the judge noted that the guilty plea actually prevented a full record of the fraud from being documented. But the guilty pleas did involve both dead and moved people voting.

-East St. Louis has had repeated issues with voter fraud, justifying its entry on this lengthy list:

Nonaresa Montgomery was found guilty by a jury late today of perjury in a trial in St. Louis Circuit Court in the St. Louis vote fraud trial…Montgomery, a paid worker who ran Operation Big Vote during the run-up to 2001 mayoral primary, …part of a national campaign — promoted by Democrats — to register more black voters and get them to vote in the November elections.

Montgomery is accused of hiring about 30 workers to do fraudulent voter-registration canvassing. They were supposed to have canvassed black neighborhoods and recorded names of potential voters to be contacted later to vote in the Nov. 7 election. And they were paid by the number of cards they filled out. Instead of knocking on doors, however, they sat down at a fast-food restaurant and wrote out names and information from an outdated voter list.

-In a 2013 New York City investigation, “undercover agents claimed at 63 polling places to be individuals who were in fact dead, had moved out of town, or who were in jail. In 61 instances, or 97 percent of the time, they were allowed to vote.”

-A recent academic study found some evidence that significant numbers of non-citizens may have voted in recent elections, although the study’s methodology suggests its findings should not be treated as conclusive.

-A 2014 analysis by the Providence Journal found that “20 of Rhode Island’s 39 municipalities, from the largest city to the smallest town, had more registered voters than it had citizens old enough to vote.”

-The new state elections director of New Mexico shocked observers in 2007 when he “recounted several conversations he’d had over the years with people who told him they’d used other people’s identities to cast multiple votes.”

-A 2011 report by the Milwaukee Police Department noted why voter fraud is so hard to detect and prosecute:

Although investigators found an “illegal organized attempt to influence the outcome of an election in the state of Wisconsin,” nothing was done to prosecute the various Democrat and liberal staffers who committed the vote fraud [because b]ased on the investigation to date, the task force has found widespread record keeping failures and separate areas of voter fraud. These findings impact each other. Simply put: it is hard to prove a bank embezzlement if the bank cannot tell how much money was there in the first place. Without accurate records, the task force will have difficulty proving criminal conduct beyond a reasonable doubt in a court of law.

-Joe Biden’s niece, who worked in New Hampshire in 2012 just for the election, voted there, and her case raised concerns about the state’s lax residency requirements.

-A 2014 North Carolina investigation of the voter rolls found that:

765 voters with an exact match of first and last name, DOB and last four digits of SSN were registered in N.C. and another state and voted in N.C. and the other state in the 2012 general election.

35,750 voters with the same first and last name and DOB were registered in N.C. and another state and voted in both states in the 2012 general election.

155,692 voters with the same first and last name, DOB and last four digits of SSN were registered in N.C. and another state – and the latest date of registration or voter activity did not take place within N.C.

I could go on and on, but you get the point.

I recently analyzed close statewide elections from 1998 to 2013 (elections for Senate and Governor as well as the statewide contests in the Presidential races) and found that, while Democrats and Republicans were split 50/50 in winning races decided by 1-4 points, Democrats won 20 out of 27 races decided by less than 1 point. Election fraud, even in combination with manipulation of the recount process and other elements of control of the voting mechanisms, is not the only possible explanation for this disparity; it could be partly the disparity in operational competence at getting the vote out, or it could be a statistical fluke. But the pattern is certainly one that justifiably raises concerns among Republicans about getting a fair shake at the margins of vote-counting.
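
As a gut check on the “statistical fluke” possibility, here is a minimal back-of-the-envelope sketch, assuming (idealistically) that each sub-one-point race is an independent 50/50 coin flip; on that assumption, a split of 20-of-27 or more lopsided comes up only about 1% of the time.

```python
# Back-of-the-envelope check, assuming each race decided by <1 point is
# an independent fair coin flip between the parties (an idealized null
# hypothesis for illustration, not a claim about the underlying races).
from math import comb

n, k = 27, 20  # 27 races decided by <1 point; Democrats won 20

# One-tailed binomial probability: P(at least k wins out of n) at p = 0.5
p_at_least_k = sum(comb(n, i) for i in range(k, n + 1)) / 2**n
print(f"P(>= {k} Democratic wins out of {n}) = {p_at_least_k:.4f}")  # ~0.0096
```

That settles nothing by itself – the sample is small and the races are not truly independent – but it does show why “fluke” is not a fully satisfying answer.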

There’s a structural irony here. Democrats often argue that they need more federal involvement in elections because they mistrust the states. But Republicans tend to seek state supervision of local handling of elections because of the fact that, in almost every state, Democrats tend to depend on winning big margins in areas (usually urban areas) where a lot of Democratic voters are packed together, and consequently the local officials are not just Democrats, but the kind of Democrat who has never had to answer to a Republican voter for anything. And there is a long history, going back to the early 19th Century, of urban Democratic political machines being the worst offenders in any review of electoral shenanigans. Thus, even in deep-red states, the real issue is not Republican monopoly of control over elections, but having Republicans somewhere in the process who can act as a check on local Democrats.

And the hyperbole over voter ID and other election-law measures obscures the fact that the burden such laws impose is minimal and unlikely to keep many people from the polls, as even President Obama conceded in an interview last week on Al Sharpton’s radio show:

“Most of these laws are not preventing the overwhelming majority of folks who don’t vote from voting,” Obama said during an interview with Rev. Al Sharpton. “Most people do have an ID. Most people do have a driver’s license. Most people can get to the polls. It may not be as convenient; it may be a little more difficult.”

…”The bottom line is, if less than half of our folks vote, these laws aren’t preventing the other half from not voting,” Obama said. “The reason we don’t vote is because people have been fed this notion that somehow it’s not going to make a difference. And it makes a huge difference.”

This is why Obama’s own Justice Department is reduced to arguing that eliminating same-day registration causes black voters to stay home because they “tend to be . . . less-educated voters, tend to be voters who are less attuned to public affairs” and early voting is “well situated for less sophisticated voters, and therefore, it’s less likely to imagine that these voters would — can figure out or would avail themselves of other forms of registering and voting.”

None of this is to say that these issues are all one-sided in favor of the Republican arguments. There are many aspects of voting law, election law and election practice that involve weighing competing concerns about integrity versus access, about low-tech human error versus less transparent and more tamper-prone machine counting, about voter convenience versus taxpayer expense (where I live in New York City, we have vast numbers of polling places and a cop or more at every one, but you couldn’t possibly afford to run the system like that if we had early voting, much less weeks of it; and every day of early voting multiplies the expense and burden of any system to supervise the integrity of the vote). And there’s a fair argument that Republicans around the country have been too easily satisfied with pushing voter ID laws as a solution to in-person voter fraud, without giving adequate attention to the integrity of mail-in or absentee ballots, which present a greater risk of fraud and tend to result in more prosecutions.

But then, Republicans and conservatives aren’t the ones whose arguments depend on the assumption that the other side has no legitimate case to make and no legitimate role in these debates. Make no mistake: that is the assumption at the core of the hysteria directed at Chris Christie for daring to say what Democrats say and do constantly.

Originally posted at RedState

Mid-September Polls Are Not The Last Word On Senate Races

RS: Mid-September Polls Are Not The Last Word On Senate Races
The perennial question about election polls is back again, if ever it left: how far can we trust them? Should we disregard all other evidence but what the current polling of individual Senate races tells us – which is, at this writing, that if the election were held today, Republicans would gain 6 seats in the Senate to hold a narrow 51-48 majority? As usual, a little historical perspective is in order. It is mid-September, with just over seven weeks to Election Day, and as discussed below, all the fundamental signs show that this is at least a mild Republican “wave” year. A review of the mid-September polls over the last six Senate election cycles, all of which ended in at least a mild “wave” for one party, shows that it is common for the “wave party” to win a few races in which it trailed in mid-September – sometimes more than a few, sometimes races in which the other side appeared to hold substantial leads, and most frequently against the other party’s incumbents. By contrast, it is very uncommon for the wave party to lose a polling lead, even a slim one, after mid-September; it has happened only three times, one of those involved a tie rather than a lead, and another involved the non-wave party replacing its candidate on the ballot with a better one. If these historical patterns hold in 2014, we would therefore expect Republicans to win all the races in which they currently lead plus two to four races in which they currently trail, netting a gain of 8 to 10 Senate seats.

73 Rules For Running For President As A Republican

RS: 73 Rules For Running For President As A Republican

We do not yet know who the Republican presidential nominee will be in 2016. We do not even know for certain who the candidates will be, although several are visibly positioning themselves to run. We all have our own ideas about who should run and what the substance of their platforms should be. But even leaving those aside, it’s possible to draw some lessons from the past few GOP campaign cycles and offer some advice that any prospective candidate should heed, the sooner the better. Some of these rules are in a little tension with each other; nobody said running for President was easy. But most are simply experience and common sense.

1-Run because you think your ideas are right and you believe you would be the best president. Don’t stay out because your chances are slim, and don’t get in because someone else wants you to. Candidates who don’t have a good reason for running or don’t want to be there are a fraud on their supporters.

2-Ask yourself what you’re willing to sacrifice or compromise on to win. If there’s nothing important you’d sacrifice, don’t run; you will lose. If there’s nothing important you wouldn’t, don’t run; you deserve to lose.

3-If you don’t like Republican voters, don’t run.

4-Don’t start a campaign if you’re not prepared for the possibility that you might become the frontrunner. Stranger things have happened.

5-If you’ve never won an election before, go win one first. This won’t be the first one you win.

6-Winning is what counts. Your primary and general election opponents will go negative, play wedge issues that work for them, and raise money wherever it can be found. If you aren’t willing to do all three enthusiastically, you’re going to be a high-minded loser. Nobody who listens to the campaign-trail scolds wins. In the general election, if you don’t convey to voters that you believe in your heart that your opponent is a dangerously misguided choice, you will lose.

7-Pick your battles, or they will be picked for you. You can choose a few unpopular stances on principle, but even the most principled candidates need to spend most of their time holding defensible ground. If you have positions you can’t explain or defend without shooting yourself in the foot, drop them.

8-Don’t be surprised when people who liked you before you ran don’t like you anymore. Prepare for it.

9-Be sure before you run that your family is on board with you running. They need to be completely committed, because it will be harder than they can imagine. Related: think of the worst possible thing anyone could say about the woman in your life you care about the most, and understand that it will be said.

10-You will be called a racist, regardless of your actual life history, behavior, beliefs or platform. Any effort to deny that you’re a racist will be taken as proof that you are one. Accept it as the price of admission.

11-Have opposition research done on yourself. Have others you trust review the file. Be prepared to answer for anything that comes up in that research. If there’s anything that you think will sink you, don’t run.

12-Ask yourself if there’s anything people will demand to know about you, and get it out there early. If your tax returns or your business partnerships are too important to disclose, don’t run. (We might call this the Bain Capital Rule).

13-Realize that your record, and all the favors you’ve done, will mean nothing if your primary opponent appears better funded.

14-Run as who you are, not who you think the voters want. There’s no substitute for authenticity.

15-Each morning, before you read the polls or the newspapers, ask yourself what you want to talk about today. Talk about that.

16-If you never give the media new things to talk about, they’ll talk about things you don’t like.

17-Never assume the voters are stupid or foolish, but also don’t assume they are well-informed. Talk to them the way you’d explain something to your boss for the first time.

18-Handwrite the parts of your platform you want voters to remember on a 3×5 index card. If it doesn’t fit, your message is too complicated. If you can’t think of what to start with, don’t run.

19-Voters may be motivated by hope, fear, resentment, greed, altruism or any number of other emotions, but they want to believe they are voting for something, not against someone. Give them some positive cause to rally around beyond defeating the other guy.

20-Optimism wins. If you are going to be a warrior, be a happy warrior. Anger turns people off, so laugh at yourself and the other side whenever possible, even in a heated argument.

21-Ideas don’t run for President; people do. If people don’t like you, they won’t listen to you.

22-Your biography is the opening act. Your policy proposals and principles are the headliner. Never confuse the two. The voters know the difference.

23-Show, don’t tell. Proclaiming your conservatism is meaningless, and it’s harder to sell to the unconverted than policy proposals and accomplishments that are based on conservative thinking.

24-Being a consistent conservative will help you more than pandering to nuts on the Right. If you can’t tell the difference between the two, don’t run.

25-Winning campaigns attract crazy and stupid people as supporters; you can’t get a majority without them. This does not mean you should have crazy or stupid people as your advisers or spokespeople.

26-Principles inspire; overly complex, specific plans are a piñata that can get picked to death. If you’re tied down defending Point 7 of a 52-point plan that will never survive contact with Congress anyway, you lose. Complex plans need to boil down to the principles and incentives they will operate on. The boiling is the key part.

27-Be ready and able to explain how your plans benefit individual voters. Self-interest is a powerful thing in a democracy.

28-If you haven’t worked out the necessary details of a policy, don’t be rushed into releasing it just because Ezra Klein thinks you don’t have a plan. Nobody will care that you didn’t have a new tax plan ready 14 months before Election Day.

29-Don’t say things that are false just because the CBO thinks they’re true.

30-If you don’t have a position on an issue, say that you’re still studying the issue. Nobody needs an opinion on everything at the drop of a hat, and you’ll get in less trouble.

31-When in doubt, go on the attack against the Democratic frontrunner rather than your primary opponents. Never forget that you are auditioning to run the general election against the Democrat, not just trying to be the least-bad Republican.

32-Attacking your opponents from the left, or using left-wing language, is a mistake no matter how tempting the opportunity. It makes Republican voters associate you with people they don’t like. This is how both Newt Gingrich and Rick Perry ended up fumbling the Bain Capital attack.

33-Be prepared to defend every attack you make, no matter where your campaign made it. Nobody likes a rabbit puncher. Tim Pawlenty’s attack on Romneycare dissolved the instant he refused to repeat it to Romney’s face, and so did his campaign.

34-If your position has changed, explain why the old one was wrong. People want to know how you learn. If you don’t think the old one was wrong, just inconvenient, the voters will figure that out.

35-If a debate or interview question is biased or ridiculous, point that out. Voters want to know you can smell a trap. This worked for Newt Gingrich every single time he did it. It worked when George H.W. Bush did it to Dan Rather. It will work for you.

36-Cultivate sympathetic media, from explicitly conservative outlets to fair-minded local media. But even in the primaries, you need to engage periodically with hostile mainstream media outlets to stay in practice and prove to primary voters that you can hold your ground outside the bubble.

37-Refuse to answer horserace questions, and never refer to “the base.” Leave polls to the pollsters and punditry to the pundits. Mitt Romney’s 47% remark was a textbook example of why candidates should not play pundit.

38-Hecklers are an opportunity, not a nuisance. If you can’t win an exchange with a heckler, how are you going to win one with a presidential candidate? If you’re not sure how it’s done, go watch some of Chris Christie’s YouTube collection.

39-Everywhere you go, assume a Democrat is recording what you say. This is probably the case.

40-Never whine about negative campaigning. If it’s false, fight back; if not, just keep telling your own story. Candidates who complain about negative campaigning smell like losing.

41-“You did too” and “you started it” get old in a hurry. Use them sparingly.

42-If you find yourself explaining how the Senate works, stop talking. If you find yourself doing this regularly, stop running.

43-Never say “the only poll that matters is on Election Day” because only losers say that, and anyway even Election Day starts a month early now. But never forget that polls can and do change.

44-Voters do not like obviously insincere pandering, but you cannot win an election by refusing on principle to meet the voters where they are. That includes, yes, addressing Hispanic and other identity groups with a plan for sustained outreach and an explanation of how they benefit from your agenda. Build your outreach team, including liaisons and advertising in Spanish-language media, early, and stay engaged as if this were the only way to reach the voters. For some voters, it is.

45-Post something as close to daily as possible on YouTube featuring yourself – daily message, clips of your best moments campaigning, vignettes from the trail. You can’t visit every voter, but you can visit every voter’s computer or phone.

46-Never suggest that anybody would not make a good vice president. Whatever they may say, everyone wants to believe they could be offered the job.

47-If you’re not making enemies among liberals, you’re doing it wrong.

48-If you don’t have a plausible strategy for winning conservative support, you’re in the wrong party’s primary.

49-The goal is to win the election, not just the primary. Never box yourself in to win a primary in a way that will cause you to lose the election.

50-Don’t bother making friends in the primary who won’t support you in the general. Good press for being the reasonable Republican will evaporate when the choice is between you and a Democrat.

51-Some Republicans can be persuaded to vote for you in the general, but not in the primary. Some will threaten to sit out the general. Ignore them. You can’t make everyone happy. Run a strong general election campaign and enough of them will come your way.

52-Don’t actively work to alienate your base during the primary. Everyone expects you to do it in the general, and you gain nothing for it in the primary.

53-Don’t save cash; it’s easier to raise money after a win than to win with cash you saved while losing. But make sure your organization can run on fumes now and then during dry spells.

54-If you’re not prepared for a debate, don’t go. Nobody ever had their campaign sunk by skipping a primary debate. But looking unprepared for a debate can, as Rick Perry learned, create a bad impression that even a decade-long record can’t overcome.

55-The Iowa Straw Poll is a trap with no upside. Avoid it. Michele Bachmann won the Straw Poll and still finished last in Iowa.

56-Ballot access rules are important. Devote resources early to learning and complying with them in every state. Mitt Romney didn’t have to face Newt Gingrich or Rick Santorum in Virginia – even though both of them live in Virginia – because they didn’t do their homework gathering signatures.

57-If you can’t fire, don’t hire. In fact, don’t run.

58-Hire people who are loyal to your message and agenda, and you won’t have to worry about their loyalty to your campaign.

59-Don’t put off doing thorough opposition research on your opponents. By the time you know who they are, the voters may have decided they’re somebody else.

60-You can afford to effectively skip one early primary. You can’t skip more than that. You are running for a nomination that will require you to compete nationally. (Call this the Rudy Giuliani Rule).

61-Use polling properly. Good polling will not tell you what to believe, but will tell you how to sell what you already believe.

62-Data and GOTV are not a secret sauce for victory. But ignoring them is a great way to get blindsided.

63-Don’t plan to match the Democrats’ operations and technology, because then you’re just trying to win the last election. Plan to beat them.

64-Political consultants are like leeches. Small numbers, carefully applied, can be good for you. Large numbers will suck you dry, kill you, and move on to another host without a backward look.

65-Never hire consultants who want to use you to remake the party. They’re not Republicans and you’re not a laboratory rat.

66-This is the 21st century. If you wouldn’t want it in a TV ad, don’t put it in a robocall or a mailer. Nothing’s under the radar anymore.

67-Always thank your friends when they back you up. Gratitude is currency.

68-Every leak from your campaign should help your campaign. Treat staffers who leak unfavorable things to the press the way you would treat staffers who embezzle your money. Money’s easier to replace.

69-Getting distance from your base in the general on ancillary issues won’t hurt you; they’ll suck it up and independents will like it. Attacking your base on core issues will alienate your most loyal voters and confuse independents.

70-If you are convinced that a particular running mate will save you from losing, resign yourself to losing because you’ve already lost.

71-Don’t pick a VP who has never served in Congress or run for president in his or her own right. Even the best Governors have a learning curve with national politics, and even the best foreign policy minds have a learning curve with electoral politics. And never steal from the future to pay for the present. Your running mate should not be a Republican star in the making who isn’t ready for prime time. In retrospect, Sarah Palin’s career was irreparably damaged by being elevated too quickly to the national level.

72-Never, ever, ever take anything for granted. Every election, people lose primary or general elections because they were complacent.

73-Make a few rules of your own. Losing campaigns imitate; winning campaigns innovate.

Reflections on the American Revolution, Part III of III: The Militia


How did thirteen colonies, with a barely functioning central government and a thrown-together, underfunded and poorly supplied army of constantly fluctuating size and composition, win the Revolutionary War? One reason was the colonies’ ability to rely on their common citizens to supplement the Continental Army with local militia. I’ve looked previously at the demographic and physical conditions and foreign alliances that shaped the war and the generals who led the armies. Let’s conclude this tour of the American Revolution with the militia.

The Militia: Americans then and now have had a romantic attachment to the citizen militia, epitomized by the Massachusetts “minutemen.” The importance of the militia as both a bulwark against tyranny and a line of national defense was, of course, famously the backdrop for the Second Amendment and other militia-related clauses in the Constitution (including allowing Congress to arm them and the President to command them at need “to execute the Laws of the Union, suppress Insurrections and repel Invasions”). Yet it was ultimately the Continental Army, not the militia, that had to do the bulk of the work needed to win the war. Nonetheless, the story of the American victory cannot be told without the militia.

Massachusetts: The militia’s finest hour came at the beginning, before there was a Continental Army: Concord and Bunker Hill. At Concord, in April 1775, the sudden appearance of the Massachusetts militia in significant force, firing largely from behind the cover of trees and stone walls, drove the (mostly inexperienced) redcoats back to Boston with surprising casualties. At Bunker Hill two months later, Massachusetts militia entrenched largely on high ground and firing from behind fortifications and stone walls inflicted a staggering casualty rate of almost 50% on the British regulars (even higher among the officer corps); the militia then beat a mostly orderly retreat when they were finally overcome. Those two battles left the British besieged in Boston, where they would remain for nearly a year until dislodged by Henry Knox’s artillery in March 1776. Bunker Hill also traumatized the British command, haunting their thinking about attacks on entrenched positions for the rest of the war. When the Continental Army was assembled to carry on the siege, much of its manpower and officer corps was drawn from the militia, including key leaders like Knox and Nathanael Greene. Moreover, the artillery that liberated Boston had been seized by militia in 1775 when Ethan Allen and Benedict Arnold, leading the Vermont militia (the Green Mountain Boys) in an expedition supported by Massachusetts and Connecticut militia, captured the lightly-defended Fort Ticonderoga. And without the militia, the army in 1775 would have been unarmed. Because the Continental Army was chronically short on supplies and had no official, standard weapon, recruits early in the war fought with whatever guns they brought to the army, either their own or those supplied by the state governments – but while that system was essential to forming an army from scratch, Washington found it unsatisfactory for carrying on the war. As a 1981 U.S. Army study described the situation:

It was the policy of the Continental Congress in 1775 to “hire” arms, which meant encouraging each new soldier to bring his own gun, a practice that had been common in militia service. Having established this policy, Congress then left the task of equipping the troops to the Commander in Chief. More often than not, however, the men arrived at camp without arms. When Washington undertook to form a Continental Army from the forces before Boston in 1775, he initiated the first of several measures designed to arm his troops. He began by seeking to retain for the use of the new Continental force the muskets that the men hurrying to the defense of their country had brought to Cambridge. He ordered that no soldier upon the expiration of his term of enlistment was to take with him any serviceable gun. If the musket was his private property, it would be appraised, and he would be given full value for it. All arms so taken and appraised were to be delivered into the care of the Commissary of Military Stores. To make doubly sure that the weapons would be retained for Army use, Washington threatened to stop the last two months’ pay due a soldier if he carried away his gun.

+++

Among the factors contributing to the shortage of arms in the spring of 1776 was the carelessness of the soldiers in maintaining their arms in good working order. An examination of the weapons of the army in New York revealed them to be in shocking condition. Washington issued an order to the regimental commanders to have the arms put in good order as soon as possible and to see that each musket was equipped with a bayonet. Those soldiers who had lost the bayonets they had been issued were to pay for new ones, and if any soldier had allowed his gun to be damaged by negligence, the cost of its repair was to be deducted from his pay. This order by no means eliminated negligence in caring for weapons. It persisted throughout the war….
To promote better care of weapons, Washington substituted a policy of purchasing arms for that of hiring them. During the first two campaigns of the war, it was the custom to encourage both the enlisted soldier and the militiaman to bring their own guns. But Washington soon came to link that policy with the lack of care the soldiers gave their muskets, for under it “a man feels at liberty to use his own firelock as he pleases.” Owners of guns took little care of them, retained them when their service expired, and even disposed of them whenever they pleased. As early as January 1776 Washington had indicated that he was ready to purchase any arms offered by a colony or an individual.

The system of hiring, however, continued until February 1777 when Washington initiated preparations for the next campaign. He informed Governor Trumbull of Connecticut that he now wanted guns purchased from owners on the account of the United States. Purchase, he wrote, would result in better care of the weapons and would eliminate many of the bad consequences of hiring arms.

There were other warning signs of the militia’s limitations in 1775 as well: the militia at Bunker Hill had strategic depth but failed to use it, being too poorly organized to bring reserve units into the fight in time, and the Green Mountain Boys didn’t linger to garrison Fort Ticonderoga once its liquor supplies had run out. An army constituted for the long haul would have to do better.

New Jersey: Problems persisted, but so did the militia’s contributions. Washington was disappointed when more New Jersey and Pennsylvania militia didn’t show up to assist his campaigns in the region between late 1776 and the summer of 1778. But the New Jersey militia played a valuable role in the series of skirmishes known as the New Jersey Forage War in the winter of 1776-77. Acting sometimes alone and sometimes with modest support from the Continental Army, the militia repeatedly staged ambushes and opportunistic attacks on British and Hessian detachments looking for food and forage for their animals, inflicting a slow bleed of casualties and leaving the enemy jittery and under-supplied: a classic guerrilla campaign, although the word hadn’t been coined yet. The New Jersey militia would eventually even draw praise from Washington, long a critic of militia, for its ongoing role in assisting Greene in turning back the final Hessian efforts in 1780 to assail Washington’s position in Morristown; Washington wrote of the militia after the Battles of Connecticut Farms and Springfield that “The militia deserve everything that can be said on both occasions. They flew to arms universally and acted with a spirit equal to anything I have seen in the course of the war.”

Saratoga: Militia were also important to the pivotal Saratoga campaign. Allen and Arnold’s capture of Fort Ticonderoga had cut the British lines of communication in two, severing Guy Carleton’s Canadian forces from the Thirteen Colonies. General Burgoyne’s expedition, marching south from Canada, was designed to turn the tables. His aim was to seize control of the Hudson River valley and link up with Howe and Clinton in New York, reuniting the British forces while cutting New England off from the rest of the colonies. It started well, as such things often do; Burgoyne seized the forts in early July and scattered the Continental Army’s forces in the region with barely a fight. But Burgoyne didn’t count on the patriot militia.

Burgoyne’s plan called for him to link up with Barry St. Leger, who was marching southeast down the Mohawk River that runs through Western and Central New York and flows into the Hudson just north of Albany. The plan – and the reason for the two British forces to march separately – was for St. Leger to gather with him the Iroquois Six Nations and the Loyalist militia. St. Leger laid siege to Fort Stanwix, which controlled the Mohawk River; to relieve the siege, local militia leader Nicholas Herkimer hastily raised about 800 militia, a few dozen Oneida Indians (one of the two Iroquois tribes that sided with the colonists) and wagonloads of supplies. St. Leger chose to meet Herkimer with a thousand men, the bulk of them Mohawk and Seneca Indians, who ambushed Herkimer as his column wound through a densely wooded ravine on August 6, 1777. The result was the savagely bloody Battle of Oriskany. The militia was caught by surprise, several key officers were killed in the opening volley, and Herkimer had his leg broken falling from his horse (he would die of the wounds a few days later). But the militia fought on, Herkimer directing the battle while propped against a tree and regrouping his men to counterattack after a downpour. The battle ended in a British victory, with enormous American casualties that broke Herkimer’s militia. But heavy losses from the battle demoralized St. Leger’s Indian allies and Loyalist militia, who had expected to play a support and ambush role and let the British and Hessians do the heavy lifting, and instead found themselves fighting a desperate, cornered militia at close quarters. Most of St. Leger’s support melted away, greatly weakening his force and leading to its ultimate failure to capture Fort Stanwix (which was relieved by Benedict Arnold on August 22).

While St. Leger was bogged down on his right, Burgoyne faced a second militia threat from his left that ultimately cost him nearly 1,000 casualties, more than 10 percent of his expedition. John Stark (a veteran of Bunker Hill who had served for a time under Washington in the Continental Army before returning home) raised approximately 2,000 New Hampshire, Massachusetts and Vermont militia in a little over a week and set out to harass Burgoyne’s advance. Burgoyne sent a detachment of Hessians – considered some of Europe’s best professional troops – to gather supplies and intercept Stark before he could do more damage or link up with the Continental Army. At the ensuing August 16, 1777 Battle of Bennington (actually located in present-day New York near Bennington, Vermont), Stark’s militia faced the Hessians in a pitched battle, albeit with the advantage that the Hessians arrived in two groups of around 600, allowing Stark to defeat them in detail with a large numerical advantage. Stark’s militia surrounded the elite Hessian dragoons holding an elevated redoubt; the Hessian commander, Friedrich Baum, was mortally wounded in a last, desperate saber charge, and hundreds of his men surrendered. Few of the Hessians made it back to Burgoyne’s army.

As Burgoyne marched south, weakened by the failure of St. Leger, the loss of the Hessians and the defection of his Native American allies and with the Americans felling trees in his path, the Continental Army under Horatio Gates was bolstered by the arrival of thousands of militia, to the point where Burgoyne may have been outnumbered more than two-to-one at the second and final Battle of Saratoga. Militia units fought in the line of battle with the Continentals at Saratoga, which rivals Yorktown as the most important American victory of the war. More important than anything the militia did at Saratoga itself, their presence on the battlefield gave weight to the Continental forces that Burgoyne could not overcome. His surrender on October 17, 1777 permanently ended the effort to divide the colonies and link up with the British forces in Quebec, and was crucial to bringing France into the war.

The South: In the South, the militia had to come more directly to the rescue of the regulars. When the British moved the focus of their offensive operations to the South at the end of 1778, they found a Continental Army much less well prepared and led than Washington’s army in the north. The British routed the defenders of Savannah in late 1778 and Charleston in May 1780, followed shortly by Tarleton’s massacre of a smaller Continental Army force at Waxhaws. Horatio Gates attempted to replicate his victory at Saratoga by rallying the militia around a new Continental Army force, but was wiped out by Cornwallis’ army at Camden on August 16, 1780 (Washington regarded Camden as another foolhardy attempt to rely on militia). Between Charleston and Camden, the British had captured over 6,000 prisoners, including most of the Continental Army left in the South. The road seemed open to claim the prizes of North Carolina and Virginia.

It didn’t work out that way. Heavy-handed Loyalist militias, first under Christian Huck and later Patrick Ferguson, combined with Tarleton’s brutality at Waxhaws, enraged the population of the Carolinas and Eastern Tennessee. The first militia victories, at Ramsour’s Mill in North Carolina in June and the killing of Huck in South Carolina in July, were small, almost spontaneous engagements (although a study of the records of the militia who fought Huck showed that a number were Continental Army veterans and most had been fighting the British in one form or another since 1775). A landmark of the growing resistance came in October 1780, when a muster of nearly a thousand militia from the Carolinas, Virginia and Tennessee cornered Ferguson in the forest at King’s Mountain near the North/South Carolina border, killing Ferguson and destroying his Loyalist militia. In November, Tarleton’s feared British Legion – including hundreds of British regulars – were bloodied and beaten by the militia at Blackstock’s Farm, South Carolina. There were scores of other, smaller ambushes and militia-on-militia engagements in this period, some with the character of a blood feud.

The militia’s victories in the Carolinas begat more American recruitment and more caution for Cornwallis, buying time for Greene to enter the southern theater in late 1780 and re-organize the regulars. But with only a small regular force of a few thousand men, Greene still needed plenty of help from the militia. At Cowpens, South Carolina on January 17, 1781, a combined force of militia and Daniel Morgan’s crack riflemen broke the back of Tarleton’s British Legion, killing or capturing more than 80% of Tarleton’s 1,150-man force and effectively ending British control over South Carolina. Probably less than half of the American force at Cowpens was Continental regulars. The major engagement of the campaign came at Guilford Court House, North Carolina on March 15, 1781, at which Greene (while nominally losing the battle) inflicted sufficient casualties to convince Cornwallis (himself down to less than 2,000 men) to fall back to Virginia, where he would consolidate his forces only to meet his great defeat. As at Saratoga, while the fiercest fighting was done by the Continental regulars, the militia were important at Guilford Court House for their sheer numbers; Greene outnumbered Cornwallis more than two-to-one with a force that was probably around 70-80% militia.

The West: Finally, the Western theater of the war was almost entirely conducted by militia; beyond Western New York and Pennsylvania, there simply wasn’t much the Continental Army could do to support operations in the West. The one time in 1781 when the army sent a detachment to assist George Rogers Clark in his campaigns in what became the Northwest Territory, it was defeated en route. This left Clark, a Virginia militia commander, to seize outposts in present-day Illinois and Indiana using Virginia and Kentucky militia. The militia also conducted both offensive and defensive campaigns in the West against the Native American tribes. (The Spanish also made use of militia in the West and South during the war, both in the defense of St. Louis and in Bernardo de Galvez’ campaigns in Louisiana and the Floridas).

The Militia, Assessed: The militia were never an adequate substitute for a regular army. Bennington and Bunker Hill notwithstanding, they were often not useful in conventional engagements, especially offensive operations. They maneuvered poorly (e.g., the failure of the militia to arrive in proper position to support the Continental Army at Germantown and Trenton), a key weakness in 18th century warfare, and when not fighting from cover like stone walls or trees they were notorious for breaking formation and running when charged by the enemy. Continental Army commanders had no end of frustration trying to get militia companies to carry out orders and assignments, or even to determine in advance how many militia would show up when mustered. Washington himself had despised the militia as useless ever since his experiences with the Virginia militia in the French and Indian War (beware of Washington quotes about the militia and the right to bear arms that you may see on the internet; several of these are apocryphal and at odds with his actual thinking). Militia units were usually more effective fighting other militia or Native Americans than regular soldiers. And being amateurs who often had families to support, they preferred to stick close to home; Clark was never able to get enough volunteers from the Kentucky militia to carry out his grand plan of a march on Detroit.

The 1779 Penobscot expedition, in which a force composed mainly of Massachusetts and Maine militia (supported by a small detachment of marines) was to make an amphibious landing in Maine and assault a British fort, was a textbook example of the kind of complex operation completely unsuited to militia: despite superior numbers compared to the enemy and some initial momentum, the unwieldy joint command co-ordinated poorly with its Continental Navy support, the Maine militia turned out in smaller numbers than expected, and the militia maintained an ineffective siege and cut and ran when counter-attacked. The commanders of the expedition, including Paul Revere, ended up being hauled before a court-martial, and Maine remained in British hands the rest of the war.

Getting the most out of militia units in battle required tactical flexibility. Daniel Morgan, at Cowpens, ordered the first line of the North Carolina militia to fire two volleys from an advance position and then make an orderly retreat to the rear, with the second line firing three volleys then doing the same; the regulars in the third line would absorb the British charge. Morgan had no faith that the militia could withstand a charge without breaking, and quipped that he made sure not to make a stand near a swamp so the militia couldn’t disappear into it at the first sign of the enemy. Herkimer, at Oriskany, had to order his men in the midst of battle to start fighting in pairs, taking turns shooting while the other reloaded, because they were vulnerable to tomahawk attacks while reloading.

But for all their drawbacks, the ability to put militia units in the field was undeniably important, at times crucial, to the colonial cause. The main reason is the balance of manpower. The British, as I noted earlier, usually had 25-30,000 soldiers to work with, of whom 22-25,000 were either British or Hessian regulars. The size of the Continental Army at various points in time can be hard to ascertain due to spotty records, desertions, illness and short enlistments, but its main body seems to have peaked at about 20,000 around the Battle of Brooklyn, and Washington usually fought with about 10-12,000 men at his larger engagements; aside from the large force assembled at Saratoga, the army rarely had more than 5,000 men in any other place, and more often the commanders outside Washington’s immediate vicinity had only a few thousand regulars to work with. The Continental Army usually fought with smaller groups of regulars than its adversaries; it lost more battles than it won, and when Washington’s main army wasn’t present, it almost never won a significant engagement without the presence of militia. The army simply couldn’t defend most of the countryside. The militia was a force multiplier that prevented the British from consolidating control, which in turn would have forced Washington to seek active battles he couldn’t win. But with the support of the militia, the Americans had the advantage: the British couldn’t easily replenish their manpower, which had to be requested from London and shipped across the ocean (this is why they relied on their own Loyalist militia), while the Americans could do so on short notice whenever local authorities felt the need, without even consulting Congress. Besides adding numbers, the militia harassed the British supply lines, another vulnerability for an army operating thousands of miles from home.

And the militia bought time. In the North, the militia confronted and bottled up the British in Boston and seized their Hudson River forts at a time when there was no regular army. In the South, the militia kept up the fight after the regulars had been crushed, buying time for Greene. In New York, the decentralized ability to rapidly raise militia companies to bleed and eventually outnumber Burgoyne’s army was essential to the pivotal Saratoga campaign after the regulars had been dispersed by Burgoyne’s advance.

The militia didn’t win the war, and would never have won it alone. But it is hard to see how there is a Yorktown, a Treaty of Paris and an independent United States without the efforts of thousands of militia from 1775 to 1782.

Reflections on the American Revolution, Part II of III: The Generals


How did America win its independence? In Part I of this essay, I looked at the population trends, foreign alliances, and equipment and weather conditions under which the American Revolution was fought. Let’s add some thoughts on the leaders of the principal combatants: the American and British generals. The American command was far from perfect – but the war could have turned out very differently if the American side had not had the advantages of leadership it did, first and foremost the singular character of George Washington.

Washington, Washington: Any history of the Revolutionary War has to consider the unique leadership of George Washington. 43 years old when he assumed command, Washington came to the war with combat leadership experience from the French and Indian War, training as a surveyor that prepared him well to deal with maps and terrain, a decade of active fox hunting that had made him an excellent horseman, and experience in the Virginia House of Burgesses that had educated him in practical politics. Physically, Washington was a man of great strength, vigor and endurance and almost supernatural good luck. Washington’s robust constitution survived smallpox, diphtheria, multiple bouts of malaria, pleurisy, dysentery (in 1755, the 23-year-old Washington had to ride to Braddock’s defeat on a padded saddle due to painful hemorrhoids), quinsy (an abscess of the tonsils that laid him out in 1779) and possibly typhoid. In the rout of the Braddock expedition, Washington had two horses shot from under him and four bullet holes in his coat, yet neither then nor at any time during his military career was Washington wounded despite often being in the thick of battle and presenting an enormously conspicuous target (one of the tallest men in the Continental Army, in the most brilliant blue uniform, mounted on horseback).

But he had his weaknesses: he’d never had command of anything as large, diverse and complex as the Continental Army (whose very name bespoke its ambitions), and while Washington was smart, adaptable, detail-oriented and sometimes inspired, he was not a naturally brilliant military mind: his errors throughout the New York campaign would illustrate that he was no Napoleon, just as – in more fateful ways – Napoleon was no Washington.

I’ve noted before the success of Washington’s frequent tactic of hit-and-run attacks followed by retreats and more retreats. Washington’s overall long-term strategy ended up being one of simply enduring in the field, never putting his whole army at risk until he had the enemy trapped. But it’s crucial to always bear in mind that this strategy ran contrary to everything in Washington’s temperament. By nature, he was an aggressive, audacious military man who loved the offensive. Frequently throughout the war, Washington developed complex and daring offensive plans. Sometimes, as at Trenton in December 1776 and the following year’s effort at a coup de main at Germantown in October 1777, he put those plans in action. The attack at Germantown was designed to catch the 9,000-man British force encamped there by surprise with a numerically superior force and destroy it while it was divided from the rest of Howe’s army quartered at Philadelphia. The plan, calling for four columns to fall on the British more or less simultaneously, was too complex and ambitious (the largest Continental Army column arrived late and the two militia columns had little effect) and ended in defeat. But like the 1968 Tet Offensive, it was a morale and propaganda winner for the Americans just to mount such an assault. It raised the Continental Army’s morale, stunned the British command (which had thought Washington beaten and in retreat after the prior month’s defeat at Brandywine that had cleared the way for the occupation of Philadelphia) and, together with the victory at Saratoga, it helped persuade the French that the American war effort was serious and had staying power. Washington’s audacity on this occasion paid dividends even in defeat.

But at least as often, Washington allowed his war council (composed of his subordinates and, after the arrival of the French, Gen. Rochambeau, who made clear that he would defer to Washington’s ultimate decisions) to talk him out of his own overly ambitious plans even after he had drawn them up at length: a hazardous amphibious assault on Boston during the 1775-76 siege (complete with, in one version of the plan, a vanguard of soldiers on ice skates attacking across the frozen harbor); a march on the British war purse at New Brunswick with an army exhausted after Trenton and Princeton in January 1777; an attack on New York in 1780 or 1781 when Rochambeau wanted to chase Cornwallis to Yorktown instead. His willingness to listen to the counsel of cooler heads is what separated the practical Washington from more tactically brilliant but ultimately undone-by-hubris generals from Napoleon to Robert E. Lee.

Relatedly, Washington learned from his mistakes. The desire for decisive battle and protection of politically important turf had led him to risk annihilation of the largest army he would have during the war at the Battle of Brooklyn; thereafter, he would not stage a do-or-die stand to protect any particular spot of land. Washington had signed off on the disastrous 1775 invasion of Quebec; he would resist all further entreaties to stage a second offensive.

If Washington’s decision-making was sometimes imperfect, his temperament and leadership were flawless. Washington was neither deluded nor emotionless; time and again, his correspondence showed him verging on despondency at the condition of his army and the perils it faced, and we know he was capable of towering rages. But in the presence of his men (who were apt to get too high after heady victories and too low in defeat) and occasionally his adversaries, he never projected anything but steady confidence and endurance. Washington was not, perhaps, a nice man; even his close associates tended to regard him as the same distant marble statue we see him as today (Hamilton once bet a colleague at the Constitutional Convention a dinner if he’d go slap Washington on the back and act familiar with him; Washington pried his hand off and froze him with such a stare he told Hamilton afterwards he wouldn’t try it again for anything). But Washington put tremendous, conscious effort into acting the part of a great man at all times in order to become one. Washington had his vices, chief among them his ownership of slaves, but his virtues were almost a textbook of the qualities needed of the leader of a long, dangerous struggle through major adversity: perseverance, discipline of himself and others, attention to detail, fairness, integrity, resourcefulness, physical courage, endurance of hardship, and an unblinking practicality. There’s a great story about Washington breaking up a snowball fight that escalated into an enormous brawl between soldiers from Massachusetts and newly-arrived Virginia riflemen in Harvard Yard during the siege of Boston, possibly with racial overtones due to the presence of black soldiers in the Massachusetts regiment; a young observer recounted:

Reinforced by their friends, in less than five minutes more than a thousand combatants were on the field, struggling for the mastery.
At this juncture General Washington made his appearance, whether by accident or design I never knew. I only saw him and his colored servant…both mounted. With the spring of a deer, he leaped from his saddle, threw the reins of his bridle into the hands of his servant, and rushed into the thickest of the melee, with an iron grip seized two tall, brawny, athletic, savage-looking [Virginia] riflemen by the throat, keeping them at arm’s length, alternately shaking and talking to them.
In this position the eye of the belligerents caught sight of the general. Its effect on them was instantaneous flight at the top of their speed in all directions from the scene of the conflict. Less than fifteen minutes time had elapsed from the commencement of the row before the general and his two criminals were the only occupants of the field of action.

You didn’t mess with George Washington. But men would follow him anywhere.

Greene and Knox: The Continental Army’s leaders were a mixed bag, and more than a few of those who served with distinction are largely forgotten today. If there are two names besides George Washington that every American schoolchild should learn from the Revolutionary War, it’s Nathanael Greene and Henry Knox. Of all the Continental Army’s generals, only Washington, Greene and Knox served the entire duration of the war. While men like Charles Lee, Horatio Gates, Ethan Allen and Benedict Arnold contributed more than their share of ego, drama, and backbiting, both Greene and Knox were unswervingly, uncomplicatedly loyal both to Washington and the cause he fought for. In the long run, that served them better than any thirst for glory. Greene was offered the post of Secretary of War under the Articles of Confederation; when he declined, Knox took the job and continued to hold it under Washington’s presidency.

As soldiers, Greene and Knox were emblematic of one of the major characteristics of early America: they were self-educated, learning most of what they knew of military matters from books. Formal education was spotty in the colonies; even Washington, as a wealthy Virginia planter, never went to college and was mainly what we would call today “home schooled.” Yet early Americans didn’t let a lack of schooling bar them from the quest for knowledge. Ben Franklin had nearly no formal education at all, but by the closing decades of his public life was arguably the world’s most respected intellectual. Knox was educated at Boston Latin, but unschooled in war; his military experience was five years in an artillery company of the Massachusetts militia. Greene had neither schooling nor military experience, but read whatever he could get his hands on. At the outset of the war, they were young small businessmen: Knox a 25-year-old bookseller from Boston, Greene a 32-year-old Quaker from Rhode Island who ran the family forge. Both prepared for combat by reading books on military strategy and tactics; had there been a “War for Dummies” in 1775, they would have read it without embarrassment. (Washington, too, ordered a number of military volumes when heading off to Philadelphia in 1775; as Victor Davis Hanson has noted, one of the distinctive features of Western civilization is a long written tradition of military science, allowing the widespread dissemination of the latest ideas about warmaking). Yet, self-educated though they were, they knew what they were missing: Knox spent years agitating for the establishment of an American military academy to teach the art of war, which would eventually be founded at West Point under the Jefferson Administration.

Knox pulled off perhaps the most remarkable and dramatic feat of the war in the winter of 1775-76, when a team led by him and his brother made the long, snow-covered trek from Boston to Fort Ticonderoga in upstate New York, loaded up all its heavy artillery, returned with every single artillery piece intact, and then in one night in March set up the guns on Dorchester Heights, the peninsula overlooking Boston from the south. The British were staggered, and forced to evacuate. The full tale, as told by David McCullough in 1776, is as amazing as anything in American history, and I can’t hope to do it justice here. Knox would go on to prove himself time and again as the chief artillery expert in the Continental Army from Boston to Trenton (where his guns commanded the center of the town) all the way through Yorktown (where the shelling of Cornwallis’ encampment brought him to his knees), and would be present for all of Washington’s major engagements. Knox’s amateurism led him astray on occasion; a few of the guns under his command exploded on their handlers in Boston and again later in New York, and he is generally credited with the misguided decision to send waves of troops against a barricaded position at Germantown on the basis of an inflexible application of the maxim (which he’d probably picked up from a book) about not leaving a fortified position to your rear. But his overall record was one of practicality, resourcefulness and unwavering dedication to the cause.

As for Greene, he too can be found at all Washington’s major battles of the first half of the war, as Washington’s operational right-hand man; the Quartermaster General of the Army after Valley Forge; the commander (with Lafayette) of the first joint operation with the French, a failed effort to break the occupation of Newport, Rhode Island; and finally Washington’s choice to assume command of the southern theater of the war after the serial failures of Robert Howe at Savannah, Benjamin Lincoln at Charleston and Horatio Gates at Camden. The fiasco at Camden ended the military career of Gates, the victor of Saratoga, and left the Continental Army in the South in shambles, but it would prove Greene’s finest hour. Greene had little time to rebuild the shattered army; he rarely commanded more than a few thousand men, and often had to rely on the aid of the local militia. And yet, with a major assist from those militia, he staged a brilliant series of retreats and maneuvers to keep Cornwallis from taking control over the region or from capturing and crushing his army. It was Greene who said of this campaign, “We fight, get beaten, rise, and fight again.” After the costly March 1781 Battle of Guilford Court House, Cornwallis came to the decision that he needed to stop chasing Greene around the Carolinas and head north to Virginia, setting in motion the fateful chain of events that led to Yorktown.

Unfortunately, and characteristically of life in the 18th century, many of the leading figures of the Continental Army and Revolutionary militia did not live long after the war’s end. Charles Lee died in 1782, Lord Stirling in 1783, Greene (of sunstroke, at age 43) in 1786, Ethan Allen in 1789, Israel Putnam in 1790, John Paul Jones in 1792, John Sullivan and Francis Marion in 1795, Anthony Wayne in 1796, and Washington himself in 1799. While numerous places in the United States today bear their names (here in New York, Greene as well as Putnam, Sullivan and militia leader Nicholas Herkimer are the namesakes of counties), their popular memories today are less vivid than those of Revolutionary War figures like Alexander Hamilton who had more prominent political roles. But nobody aside from Washington himself contributed more to victory than Greene and Knox.

The European Adventurers: The American cause was, of course, aided as well by a handful of Continental European volunteers – Marquis de Lafayette, Baron von Steuben, Casimir Pulaski, Tadeusz Kosciuszko, Baron de Kalb (this is aside from some of the American leaders like John Paul Jones and Charles Lee who were native to the British Isles). Two of those, Pulaski and de Kalb, were killed in the early, unsuccessful battles of the southern campaign, Pulaski at Savannah and de Kalb at Camden. Both Lafayette and Kosciuszko would return to try – with mixed success – to lead their own homelands to a republican future; Jones would serve in Catherine the Great’s navy after the war, terrorizing the Turkish navy and becoming an honorary Cossack in the process. Only von Steuben would enjoy a quiet life in his adopted country.

Lafayette’s exploits, especially during the Yorktown campaign, were significant and memorable, and in a general sense he contributed to the cementing of the alliance with France. And Pulaski played a key role in organizing the American cavalry. But von Steuben was likely the most important to the Continental Army’s victory.

In contrast to the self-educated citizen soldiers running the American army, von Steuben came from the opposite end of the 18th century military spectrum: born a sort of aristocrat and a Prussian army brat, he had served as a staff officer on the professional Prussian general staff, the first of its kind in the world, and been instructed by Frederick the Great himself. Unlike some of the other Europeans – but like Jones, who fled to America because he was wanted for the murder of a sailor he had flogged to death – von Steuben was no starry-eyed idealist. He was an unemployed professional soldier, deeply in debt, who came to the American cause only after running out of prospective employers in Germany, and was trailed by an unverified rumor that he had fled prosecution after being "accused of having taken familiarities with young boys." He was passed off to Congress, possibly with the knowing complicity of Ben Franklin (who recognized his value), as one of Frederick the Great's generals rather than a captain on the general staff, and even his aristocratic title had been inflated and possibly invented. He spoke little or no English and often asked his translator to curse at the soldiers on his behalf.

But whatever his background, von Steuben's discipline and professional rigor were crucial. He established badly-needed standards for sanitary conditions in the army, introduced training in the use of the bayonet, and taught the men the sort of maneuvers that were essential to 18th century warfare. He is, on the whole, credited with the improved drill and discipline that emerged from Valley Forge and was displayed in the 1778 Battle of Monmouth. Monmouth, in combination with the French entry into the war, induced the British to mostly abandon the strategy of trying to hunt down Washington's army and focus instead on offensive operations in the South. Von Steuben's field manual was still the U.S. Army standard until the War of 1812. If Greene and Knox are emblems of traditional American virtues, the Continental Army's debt to von Steuben and the other Europeans is emblematic of America's adaptability and openness to the contributions of new arrivals.

The British Command: While there were many important figures on both sides of the war – I’ve only scratched the surface here on the American side – essentially all the important decisions on the British side were made by six generals: Gage, Howe, Clinton, Cornwallis, Burgoyne, and (in Quebec in 1775-76) Guy Carleton (Carleton also briefly commanded the British evacuation of New York in 1783 at the war’s end). Where they went wrong provides an instructive contrast with Washington’s command.

All six were professional military men, veterans of the Seven Years'/French and Indian War: Clinton, Cornwallis and Burgoyne had fought only in Europe, while Howe and Carleton had fought in the Quebec campaign that culminated in Wolfe's capture of the fortified city, and Gage had been a part of the Braddock expedition and thus seen Washington up close in action. And by and large, with the arguable exception of Gage, they fought with tactical skill and professionalism against the Americans. Yet they have gone down in history as architects of a great failure, weak in comparison to predecessors like Wolfe and dwarfed by the likes of Wellington who succeeded them. Aside from Carleton, only Cornwallis really survived the war with his domestic reputation and career intact, going on to years of highly influential service as a colonial administrator in Ireland and India that shaped the Empire in important ways. Besides Cornwallis, Howe was the only one of the six to command troops in combat again, for a time during the early Napoleonic Wars.

The British failure was partly a matter of the personalities involved, but also one of basic strategic incoherence. They never really had a fully thought-out strategy. Only Clinton and Cornwallis seemed to understand the paramount importance of putting Washington's army out of business early in the war, and their aggressive plans of flanking attacks and hot pursuits were frequently overridden by Gage and Howe, who were less apt than Washington to heed the good advice of their subordinates. Washington learned from his mistakes; Howe, in particular, did not, on multiple occasions settling down to stationary positions when he should have been finishing off Washington.

The British could have adopted a scorched-earth approach like Sherman in the Civil War; General James Grant urged the burning of major cities in the North, and in the southern campaign Banastre Tarleton’s forces (including Loyalist partisans) did what they could to spread terror in the countryside, including some notorious examples of bayoneting wounded or surrendering Americans. Cornwallis near the end of the war in Virginia would set thousands of slaves free as a foreshadowing of Lincoln’s Emancipation Proclamation, albeit solely for tactical purposes. But, as regular forces facing guerrilla insurgencies often do, they took a halfway path that was the worst of both worlds: heavy-handedness and occasional atrocities were crucial to raising the militia against Burgoyne in New York and Cornwallis and Tarleton in the Carolinas, yet they failed to pursue a sufficiently merciless approach to annihilate the Continental Army or destroy its economic base of support.

Like the Americans, the British were riven by petty jealousies and contending egos; unlike the Americans, they never had a Washington to keep those divisions from impeding operations, and unlike the Americans, their civilian government was too far away to provide supervision. Burgoyne's appointment to lead the Saratoga expedition alienated both Carleton, who resigned in protest, and Clinton. Clinton was usually right about tactics (notably his preference for outflanking the militia from the rear at Bunker Hill and for encircling Washington in New York), but his inability to work well with others probably contributed to his advice being ignored. Though not entirely through fault of his own, it was Clinton's failure to arrive with timely reinforcements that led to the surrenders of Burgoyne at Saratoga and Cornwallis at Yorktown.

The human element of good generalship can be fortuitous, but it is also a product of the civilian and military cultures that produce armies. In the long run, the Americans had a clearer strategy, greater unity of purpose and command and more adaptable leadership, and that made the difference.

In Part III: the role of the militia.

Reflections on the American Revolution (Part I of III)

I’ve recently been reading a fair amount on the American Revolution, especially David McCullough’s 1776 (which should be required reading for every American).* The more you read of the Revolutionary War, the more there is to learn, especially about the vital question of how the colonists pulled off their victory over the vastly wealthier and more powerful Great Britain. The standard narrative of the American Revolution taught in schools and retained in our popular imagination today overlooks a lot of lessons worth remembering about where our country came from.

The Population Bomb: In assessing the combatants and indeed the causes of the war, it's useful – as always – to start with demographics. There was no census of the colonies as a whole, but this 1975 historical study by the US Census Bureau, drawing on the censuses of individual colonies and other sources, breaks out the growth of the colonial population from 1630 to 1780, and the picture it paints is one of explosive population growth in the period from 1740 to 1780:


The black population was principally slaves and thus – while economically and historically important – less relevant to the political and military strength of the colonies. But as you can see above, the main driver of population growth was the free white population rather than the slave trade.

Authoritative sources for the British population during this period are harder to come by (the first British census came more than a decade after the first U.S. Census in 1790); most sources seem to estimate the population of England proper at between 6 and 6.5 million in 1776, compared to 2.5 million for the colonies. Going off this website's rough estimates for the combined population of England and Wales (Scotland had in the neighborhood of another 1.5 million people by 1776), the colonies went from 5% of the British population in 1700 to 20% in 1750, 26% in 1760, 33% in 1770, and 40% in 1780:
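Those ratios are simple arithmetic to check. Here's a minimal sketch using the rough estimates above; the figure for Wales is my own round assumption, so treat the output as approximate:

```python
def colonial_share(colonies_millions, england_wales_millions):
    """Colonial population as a percentage of England and Wales combined."""
    return 100 * colonies_millions / england_wales_millions

# Rough 1776 estimates from above: ~2.5M colonists, ~6.5M in England,
# plus an assumed ~0.5M in Wales.
print(round(colonial_share(2.5, 7.0)))  # -> 36, between the 33% (1770) and 40% (1780) marks
```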

It was perhaps inevitable that this shift in the balance of population between the colonies and the mother country would produce friction, and of course such a fast-growing population means lots of young men ready to bear arms. Men like Franklin and Washington were already, by 1755, envisioning the colonies stretching across the continent for the further glory of the then-nascent British Empire; 20 years later, both were buying Western land hand over fist and picturing that continental vision as a thing unto itself.

The distribution of population among the individual colonies was somewhat different from today. Virginia (encompassing present-day West Virginia) was by far the largest colony and, along with the Carolinas, the fastest-growing, while Massachusetts, Maryland and Connecticut were much larger – and New York much smaller – relative to the rest of the colonies than today:

This is one reason why Maryland gained a reputation as the “Old Line State”: it had the manpower to supply a lot of the Continental Army’s best troops. Connecticut was, in fact, seen as a crucial economic engine of the war, the most industrialized of the colonies at the time and mostly undisturbed by combat. That said, when you look solely at the white population, the southern states loom less large, and the crucial role of Pennsylvania and Massachusetts comes into focus:

The smaller colonies present a similar picture:


Note that Rhode Island, alone, lost population during the war, due to the 1778-1780 British occupation of Newport. That occupation had lasting effects. According to a 1774 census, Newport’s population before the war was more than twice that of Providence (more than 9,000 to less than 4,000) and it was a booming seaport; the city’s population dropped by more than half to 4,000, and it never really recovered its status as a port, losing business permanently to New York and Boston. Another lasting side effect: Rhode Island, founded by Roger Williams as a haven of religious tolerance and welcoming even to Jews and Quakers, forbade Catholics from living in the colony, but after the British abandoned Newport in 1780 and the French garrison took up residence, the grateful Rhode Islanders permitted the French troops to celebrate the first Mass in Rhode Island; today, it is the most heavily Catholic state in the union.

Britain’s population would surge in the 1790s, and by about 1800 there were a million people in London alone, the first city in world history confirmed to exceed that threshold. But that remained in the future; at the time, France’s population of 25 million and Spain’s of some 10 million would easily exceed that of George III’s domain. Moreover, like its colonies, England had a longstanding aversion to standing armies; while the Napoleonic Wars would ultimately compel the British Army (including foreign and colonial troops) to swell to a quarter of a million men by 1813, a 1925 analysis found that “[a]t the outbreak of the Revolution, the total land forces of Great Britain exclusive of militia numbered on paper 48,647 men, of which 39,294 were infantry; 6,869 cavalry; and 2,484 artillery,” with 8,580 men in America. And those forces were always stretched; according to this analysis of Colonial & War Office figures, the British never had much more than 15,202 redcoats in the American theater (including the Floridas, where they fought Spain), and never exceeded 30,000 troops in total, counting “Hessians” (companies of professional soldiers hired from the Hesse-Hanau, Hesse-Kassel, Brunswick and other German principalities) and American Loyalists (a/k/a “Tories”):

The Close Call: More modern American wars like the Civil War and World War II eventually developed a momentum that made victory effectively inevitable, as America's crushing material advantages came to bear on the enemy. By contrast, the Revolutionary War was, from beginning to end, a near-run thing (to borrow Wellington's famous description of Waterloo). At every stage and in every campaign of the war, you can find both British and American victories, as well as a good many battles that were fought to a draw or were Pyrrhic victories for one side. The length of the 7-year war in North America was a burden for the increasingly war-weary British, for a variety of reasons, but a long war was also a great risk for the Americans: the longer the war ran on, the harder it was in terms of both finances and morale to keep the all-volunteer Continental Army in the field. Whole units dissolved en masse at the end of their enlistments throughout the war, and there were mutinies in the spring of 1780 and again in January 1781. As late as 1780, Benedict Arnold's treason and debacles at Charleston and Camden, South Carolina put the American cause in jeopardy of being rolled up by the British and risked driving America's European allies to strike a separate peace. At one point or another in the war, the then-principal cities of most of the colonies – Massachusetts (Boston), Pennsylvania (Philadelphia), New York (New York), Virginia (Richmond and Charlottesville), Rhode Island (Newport), South Carolina (Charleston), Georgia (Savannah), Delaware (Wilmington) and New Jersey (Trenton, Princeton, Perth Amboy, New Brunswick) – were captured and occupied by the British. Only Connecticut, Maryland, North Carolina and New Hampshire remained unconquered, as well as the independent Vermont Republic (Maine, then governed by Massachusetts, was also under British control for much of the war; the failed Penobscot Expedition was aimed at its recapture, and ended with a disastrous naval defeat). In the spring of 1781, Thomas Jefferson – then the Governor of Virginia – escaped capture by Cornwallis' men by a matter of minutes, fleeing on horseback as the government of the largest colony was dispersed. It was only the complex series of events leading to Yorktown in the fall of 1781 – Cornwallis retreating to Virginia after being unable to put away Nathanael Greene's Continentals and the North Carolina militia, Washington escaping New Jersey before the British noticed where he was going, Admiral de Grasse bottling up Cornwallis' escape route in the Chesapeake by sea, Henry Clinton failing to come to Cornwallis' aid in time – that created the conditions for a decisive victory and finally forced the British to throw in the towel.

Moreover, a great many individual battles and campaigns throughout the war turned on fortuitous events ranging from fateful decisions to apparently providential weather. It is no wonder that many of the Founding generation (like many observers since) attributed their victory to the hand of God.

Weather and Suffering: Both the Continental Army and its British and Hessian adversaries endured conditions that no armies before or since would put up with, including a staggering menu of extreme weather ranging from blizzards to colossal thunderstorms to blazing summer heat. Ancient and medieval armies would not campaign in freezing cold and snow; modern armies (like the combatants at Leningrad and the Marines in the retreat from Chosin Reservoir) would at least face them with something closer to proper clothing and shelter. But both sides in the war suffered chronic shortages: the British from lack of food for their men and forage for their animals, the Americans from lack of clothing (especially shoes), shelter and ammunition. The British lost more sailors to scurvy in the war than soldiers to combat, and during the long siege of Boston they had recurring problems with their sentries freezing to death at night. Smallpox, malaria and other diseases were endemic and especially hard on European troops with no prior exposure (one of Washington's great strokes of good judgment was having his army inoculated against smallpox, a disease he himself had survived and which left him pock-marked and probably sterile**). The British were rarely able to make use of their cavalry due to a lack of forage, and their infantry had other equipment problems:

[T]he flints used by the British soldier during the war were notoriously poor. Colonel Lindsay of the 46th lamented that the valor of his men was so often “rendered vain by the badness of the pebble stone.” He exclaimed indignantly against the authorities for failing to supply every musket with the black flint which every country gentleman in England carried in his fowling piece. In this respect the rebels were acknowledged to be far better off than the king’s troops. A good American flint could be used to fire sixty rounds without resharpening, which was just ten times the amount of service that could be expected from those used by the British forces. Among the rank and file of the redcoats, the saying ran that a “Yankee flint was as good as a glass of grog.”

The war was conducted during the Little Ice Age, a period of low global temperatures (it’s a myth that “climate change” is a new phenomenon or must be caused by human activity), and the winters of the period (especially 1779-80) were especially brutal. American soldiers and militia forded waist-deep icy rivers to reach the Battle of Millstone, marched miles without boots in snowstorms on Christmas Night after crossing the icy Delaware to reach the Battle of Trenton, and even tried (insanely) to lay siege to the fortified Quebec City in a driving snow on New Year’s Eve. These were only a few of the examples of Americans marching great distances in weather conditions that would defeat the hardiest souls. The British performed their own acts of endurance and valor; drive over the George Washington Bridge some time and look at the cliffs of the Palisades, and picture Cornwallis’ men scaling them at night to attack Fort Lee. Other battles were fought in heavy wool uniforms in the broiling heat, from Bunker Hill to much of the southern campaign, or in rains that left gunpowder useless, or – on the eve of the Battle of Brooklyn – colossal lightning strikes that killed groups of American soldiers in Manhattan. In the 1776 siege of Sullivan’s Island, the British were shocked to discover that their cannonballs wouldn’t splinter the soft palmetto wood from which the American fort was constructed, leaving the British ships to take a pounding from American artillery.

Except for Quebec, the weather – however hostile – nearly always managed to favor the American cause, rescuing the Americans when the hand of fate was needed most. McCullough recounts the especially significant shifts in the wind and fog that allowed Washington’s army to escape in the night, undetected, across the East River after the catastrophic Battle of Brooklyn, while the blizzard at the Americans’ backs was key to their surprise at Trenton.

The Allies: Most educated Americans still recall that France came to the aid of the fledgling nation after the victory at Saratoga, and played a significant role in tipping the scales in the war. In World War I, Pershing's refrain of "Lafayette, we are here" was still a popular invocation of that collective memory. Besides French money and supplies and French land and naval combat at Yorktown, the French also stretched the British defenses with extensive campaigns in the Caribbean and with a threatened invasion of England. But as important as the French alliance was, the emphasis on France understates the role that others of America's allies and Britain's enemies played in the Revolution.

First and foremost, at least as history is taught here in the Northeastern U.S., the Spanish role in the Revolutionary War is scandalously underplayed. There are reasons for this: Spain was a less impressive international power in the late 18th Century than France and would become drastically less so by the end of the Napoleonic Wars in 1815, and unlike the French, the Spanish rarely fought shoulder-to-shoulder with Americans or within the Thirteen Colonies. But Spain performed three vital roles in the war. First, under Bernardo de Galvez (namesake of Galveston, Texas, among other places), the Spanish Governor of the Louisiana Territory, the Spanish shipped significant war materiel up the Mississippi River through the American agent Oliver Pollock, supplementing the French aid that kept the American cause afloat. Second, after Spain's 1779 declaration of war against Britain, Galvez opened a significant second front against the British-held Floridas (which then included, in the territory of West Florida, much of what is now the Gulf Coast of Georgia, Alabama and Mississippi). Galvez was arguably the most successful commander of the war in North America, his multi-national, multi-racial force sweeping through the British defenses, preempting any British move on New Orleans and culminating in the capture of Pensacola (then the capital of West Florida) in the spring of 1781. This campaign resulted in the Floridas being transferred from Britain to Spain in the resulting peace treaty; the absence of a British foothold on the southern border of the U.S. would have lasting consequences, and the Floridas would end up being sold by Spain to the United States in 1819. And third, the Spanish played a pivotal role in the Yorktown campaign, not only raising more funds in Cuba for the campaign but also providing naval cover in the Caribbean that allowed Admiral de Grasse to sail north and close off the Chesapeake just in the nick of time. (Spain also conducted a long, costly siege of Gibraltar that ended unsuccessfully and a successful assault on Minorca, both of which spread British manpower thin between 1778 and 1783).

The other main fighting allies of the American colonists were two of the Iroquois Six Nations in upstate New York, the Oneida and Tuscarora (the other four fought with the British), as well as a few other tribes on the western frontier. But other sovereigns caused the British additional problems. The Kingdom of Mysore, a French ally in Southern India, went to war with Britain (the Second Anglo-Mysore War) in 1780, inflicting thousands of casualties with innovative rocket artillery at the September 1780 Battle of Pollilur. The Dutch, who frustrated John Adams' efforts to arrange financial assistance and an alliance until after Yorktown, nonetheless ended up dragged into the Fourth Anglo-Dutch War beginning in December 1780. (Some things never change: Adams was accused of unilateral "militia diplomacy" for ignoring diplomatic protocols and negotiating with the Dutch without consulting the French, but crowed after inking the deal in 1782 that "I have long since learned that a man may give offense and yet succeed."). The Russians, then moving towards an alliance with Great Britain against the French, nonetheless pointedly refused to get involved; Catherine the Great refused a 1775 request in writing from George III that she send 20,000 Cossacks to America (necessitating the hiring of Hessians instead) and eventually joined the League of Armed Neutrality with the Dutch and others to resist British naval embargoes (the step that brought the British and Dutch to blows). Catherine II thought the British were fools for provoking the conflict and predicted from the outset that the Americans would win. All in all, the international situation by the end of 1780 left the British increasingly isolated and drove the strategic imperative to seek out a decisive battle in Virginia – an imperative that led Cornwallis directly into a trap of his own devising but which the American, French and Spanish forces sprung with great skill and coordination.

In Part II: Washington and the other American and British generals. In Part III: the role of the militia.

Continue reading Reflections on the American Revolution (Part I of III)

Mitt Romney, Friend in Need

The Obama campaign has spent months laboring to get this election to be about anything but the president’s record and the candidates’ policy proposals. As often happens in campaigns, this requires painting caricatures with no connection to the facts. The Obama camp has worked hard to make Mitt Romney out as a bad, unfeeling, cold-hearted rich guy who only cares about his own bottom line. Romney himself hasn’t helped the matter by being such a stiff, tin-eared speaker who actually looks and sounds like a walking stereotype; political communication is not among his skills. But the reality is that Romney’s biography shows him to be a real-life Good Samaritan who has walked the walk of caring for his fellow man not only with his own money but with his own time and his own hands. I’ve had my share of political complaints about Romney, but on this score, the critics should be ashamed of themselves: Romney is a genuine role model of what private citizens can do to assist those in need.

Continue reading Mitt Romney, Friend in Need

The Southern Strategy Myth and the Lost Majority

I recently finished reading Sean Trende's excellent book The Lost Majority, which is a must-read for anyone attempting to intelligently discuss its subject: how winning political coalitions are built, maintained and undone in the modern American two-party system. Trende covers a range of topics. At the level of political science theory, he dismantles the theory of periodic realigning elections. In his historical analysis, he may surprise you by arguing that the most enduring coalition of the past century was assembled not by McKinley, FDR, or Reagan but by Dwight Eisenhower. Looking to the recent past and future, he convincingly demonstrates that Obama's 2008 coalition was always more fragile than Democrats at the time believed, and that there remain obstacles to the John Judis/Ruy Teixeira theory of an Emerging Democratic Majority. Trende's major point is that all such predictions of enduring partisan majorities (he cites many dating back over the past century and a half) ignore the fact that political coalitions inevitably draw together factions with different interests and ideologies, and the frictions within those coalitions offer opportunities for the other party to regain support.

But one of the historical narratives that Trende covers in depth is of particular interest because it remains a crucial part of partisan mythology today: the enduring myth of the Southern Strategy. On the occasion of Mitt Romney’s address to the NAACP, it is worth revisiting that myth today.

Continue reading The Southern Strategy Myth and the Lost Majority

The Growth Deficit and Spending Fairy Tales

The United States faces a number of economic and fiscal challenges in the short and long terms. But the single biggest is the Growth Deficit: the problem of government spending and government debt growing faster than the private sector. That deficit needs to be reversed; we are on an unsustainable path unless we start producing a Growth Surplus. And Republicans and conservatives need to put more effort into emphasizing the importance of the Growth Deficit to the public.

The Obama Administration seems to recognize that this is a political vulnerability, as it has lately been spinning the notion that the last few years have not actually grown federal spending. Below the fold, I’ve collected a number of charts that illustrate why this is nonsense. But first, a word on how we should be measuring our solvency.

Continue reading The Growth Deficit and Spending Fairy Tales

The Perils of Complexity

As a practicing lawyer, I naturally have a professional interest in vague and/or complex legal rules that require lots of expensive legal research, training and experience to understand and explain. But complexity isn't just costly to consumers of legal services, and thus a burden on business and on citizens' access to the courts. It's also a drag on the economy and on personal liberty, as businesses and ordinary citizens must choose between paying lots of compliance lawyers and steering too wide of increasingly large gray areas. It risks in particular the unfair, arbitrary and sometimes corrupt or discriminatory abuse of the criminal justice system to prosecute things that were hard to foresee as violations of the law. And it demeans democracy, as the actual making of law is done by judges and bureaucrats rather than citizen-elected legislators.
One of the greatest virtues of Justice Scalia in his quarter-century on the Supreme Court (he celebrates 25 years on the High Court in September) has been his structural critique of, and systemic assault on, unnecessary legal complexity. In three opinions this morning, he focused attention on three different aspects of that same problem – one of which was graphically illustrated by yesterday’s news regarding the widespread practice of waivers under Obamacare. And last week’s news regarding the indictment of John Edwards illustrates how the failure to heed Scalia’s wise observations has made a hash of efforts by campaign finance “reformers” to regulate political speech in the United States.

Continue reading The Perils of Complexity

A History of Team Defense (Part I of II)

Part II here.
Who are the best defensive teams of all time? Individual defensive statistics in baseball – as in other team sports – have been crudely kept and poorly understood for years, with the more sophisticated modern methods only being gathered for the past decade or two. As a result, even statistically-oriented baseball fans have tended to answer questions about defense as much by reputation and anecdote as anything. The lack of a statistical framework tends to make defense a bit invisible in our memories; even most knowledgeable fans have no more concrete sense of, say, Ty Cobb as a defensive player than they do of Turkey Stearnes as a hitter. My goal in this essay is to remedy that a little bit at the team level.
We do have one measurement of team defense that endures over time and thus can be used as a baseline for measuring team defense: Defensive Efficiency Rating (DER). I’d like to walk you through the history of the best and worst teams in each league, and the league average, in DER from the dawn of organized league ball in 1871 down to this season. As usual, I’ll try to explain here what I’m measuring in terms that make sense to readers who may not be all that familiar with the ‘sabermetric’ literature, although I make no claim to be current myself on every study out there, and welcome comments pointing to additional studies.
What is DER?
DER is, put simply, the percentage of balls in play against a team that are turned into outs. The exact formulas used to compute DER can vary a bit, and while Baseball-Reference.com – which I used for this study – computes DERs all the way back to the start of organized baseball in 1871, its description of the formula is a bit vague:

Percentage of balls in play converted into outs
This is an estimate based on team defensive and pitching stats.
We utilize two estimates of plays made.
One using innings pitched, strikeouts, double plays and outfield assists.
And the other with batters faced, strikeouts, hits allowed, walks allowed, hbp, and .71*errors committed (avg percent of errors that result in an ROE)
Total plays available are plays made + hits allowed – home runs + error committed estimate.

All methods for computing DER look at the percentage of balls in play that become hits; it appears that Baseball-Reference.com’s formula also counts the outs that result from double plays or outfield assists, both clear examples of outs created by good defense, as well as counting against the defense the one thing that fielding percentages always recorded – errors – but only where they put a man on base. From what I can tell, essentially the same formula is used over all of the site’s historical DER data, so the data is generally consistent over time.
It’s worth recalling that DER only measures outs vs. men reaching base – it doesn’t deal with extra bases on doubles and triples, or stolen bases and caught stealing, or other baserunning issues. So, it’s only one part of the picture just as on base percentage is just one part of the offensive picture. But like OBP, it’s the single most important part.
What Goes Into Team DER?
One of Bill James’ maxims throughout the 1980s was that “much of what we perceive to be pitching is in fact defense.” As most of my readers will recall, Voros McCracken broke major ground in the field of baseball analysis of pitching and defense in 2001 with a study showing that Major League pitchers, over time, had no effect – or at least, there was no difference among Major League pitchers in the effect they had – on whether balls in play become outs. Strikeouts, walks and home runs (the so-called “Three True Outcomes”) are the pitcher vs. the hitter, mano a mano, but on average, BABIP (batting average on balls in play, the flip side of DER) shows no tendency to be consistent year to year among individual pitchers; other statistical indicators also strongly suggest that a pitcher’s BABIP tends to be mostly a combination of team defense and luck. The simple way of expressing McCracken’s insight is that it’s the defense rather than the pitcher that determines how many balls in play become outs.
As with most groundbreaking insights, further research has added some caveats to McCracken’s theory. The first one, which he observed from the beginning, was that knuckleballers tend as a group to have lower than average BABIP, and thus are something of an exception to the rule. I haven’t absorbed all the further studies, but there are reasons to suspect that other classes of pitchers may have a modest advantage in the battle against BABIP, including elite relievers (Troy Percival, Armando Benitez, Mariano Rivera, Trevor Hoffman and Keith Foulke all seemed to have much lower career BABIP than their circumstances would suggest) and possibly pitchers who throw a huge number of breaking balls (we’ll discuss Andy Messersmith a bit below).
Also, McCracken’s research, and most of the following research, looked at the conditions of modern baseball (at the time, Retrosheet and Baseball Prospectus’ database only went back to the mid-1950s). It’s entirely possible that pitchers had greater influence on BABIP/DER in the era before 1920, or further back, when there were pitchers who had consistent success even in the era when most plate appearances resulted in a ball in play and thus the pitcher had little opportunity to set himself apart from his peers by success in the Three True Outcomes. As I explained in this 2001 essay, the playing conditions were greatly different in 19th century baseball in particular, and I’d be hesitant without data on that era to just assume that the pitcher’s effect on balls in play was as minimal then as it is now.
Finally, of course, as with other statistical measures, there are park effects. We all know that different parks are more or less favorable for hitters; park effects on home runs are a significant component of that, and parks can affect walks and strikeouts as well. (Less so for baserunning, in most cases.) Balls in play are no exception, and I don't have data handy on how park effects specifically affect balls in play over time, besides the ability to notice some trends (for example, the Polo Grounds for many years was a great home run park but not a great hitters' park; I assume DER there tended to be high) and a few specific examples where I dug into the numbers we have. So bear in mind that the numbers set out below are not park-adjusted.
Key to the Charts
BIP%: Percentage of plate appearances resulting in a ball in play (i.e., plate appearances minus homers, walks and strikeouts). Since I used league batting rather than pitching data for this, there may be a slight discrepancy for the period since the start of interleague play in 1997.
NL/AL etc.: Under the league name I have the league’s DER for that season.
High/Low: The team with the league’s highest and lowest DERs. I used Baseball-Reference.com’s team abbreviations.
DER: That team’s DER
High%/Low%: Team DER divided by the league average. This is the key number I use to identify the best and worst defensive teams, so we can see who were the best and worst defensive teams relative to the league average. As usual, I’m not using any math here more complicated than simple arithmetic and basic algebra.
Also, where I compute “rough” estimates of BABIP for pre-1950 pitchers I used the basic formula of (H-HR)/((IP*3)+H-HR-K)
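In code, the two bits of arithmetic just described look like this – a minimal sketch assuming nothing beyond the formulas above (the function names are mine):

```python
def rough_babip(h, hr, ip, k):
    """Rough BABIP for early-era pitchers: (H-HR)/((IP*3)+H-HR-K)."""
    return (h - hr) / (ip * 3 + h - hr - k)

def relative_der(team_der, league_der):
    """Team DER as a percentage of the league average (the High%/Low% columns)."""
    return 100 * team_der / league_der

# From the first table below: the 1876 St. Louis club (698) against the NL average (626).
print(round(relative_der(698, 626), 2))  # -> 111.5, the only 10%+ season ever
```

(The DERs in the tables are shown in thousandths, so 698 means .698; the ratio is the same either way.)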
The 1870s
Talent levels in the 1870s were especially uneven, as the first organized league – the National Association – began play in 1871 just two years after the debut of the first-ever professional team. Schedules were short (20 games in 1871, in the 60s by decade’s end), fielders didn’t wear gloves, playing surfaces were ungroomed and in some cases effectively without fences, and with nine balls for a walk and longballs unheard of, nearly every plate appearance resulted in a ball in play – the 1872 season’s 96.5% rate is the highest in the game’s history, and 1879 was the last season above 90%.
As you can see, defenses improved dramatically over this period, in part no doubt as professional pitchers and fielders learned their craft and more of the nation’s best ballplayers gathered into the National Association and later the NL. But errors were a big chunk of the poor defense of the era – in each of the NL’s first five seasons, there were more unearned runs than earned runs scored, and it wasn’t until 1906 that the average number of unearned runs would drop below 1 per game.
The most successful defensive team of the era was the 1876 St. Louis “Brown Stockings” team (not precisely the same organization as the Cardinals), the only Major League team ever to be 10% better than its league in DER. Starting pitcher George “Grin” Bradley struck out 1.6 men per 9 innings but led the league with a 1.23 ERA (the team also allowed the league’s fewest runs, although their 2.36 unearned runs per 9 innings was only third-best in the league) while throwing all but four of the team’s innings. A rough estimate of the BABIP against Bradley is .258 in 1875, .224 in 1876, but .285 after he changed teams the next year, when his ERA nearly tripled, and .267 for his career. Which at least seems consistent with the notion that Bradley’s defense was doing most of the work.
Note that the Philadelphia Athletics of 1873-74, featuring Cap Anson and Ezra Sutton in their infield, made the only repeat appearance on the decade’s leaderboard (Anson, in his early 20s, played multiple positions including short and third, while Sutton was beginning a long career as a third baseman and shortstop).
The worst defensive team of all time? I hate to give you such an underwhelming answer, but by a wide margin it's the 1873 Baltimore Marylands, who folded after just 6 winless games and almost none of whose players appeared in the big leagues again. The hapless Marylands allowed 144 runs in 6 games (24 per game), only 48 of which were earned; in addition to hideous defense, their pitchers didn't strike out a single batter. (The offense was no better, as a team batting average of .156 with only one extra base hit and no walks attests). When you think of the level of competition in those early years, think of the Marylands.
National Association-National League

Year BIP% NA High DER High% Low DER Low%
1871 94.5% 586 NYU 608 103.75% TRO 548 93.52%
1872 96.5% 589 BOS 647 109.85% OLY 510 86.59%
1873 96.2% 578 ATH 613 106.06% MAR 458 79.24%
1874 96.7% 589 ATH 629 106.79% BAL 552 93.72%
1875 96.4% 619 HAR 663 107.11% WAS 538 86.91%
Year BIP% NL High DER High% Low DER Low%
1876 95.3% 626 STL 698 111.50% CIN 569 90.89%
1877 92.2% 623 HAR 642 103.05% CIN 561 90.05%
1878 89.5% 628 CIN 638 101.59% MLG 615 97.93%
1879 90.2% 632 BUF 659 104.27% TRO 599 94.78%

The 1880s
The game gradually professionalized in the 1880s, but not without a great many bumps along the way. The Union Association of 1884 was only barely a major league (four teams, including Wilmington, folded after playing less than a quarter of the schedule), but diluted the talent level of the two major leagues. The four-ball/three-strike count wasn't standardized until 1889, after a gradual decline in the number of balls for a walk and a one-year experiment in 1887 with four strikes for a strikeout; DERs rose sharply after the three-strike rule was restored. The schedule topped 100 games for the first time in 1884, and had reached 135 by 1888. The color line was established in the wake of the failure of Reconstruction (which effectively ended in 1877), after only a few black players had taken the field. The first gloves were coming into common use by decade's end.
Anson’s 1882 White Stockings (now Cubs) and the 1882 Red Stockings (now Reds) became the first pennant-winning teams to lead the league in DER since the founding of the National League (in the NA, only the 1872 Boston team had done so); four teams would do so in each of the two leagues in ten years, plus the Union Association champs. Bid McPhee, enshrined in the Hall of Fame in 2000 largely for his defense, anchored the Red Stockings teams that led the league three times in their first six seasons in the league, and their 1882 and 1883 DERs were the most dominant of the decade outside the UA, but the mid-decade St. Louis Browns (now Cardinals) juggernaut also emerged as a defensive powerhouse. The woebegotten 1883 Philadelphia Quakers were the decade’s worst defensive team. The NL’s most successful defensive squad? The 1884 Providence Grays, much to the benefit of Old Hoss Radbourn, who had his famous 59-12, 1.38 ERA season. Radbourn also struck out 441 batters in 678.1 innings, so he did his share as well, and by a rough calculation the opposing BABIP of .242 – while a career best – wasn’t hugely out of line with his career .271 mark. Lucky and good is a good combination.
National League

Year BIP% NL High DER High% Low DER Low%
1880 88.8% 649 PRO 681 104.93% BUF 615 94.76%
1881 88.6% 641 CHC 664 103.59% BUF 613 95.63%
1882 87.4% 641 CHC 667 104.06% WOR 590 92.04%
1883 86.3% 617 CLV 651 105.51% PHI 553 89.63%
1884 81.2% 633 PRO 678 107.11% DTN 611 96.52%
1885 83.8% 651 NYG 697 107.07% BUF 613 94.16%
1886 81.1% 644 PHI 674 104.66% KCN 602 93.48%
1887 84.7% 647 DTN 663 102.47% WHS 635 98.15%
1888 83.9% 671 NYG 694 103.43% IND 659 98.21%
1889 82.0% 650 CLV 673 103.54% WHS 622 95.69%

American Association

Year BIP% AA High DER High% Low DER Low%
1882 89.3% 639 CIN 692 108.29% BAL 599 93.74%
1883 87.5% 631 CIN 688 109.03% PIT 591 93.66%
1884 83.7% 640 LOU 670 104.69% WAS 580 90.63%
1885 84.5% 649 STL 679 104.62% PHA 623 95.99%
1886 81.0% 643 STL 667 103.73% PHA 625 97.20%
1887 84.5% 630 CIN 658 104.44% NYP 595 94.44%
1888 82.8% 662 STL 702 106.04% LOU 626 94.56%
1889 81.0% 640 BRO 665 103.91% LOU 604 94.38%

Union Association

Year BIP% UA High DER High% Low DER Low%
1884 80.7% 591 SLM 644 108.97% WIL 539 91.20%

The 1890s
The NL achieved dominance after the Players League war. The modern era of pitching arrived in 1893 when the mound was moved back from 50 feet to its current 60 feet 6 inches; the percentage of balls in play spiked as strikeouts became almost non-existent, while DERs plunged in 1894 and 1895, suggesting more hard-hit balls off pitchers struggling to adjust to the new distance. The 1890 Pirates were the decade’s worst defensive team, the 1895 Baltimore Orioles (with extra balls hidden in the long grass of the outfield among their notorious tricks) the best, although the late-decade Beaneaters (now Braves, featuring Hall of Famers Hugh Duffy and Billy Hamilton in the outfield, Jimmy Collins at third, and Kid Nichols as the staff ace) were consistently dominant and would remain so through 1901. (Collins left in 1901, Duffy the previous year, but Nichols, Hamilton and infield anchors Herman Long, Bobby Lowe and Fred Tenney were there the whole time; Long and Nichols had also been on the 1891 team). Four teams had the NL’s best record while leading the league in DER, three of them Beaneaters teams.
National League

Year BIP% NL High DER High% Low DER Low%
1890 81.8% 663 CHC 696 104.98% PIT 598 90.20%
1891 82.1% 665 BSN 677 101.80% CLV 645 96.99%
1892 82.2% 672 CLV 697 103.72% BLN 625 93.01%
1893 84.4% 654 PIT 673 102.91% WHS 614 93.88%
1894 84.9% 626 NYG 651 103.99% WHS 601 96.01%
1895 85.5% 637 BLN 677 106.28% LOU 606 95.13%
1896 85.9% 649 CIN 673 103.70% WHS 625 96.30%
1897 86.0% 648 BSN 679 104.78% STL 618 95.37%
1898 86.2% 669 BSN 708 105.83% WHS 633 94.62%
1899 86.9% 660 BSN 699 105.91% CLV 610 92.42%

American Association

Year BIP% AA High DER High% Low DER Low%
1890 80.2% 652 COL 692 106.13% PHA 609 93.40%
1891 80.4% 653 COL 677 103.68% WAS 605 92.65%

Players League

Year BIP% PL High DER High% Low DER Low%
1890 82.6% 636 NYI 655 102.99% BUF 612 96.23%

The 1900s
The foul-strike rule, adopted in the NL in 1901 and the AL in 1903, brought back the strikeout and contributed, along with better gloves and more “small ball,” to rising DERs, as the NL in 1907 became the first league ever to turn 70% of balls in play into outs, rising to 71.4% in 1908, a level that would not be matched again until 1942. Schedules also started to be standardized in 1904, settling around 154 games after a decade mostly in the high 120s.
Surprisingly, defense was not the essential element for many of the pennant winners of the Dead Ball Era's first decade – only one AL pennant winner (the 1903 Red Sox, featuring Jimmy Collins yet again) led the league, and only two NL pennant winners. That being said, the Cubs of the Tinker-Evers-Chance era have as good an argument as anyone to be the dominant defensive team of all time. They led the NL in DER eight times in nine years, as well as finishing a close second (at 726, 101.68% of the league) the ninth of those, and second again in 1912. In 1906, on the way to a 116-36 record, they became the first of five post-1900 teams to beat the league average by 5% or more, and their 736 DER bested the second-place Phillies by 29 points and would not be topped (in raw terms) for 62 years, by men using vastly superior equipment. It's possible there was a park factor at work, although Baseball-Reference.com lists West Side Park (where the Cubs played until Wrigley opened in 1916) as if anything a hitters' park until late in the decade; in 1906, the Cubs combined to score and allow 7.24 runs per game at home, 7.03 on the road, with the defense in particular allowing 2.22 runs per game on the road compared to 2.78 at West Side Park. Was it the pitchers? By my rough estimate, the BABIPs against four of the five pitchers on that staff to throw 1000 or more innings as Cubs between 1903 and 1912 – Three Finger Brown, Carl Lundgren, Orval Overall, and Jack Pfiester – varied between .237 and .241 compared to a team average of .241 for all pitchers to throw at least 200 innings on the team over those years, with only one such pitcher above .254. Only Ed Reulbach, at .230, seems to have stood out a bit. That suggests that the team's defense was the predominant factor. The same BABIP figure for the rival Giants, a good but more normal defensive team, was .259 – the 19-point advantage on balls in play for Brown over Christy Mathewson is almost certainly the main explanation for why Brown's ERA was better (1.75 to 1.90) over those years, although of course Brown was nonetheless a great pitcher.
Best AL defensive team? The 1901 Red Sox, another Jimmy Collins squad. Worst team of the decade? The unraveling 1902 Baltimore Orioles, who were deserted by John McGraw in mid-season and relocated to New York (now the Yankees) the following spring (like the prior year’s Milwaukee franchise – there’s a long history of teams getting folded or moved after cellar-dwelling DERs, as terrible defense is often a byproduct of organizational failure).
Also, note the atrocious showings by the late-decade Washington Senators, the team on which Walter Johnson broke in, yet another way in which Johnson’s early career was plagued by bad teams. Johnson would bear some closer study – a quick look suggests that his BABIPs may have been better than his teams’ for much of his career, as if he needed more advantages on top of leading the AL in K/BB ratio nine times, K/9 seven times, fewest BB/9 twice and fewest HR/9 three times (a favorite stat: Johnson in 1918-19 threw 616.1 innings and allowed just two home runs, both of them by Babe Ruth). His BABIP seems to have hit a career low of .219 in 1913 at the same time as his career high 6.39 K/BB ratio, another example of perhaps being both lucky and good, or perhaps there being a correlation between the two.
National League

Year BIP% NL High DER High% Low DER Low%
1900 86.3% 661 BSN 691 104.54% NYG 637 96.37%
1901 83.3% 664 BSN 685 103.16% CIN 640 96.39%
1902 84.3% 674 BRO 696 103.26% STL 648 96.14%
1903 83.5% 664 CHC 681 102.56% STL 647 97.44%
1904 83.7% 688 CHC 709 103.05% PHI 658 95.64%
1905 82.9% 683 CHC 716 104.83% BRO 649 95.02%
1906 82.0% 698 CHC 736 105.44% BSN 670 95.99%
1907 82.8% 702 CHC 730 103.99% BSN 685 97.58%
1908 83.7% 714 PIT 730 102.24% STL 698 97.76%
1909 82.2% 698 CHC 721 103.30% BSN 680 97.42%

American League

Year BIP% AL High DER High% Low DER Low%
1901 86.4% 658 BOS 684 103.95% MLA 647 98.33%
1902 86.2% 671 BOS 686 102.24% BLA 636 94.78%
1903 83.8% 680 BOS 695 102.21% WSH 668 98.24%
1904 82.9% 693 CHW 716 103.32% WSH 668 96.39%
1905 81.8% 697 CHW 721 103.44% NYY 688 98.71%
1906 83.3% 692 CLE 719 103.90% WSH 672 97.11%
1907 83.7% 693 BOS 710 102.45% WSH 666 96.10%
1908 82.7% 700 CHW 719 102.71% NYY 680 97.14%
1909 82.3% 695 PHA 717 103.17% SLB 676 97.27%

The 1910s
Defense had the upper hand in the teens, with DERs regularly topping 70% leaguewide in the second half of the decade, especially in the NL. If top defensive teams winning the pennant were a rarity in the prior decade, they became routine in the teens – five times in the NL, five in the AL. The Red Sox were the decade’s dominant team in the AL both defensively and overall, and continued to lead the league even after the departure in 1916 of Tris Speaker. (Oddly, the Red Sox went from the best DER in the AL in 1912 to the worst in 1913 and back to the best in 1914; more on that below.) Meanwhile, the NL’s revolving door of pennant winners (and World Series doormats) from 1915-19 were generally whoever handled the balls in play best. Yet most of those NL teams didn’t beat the league average by all that much, and the best single-season showing was the 1919 Yankees. The worst, unsurprisingly, was the post-fire-sale 1915 A’s (with a fossilized 40-year-old Nap Lajoie at second and their best remaining player, catcher Wally Schang, playing out of position at third), although the doormat 1911 Braves weren’t far behind.
The Cubs’ defense stopped being dominant with the 1913 departure of Joe Tinker, who went on to anchor the Federal League’s best defense, while Johnny Evers was part of lifting those Braves out of their 1911-12 defensive funk to a slightly above average defensive team in 1914 (they’d been below average in 1913 – that said, I’d expected the 1914 Miracle Braves to be one of the teams that had a huge year defensively, and even with Evers and Rabbit Maranville, they didn’t).
National League

Year BIP% NL High DER High% Low DER Low%
1910 81.4% 688 CHC 708 102.91% STL 673 97.82%
1911 80.1% 684 CHC 698 102.05% BSN 649 94.88%
1912 81.2% 679 PIT 703 103.53% BSN 659 97.05%
1913 81.8% 691 NYG 702 101.59% CIN 684 98.99%
1914 81.5% 698 PIT 712 102.01% PHI 666 95.42%
1915 82.1% 704 PHI 715 101.56% NYG 687 97.59%
1916 82.3% 704 BRO 719 102.13% STL 684 97.16%
1917 83.2% 704 NYG 723 102.70% CHC 691 98.15%
1918 85.2% 707 NYG 723 102.26% BSN 695 98.30%
1919 85.2% 705 CIN 729 103.40% PHI 672 95.32%

American League

Year BIP% AL High DER High% Low DER Low%
1910 81.7% 692 PHA 713 103.03% SLB 663 95.81%
1911 80.6% 662 CHW 675 101.96% WSH 655 98.94%
1912 80.5% 666 BOS 683 102.55% NYY 640 96.10%
1913 81.1% 685 PHA 701 102.34% BOS 670 97.81%
1914 80.2% 692 BOS 709 102.46% CLE 662 95.66%
1915 80.1% 693 BOS 712 102.74% PHA 654 94.37%
1916 80.9% 698 BOS 713 102.15% PHA 668 95.70%
1917 82.4% 704 BOS 724 102.84% PHA 687 97.59%
1918 83.5% 705 BOS 729 103.40% DET 694 98.44%
1919 83.1% 689 NYY 715 103.77% PHA 661 95.94%

Federal League

Year BIP% FL High DER High% Low DER Low%
1914 80.7% 679 CHI 711 104.71% SLM 667 98.23%
1915 81.9% 694 CHI 708 102.02% BAL 660 95.10%

The 1920s
Lower strikeout rates with the lively ball’s arrival were probably the largest factor in the sudden increase in scoring in the Twenties, as even the gradual arrival of home run hitters and a leaguewide rise in walks couldn’t stop the upward march of the percentage of balls in play. But DERs dropped a good 15 points as well.
Defense was slightly more the hallmark of AL than NL pennant winners in the Twenties – six in the AL, four in the NL. Naturally the 1927 Yankees were the best in the league at this, too, their fifth league lead in nine years. And Walter Johnson finally got some real defensive support when the Senators won their two pennants in 1924-25, dropping Johnson’s BABIP from .280 to .248 in 1924.
As discussed in the next decade's section, you have to figure a significant park effect was at work in the fact that the Phillies were dead last in the NL in DER 14 times in their last 17 full seasons in the Baker Bowl, including the NL's worst showing of the decade in 1926. Then again, nearly all of those Phillies teams were terrible teams, with a collective .383 winning percentage and only one winning record, in 1932 when their DER was 98.5% of the league average. And the Phillies had led the league in DER behind Grover Alexander in 1915.
National League

Year BIP% NL High DER High% Low DER Low%
1920 85.3% 693 CIN 708 102.16% STL 678 97.84%
1921 85.6% 680 PIT 696 102.35% PHI 658 96.76%
1922 84.7% 677 NYG 700 103.40% STL 663 97.93%
1923 84.5% 681 CHC 700 102.79% PHI 651 95.59%
1924 84.8% 687 PIT 704 102.47% PHI 665 96.80%
1925 84.3% 676 CIN 689 101.92% PHI 659 97.49%
1926 84.6% 689 STL 707 102.61% PHI 656 95.21%
1927 84.4% 687 PIT 705 102.62% PHI 663 96.51%
1928 83.6% 693 STL 707 102.02% PHI 666 96.10%
1929 83.1% 680 PIT 692 101.76% PHI 662 97.35%

American League

Year BIP% AL High DER High% Low DER Low%
1920 83.7% 677 NYY 689 101.77% PHA 656 96.90%
1921 83.5% 674 BOS 683 101.34% PHA 666 98.81%
1922 83.6% 687 NYY 707 102.91% CLE 669 97.38%
1923 83.1% 686 NYY 709 103.35% WSH 673 98.10%
1924 83.9% 682 WSH 709 103.96% CHW 666 97.65%
1925 83.3% 679 WSH 689 101.47% BOS 662 97.50%
1926 83.2% 689 CLE 702 101.89% DET 677 98.26%
1927 83.6% 684 NYY 701 102.49% SLB 666 97.37%
1928 83.2% 687 PHA 700 101.89% CLE 665 96.80%
1929 83.0% 687 PHA 703 102.33% DET 664 96.65%

The 1930s
1935 saw the arrival of night baseball, which would eventually be a factor in bringing back strikeout rates, as would the growth of relief pitching, still taking its first baby steps in the Thirties; between those factors and more home runs, the AL in 1937 became the first major league in which less than 80% of plate appearances resulted in a ball in play, after being above 83% in the AL and 84% in the NL for much of the Twenties. Six AL pennant winners had the league’s best DER, compared to just two in the NL.
The 30s were the best and worst of times. The Phillies hit their nadir in 1930, at 631 the worst raw DER since 1900 (the 1911 Braves being the only other team since 1906 to finish below 650), the worst relative to the league since the ill-fated 1899 Cleveland Spiders and the only team lower than 95% of the league average since the 1915 A’s. Not for nothing did they post a modern-record 6.71 team ERA, allow 7.69 runs per game, and lose nearly two-thirds of their games even with Lefty O’Doul batting .383/.453/.604 and scoring 122 runs and Chuck Klein (probably the most park-created of all Hall of Famers) batting .386/.436/.687 with 158 runs scored and 170 RBI. Then again, they also had the league’s worst K/BB ratio and allowed the league’s most homers, so it wasn’t all the defense’s fault. And the Phillies left the Baker Bowl for good at the end of June 1938, and still finished last in DER in 1938 and 1941 plus three more times in the mid-1940s.
In the AL, the late-30s St. Louis Browns, presumably despite Harlond Clift at third, were the league’s worst, hitting bottom in 1939. Also in St. Louis, if you’re curious, the 1934 “Gashouse Gang” Cardinals team was league-average.
On the positive end, we have the 1900s Cubs’ top competition for the title of the best defensive team of all time, the 1939 Yankees, the team that Rob Neyer and Eddie Epstein (measuring by runs scored and allowed relative to the league) marked as the greatest team of all time in “Baseball Dynasties,” noting that they led the league in runs scored and fewest runs allowed four years in a row. So it’s not surprising to encounter them here. The Yankees’ DER was the furthest above their league of any team since 1885, and their 730 DER led the league by 35 points. This was part of a string of six straight seasons and 12 in 13 years when they had the league’s most successful defense, starting in Babe Ruth’s last year two years before the arrival of Joe DiMaggio and running clear through World War II. While a number of players appeared on many of those teams (DiMaggio, Tommy Henrich, Frank Crosetti, Red Rolfe, Joe Gordon), the only constants were manager Joe McCarthy and catcher Bill Dickey. (Both had also been on the 1933 team that was last in the AL in DER before cutting back the Babe’s playing time and putting Earle Combs and Joe Sewell, both 34, out to pasture). You have to give McCarthy some of the credit for the Yankees’ consistent defensive excellence, if only in how he chose to distribute playing time.
That said, a significant park effect can’t be discounted here. Yankee Stadium was always a pitcher’s park, and seems to have been a particularly extreme one in 1939: unlike for the Cubs, we have detailed home/road splits for the 1939 Yankees, which show that Yankee hitters had a BABIP of .273 at home, .315 on the road, while Yankee opponents had a BABIP of .248 at home, .267 on the road – combined, .260 at home, .292 on the road. I haven’t had time to run the splits for the Yankees’ whole run in that period – this essay took up quite enough of my time, and it would be a worthwhile project for someone else to carry on further – but even on the basis of the huge split for 1939, as remarkable as the Yankees’ defensive performance was in the McCarthy era, it has to be taken with the same grain of salt as the Baker Bowl era Phillies. (The 1930 Phillies’ home/road BABIP splits were .352/.300 for their offense, .365/.341 for their pitching staff, and a combined line of .358/.321 – a 37-point spread.)
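For anyone who wants to run the same park check on other teams, here is a rough sketch of the computation – the BABIP formula is the standard one, and the balls-in-play counts below are hypothetical placeholders, not the actual 1939 totals:

def babip(h, hr, ab, so, sf=0):
    # Batting average on balls in play
    return (h - hr) / (ab - so - hr + sf)

def pooled_babip(splits):
    # Pool several BABIPs (e.g. own hitters plus opponents), weighting
    # each by its number of balls in play
    hits = sum(rate * bip for rate, bip in splits)
    bip = sum(bip for _, bip in splits)
    return hits / bip

home = pooled_babip([(.273, 2100), (.248, 2200)])  # hitters, opponents (made-up BIP counts)
road = pooled_babip([(.315, 2100), (.267, 2200)])
print(f"home {home:.3f}, road {road:.3f}")  # a lower home figure suggests a pitcher's park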
Speaking of managers, Walter Johnson may not have had great defenses as a pitcher, but as a manager he did better, skippering the Senators to two league-best DERs in four years from 1929-32. And the 1938 Braves became the first Casey Stengel-managed team to lead the league in DER, albeit a squad he inherited from Bill McKechnie, whose 1937 Braves had posted the NL’s best DER of the decade.
National League

Year BIP% NL High DER High % Low DER Low %
1930 82.8% 669 BRO 693 103.59% PHI 631 94.32%
1931 83.4% 687 NYG 706 102.77% PHI 666 96.94%
1932 84.0% 691 PIT 702 101.59% STL 673 97.40%
1933 85.1% 702 NYG 719 102.42% PHI 682 97.15%
1934 82.9% 685 NYG 704 102.77% CIN 666 97.23%
1935 83.2% 686 NYG 707 103.06% PHI 667 97.23%
1936 82.7% 684 CHC 698 102.05% PHI 666 97.37%
1937 81.3% 689 BSN 714 103.63% PHI 670 97.24%
1938 82.1% 697 BSN 711 102.01% PHI 675 96.84%
1939 81.7% 695 CIN 708 101.87% PIT 682 98.13%

American League

Year BIP% AL High DER High % Low DER Low %
1930 82.0% 678 WSH 702 103.54% CLE 661 97.49%
1931 82.0% 683 PHA 708 103.66% SLB 667 97.66%
1932 81.2% 688 WSH 703 102.18% CHW 674 97.97%
1933 81.5% 694 WSH 709 102.16% NYY 682 98.27%
1934 80.2% 684 NYY 703 102.78% CHW 677 98.98%
1935 81.2% 687 NYY 713 103.78% WSH 668 97.23%
1936 80.5% 676 NYY 690 102.07% SLB 657 97.19%
1937 79.5% 683 NYY 697 102.05% SLB 658 96.34%
1938 79.2% 685 NYY 694 101.31% SLB 671 97.96%
1939 79.9% 687 NYY 730 106.26% SLB 660 96.07%

The 1940s
In the 1940s, change was in the winds. The war decimated MLB’s talent level and introduced inferior baseballs (due to wartime shortages) that traveled poorly when hit. DERs rose back above 70% even before the war in the NL, and in 1942 in the AL. After the war, integration followed and the game was off to the races, while night baseball really came into its own.
In the NL, defense was king – seven pennant winners led the league in DER in nine years between 1939-47, plus the 104-win second-place 1942 Dodgers; four pennant winners led the AL, but three of those were the 1941-43 Yankees. The strongest defensive teams of the decade were McKechnie’s 1940 Reds and Lou Boudreau’s 1948 Indians (a team famous for its outstanding infield of Boudreau, Ken Keltner, Joe Gordon and Eddie Robinson), the weakest the 1940 Pirates and 1942 Senators (the difference between the Senators of the mid-40s and the Indians of the 50s explains a lot about Early Wynn’s career). The chicken-egg question remains regarding good defenses and successful managers, as Leo Durocher’s arrival in Brooklyn in 1939 and Billy Southworth’s in St. Louis in 1940 were followed within a few years by the construction of superior defensive teams.
The 1947 Reds were the third and last team to go from first to last in the league in DER in a single season, after the 1913 Red Sox and 1880 Buffalo Bisons:

Team Years DER1 DER2 Change Change %
BUF 1879-80 659 615 -44 93.3%
CIN 1946-47 716 693 -23 96.8%
BOS 1912-13 683 670 -13 98.1%

The Bisons and their ace pitcher, Hall of Famer Pud Galvin, hail from baseball’s ancient past, and the Red Sox were a bit of a fluke, given the small size of their decline and their rapid rebound the following year. What of the 1947 Reds? 1946 was the last season of McKechnie’s career, and McKechnie was notoriously defense-obsessed. The team gave a lot more playing time to 30-year-old shortstop Eddie Miller, outfielder Frank Baumholtz and noodle-armed 35-year-old left fielder Augie Galan. Sidearmer Ewell Blackwell had his big breakthrough season in 1947, improving his K/BB from 1.27 to a league-leading 2.03, but saw his ERA slip slightly from 2.45 to 2.47, while veterans Johnny Vander Meer and Bucky Walters got completely wiped out by the defensive collapse.
National League

Year BIP% NL High DER High % Low DER Low %
1940 81.5% 701 CIN 730 104.14% PIT 676 96.43%
1941 80.9% 704 BRO 732 103.98% PHI 683 97.02%
1942 81.2% 716 BRO 734 102.51% CHC 699 97.63%
1943 82.1% 707 STL 719 101.70% NYG 691 97.74%
1944 82.3% 707 STL 733 103.68% CHC 689 97.45%
1945 82.1% 701 CHC 718 102.43% PHI 674 96.15%
1946 80.2% 709 CIN 716 100.99% PHI 697 98.31%
1947 79.3% 703 BRO 720 102.42% CIN 693 98.58%
1948 79.0% 704 BSN 714 101.42% PHI 694 98.58%
1949 79.3% 707 NYG 722 102.12% CHC 684 96.75%

American League

Year BIP% AL High DER High % Low DER Low %
1940 79.1% 691 CHW 715 103.47% WSH 675 97.68%
1941 79.7% 698 NYY 714 102.29% WSH 680 97.42%
1942 80.9% 706 NYY 721 102.12% WSH 676 95.75%
1943 80.5% 714 NYY 725 101.54% SLB 703 98.46%
1944 81.9% 702 NYY 712 101.42% WSH 692 98.58%
1945 81.3% 707 NYY 716 101.27% CHW 692 97.88%
1946 78.4% 703 NYY 715 101.71% SLB 690 98.15%
1947 78.9% 712 CLE 734 103.09% WSH 697 97.89%
1948 78.8% 704 CLE 731 103.84% SLB 685 97.30%
1949 77.7% 707 CLE 724 102.40% SLB 680 96.18%

Part II here.

A History of Team Defense (Part II of II)

Part I here.
The 1950s
Baseball started moving west with the Braves’ move to Milwaukee in 1953, and the resulting shakeup ended the stranglehold of old, mostly smaller ballparks in the East. High walk rates, more power hitters and a few more strikeouts meant that balls in play rates were dropping, while defenses got stingier – the 71.6% of balls in play turned into outs in the NL in 1956 remains the league record.
I’ve written before about the advantage Casey Stengel’s Yankees got from their defense and how it played into the superior performance of pitchers in pinstripes. But it was the Indians who were the true defensive juggernaut of that era, leading the AL seven times in the ten years from 1947-56. The AL was truly defensively stratified in those years, with the upper tier of the Yankees, Indians and White Sox at the top and weak sisters like the Browns, Senators, A’s and Tigers at the bottom. Park effects were part of that picture for the Yankees – for example, in 1955 the Yankees and their opponents had a BABIP of .265 at home, .278 on the road, compared to .272 at home, .269 on the road for the 1954 Indians.
The 111-win 1954 Indians were the best defensive team of the decade (the 1909 Pirates, who finished one point behind the Cubs, are the only team to win 110 games in a season without leading the league in DER), Durocher’s 1950 Giants the best NL team, the 1955 Pirates and 1950 Browns the worst; the Pirates were perennially hapless. Four pennant-winning teams in each league led the league in DER, although as I’ve noted the Yankees often finished second or third in DER while winning the pennant, and the 1953 Dodgers and 1957 Braves just narrowly missed the league lead.
I’d expected the Ashburn-era Phillies to lead the league more than once; the strangest league leaders were the 1952 Cubs, an also-ran team that featured one of the more plodding sluggers (Hank Sauer) ever to win the MVP.
National League

Year BIP% NL High DER High % Low DER Low %
1950 77.7% 707 NYG 729 103.11% CHC 693 98.02%
1951 78.9% 711 NYG 721 101.41% PIT 697 98.03%
1952 78.1% 713 CHC 723 101.40% PIT 703 98.60%
1953 77.5% 702 MIL 715 101.85% PIT 687 97.86%
1954 77.8% 707 NYG 722 102.12% PIT 687 97.17%
1955 76.8% 714 PHI 728 101.96% PIT 688 96.36%
1956 76.8% 716 BRO 730 101.96% PIT 702 98.04%
1957 76.6% 706 BRO 717 101.56% CHC 698 98.87%
1958 75.8% 703 MLN 721 102.56% LAD 693 98.58%
1959 75.4% 701 CHC 714 101.85% STL 685 97.72%

American League

Year BIP% AL High DER High % Low DER Low %
1950 77.6% 700 CLE 721 103.00% SLB 676 96.57%
1951 78.6% 706 CLE 720 101.98% SLB 686 97.17%
1952 77.9% 713 CHW 723 101.40% DET 700 98.18%
1953 78.5% 706 CHW 720 101.98% DET 682 96.60%
1954 77.9% 711 CLE 735 103.38% PHA 689 96.91%
1955 76.7% 710 NYY 733 103.24% WSH 689 97.04%
1956 75.3% 705 CLE 722 102.41% WSH 683 96.88%
1957 76.6% 713 NYY 727 101.96% WSH 694 97.34%
1958 76.2% 712 NYY 726 101.97% WSH 697 97.89%
1959 76.0% 712 CLE 730 102.53% KCA 691 97.05%

The 1960s
Rising strikeout rates, with the onset of expansion, new pitchers’ parks in LA and Houston, and the expansion of the strike zone in 1963, are a major part of the story of pitching dominance in the Sixties; the AL in 1961, the year of Maris and Mantle, became the first league to see balls in play drop below 75% of plate appearances, and by 1964 it was down to 72.9%, the lowest it would be until 1987. Unsurprisingly, that started to loosen the relationship between defense and success – only three NL pennant winners led the league in DER, four in the AL, and the 1967 Twins came within a game of becoming the first team to finish first while being last in the league in DER.
Meanwhile, the story on balls in play showed a real split between the leagues: DERs actually declined in the NL, while reaching historic highs in the AL. The 724 DER in the AL in 1968 is the highest in Major League history, and the 743 figure by the 1969 Orioles is the highest ever recorded by a team. That Brooks Robinson-Mark Belanger-Davey Johnson infield and Paul Blair-led outfield really was impenetrable, and even adjusted for the league was the best of the decade, powering the O’s to 109 wins. (Home/road split: .275 at home, .278 on the road).
The Dodgers of the Sixties did well on balls in play, even as they dominated the pitcher-controlled aspects of defense (if I recall correctly, the 1966 Dodgers still hold the team K/BB ratio record).
The 1962 Mets, surprisingly, did not have the league’s worst DER (unlike the 1969 Seattle Pilots), finishing a point above the Astros; the 1969 Mets did lead the league (in fact, they led three years in a row from 1968-70), but other surprise teams of the decade did not – the 1967 Red Sox were just below the league average at 715, and the 1960 Pirates were also below average. Probably no team in this sample surprised me more with their poor defensive stats than the Pirates of the 1960s, finishing last in DER in 1961 and 1964 despite a lineup stocked with legendary defensive players like Bill Mazeroski, Roberto Clemente and Bill Virdon as well as other respected glove men like Dick Schofield Sr. The other surprise, more on which later, was the persistent poor performance of the Astros.
The Yankee dynasty’s collapse was reflected defensively, as the Yankees were second in DER in 1964 (at 726), but ninth in 1965 at 707.
National League

Year BIP% NL High DER High % Low DER Low %
1960 75.0% 703 LAD 714 101.56% PHI 694 98.72%
1961 75.0% 699 MLN 721 103.15% PIT 683 97.71%
1962 74.7% 695 SFG 710 102.16% HOU 680 97.84%
1963 74.8% 706 MLN 721 102.12% NYM 694 98.30%
1964 75.7% 698 LAD 709 101.58% PIT 682 97.71%
1965 74.5% 704 LAD 727 103.27% PHI 687 97.59%
1966 75.4% 699 STL 712 101.86% HOU 687 98.28%
1967 75.1% 703 SFG 719 102.28% HOU 683 97.16%
1968 75.8% 707 NYM 719 101.70% HOU 690 97.60%
1969 73.6% 701 NYM 729 103.99% HOU 683 97.43%

American League

Year BIP% AL High DER High % Low DER Low %
1960 75.8% 712 NYY 732 102.81% BOS 688 96.63%
1961 74.7% 708 BAL 731 103.25% KCA 689 97.32%
1962 74.7% 710 NYY 719 101.27% LAA 702 98.87%
1963 74.4% 713 NYY 725 101.68% WSA 701 98.32%
1964 72.9% 711 CHW 733 103.09% BOS 683 96.06%
1965 73.3% 715 CHW 728 101.82% BOS 692 96.78%
1966 73.9% 717 CHW 728 101.53% BOS 704 98.19%
1967 73.4% 718 CHW 735 102.37% MIN 704 98.05%
1968 74.0% 724 BAL 740 102.21% WSA 702 96.96%
1969 73.7% 714 BAL 743 104.06% SEP 691 96.78%

The 1970s
In the 1970s, even after the arrival of the DH, AL teams with top defenses tended to finish first in their divisions – 8 times in 11 years from 1969-79. In the NL, it was a different story, as teams like the Big Red Machine and the late-70s Pirates seemed often to lead the league in years other than the years those same teams finished first. The Dodgers led the league in DER four times between 1972 and 1978, and won the division the three years they didn’t.
You’ve met two of the five teams since 1900 to better the league average in DER by 5% or more, the 1906 Cubs and 1939 Yankees, both great teams that left the rest of their league in the dust. But the third team was one left in the dust by another juggernaut: the 1975 Dodgers, who led the league in DER by 20 points over the 108-win Reds, while finishing 20 games behind them (it didn’t help that the Dodgers underperformed their Pythagorean record by 7 games). Oddly, the very best Dodger defense came in a season when Bill Russell missed a good deal of time, but the then-youthful infield of Garvey, Lopes and Cey was otherwise tremendously durable, while 33-year-old Jimmie Wynn anchored the outfield defense (Wynn had also played on those late-60s Astros teams that perennially finished last in DER; go figure). Park effect? The Dodgers and their opponents combined for a .268 BABIP at home, .276 on the road, so the park seems to have had something to do with it. What about a pitching staff effect? Knuckleballer Charlie Hough had the team’s lowest BABIP (.219), but Hough threw only 61 innings. 321 innings were thrown by curveballer Andy Messersmith, and there may be something to that – pitcher BABIPs are available since 1950, and Messersmith has the lowest career BABIP of any pitcher with 2000 or more career innings, at .243 (rounding out the top 10, he’s followed by Catfish Hunter at .246, Hoyt Wilhelm at .250, Jim Palmer at .251, Hough at .253, Mudcat Grant at .258, Koufax at .259, Early Wynn at .260, and Tom Seaver and Warren Spahn at .262). The fact that that persisted across three teams (Angels, Dodgers and Braves) before he broke down in 1977, and that only Hunter is even close to him, suggests that Messersmith may have had some ability in that area. On the other hand, you have knuckle-curve specialist Burt Hooton making the case for it being the team: Hooton’s BABIPs with the Cubs from 1972-74 were .278, .303 and .322, and .400 in the early going in 1975; after he arrived with the Dodgers it dropped to .236, and was .253 over the next three seasons. Whether that’s the defense or the park, it’s evident that Hooton’s sudden improvement was due to the environment he pitched in.
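A methodological aside: a career BABIP like Messersmith’s .243 is properly a pooled rate over all his balls in play, not an average of his season rates. A quick sketch, with made-up season lines rather than his actual totals:

def career_babip(seasons):
    # seasons: (H, HR, AB, SO, SF) allowed per season; pools hits on balls
    # in play over total balls in play rather than averaging season rates
    hits_in_play = sum(h - hr for h, hr, ab, so, sf in seasons)
    bip = sum(ab - so - hr + sf for h, hr, ab, so, sf in seasons)
    return hits_in_play / bip

print(round(career_babip([(210, 18, 900, 160, 6), (185, 15, 840, 150, 5)]), 3))  # .257 on these fake lines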
The best AL defense of the decade was the Orioles again in 1973 (featuring much of the same cast, but this time with Bobby Grich at second); Earl Weaver’s defenses remained outstanding for years, as did Billy Martin’s when he arrived in New York (and brought in Paul Blair, among others). The worst were the 1974 Cubs and 1970 White Sox. Those Cubs featured Bill Madlock at third, 31-year-old Don Kessinger at short, an outfield of three guys who later became professional pinch hitters (Rick Monday, Jose Cardenal and Jerry Morales), and a DH at first (Andre Thornton). That said, BABIPs were higher at home – .312 at home, .296 on the road – so even aside from the home run ball, the park likely exaggerated the Cubs’ defensive failings in that era. Not for nothing did Rick Reuschel retire with a career BABIP of .294.
National League

Year BIP% NL High DER High % Low DER Low %
1970 73.3% 697 NYM 721 103.44% STL 686 98.42%
1971 75.6% 706 CIN 727 102.97% STL 689 97.59%
1972 74.7% 707 LAD 721 101.98% HOU 695 98.30%
1973 75.1% 704 LAD 729 103.55% CHC 687 97.59%
1974 75.9% 702 ATL 720 102.56% CHC 672 95.73%
1975 76.3% 700 LAD 737 105.29% CHC 673 96.14%
1976 77.0% 704 LAD 723 102.70% SFG 691 98.15%
1977 75.2% 698 PIT 711 101.86% ATL 677 96.99%
1978 76.2% 706 MON 718 101.70% CIN 697 98.73%
1979 76.3% 700 HOU 719 102.71% CHC 680 97.14%

American League

Year BIP% AL High DER High % Low DER Low %
1970 73.7% 710 OAK 728 102.54% CHW 684 96.34%
1971 74.8% 714 OAK 730 102.24% CHW 701 98.18%
1972 75.3% 718 BAL 740 103.06% BOS 699 97.35%
1973 75.7% 701 BAL 731 104.28% TEX 683 97.43%
1974 77.0% 702 BAL 716 101.99% MIN 691 98.43%
1975 76.1% 703 BAL 731 103.98% DET 683 97.16%
1976 77.7% 705 NYY 729 103.40% CHW 693 98.30%
1977 76.3% 698 NYY 714 102.29% CHW 682 97.71%
1978 77.7% 706 NYY 723 102.41% TOR 690 97.73%
1979 77.4% 700 BAL 727 103.86% OAK 678 96.86%

The 1980s
DERs in the AL finally dropped back in line with the NL by the late 70s, and the two leagues have mostly remained even since then. Balls in play percentages dropped in 1986, perhaps reflecting the rise in strikeouts occasioned by, among other things, the popularity of the split finger fastball and the increasing specialization of bullpens.
Best defensive team of the 80s: the Billyball A’s of 1980. In the NL: the far less remembered 1982 Padres. Worst: the 1981 Indians and 1984 Giants. The Whitaker-Trammell-Chet Lemon Tigers also stand out, although they are not as well remembered as a defensive unit (but see the career of Walt Terrell); their DER was 713 in their big year in 1984, 705 in 1987.
The 1980s might be the decade that defense mattered least. Only two teams, the 1985 Blue Jays and 1989 A’s, finished first while leading the league in DER; the 1982 Giants came within two games of being the first team to finish first while being last in the league in DER, and a year later the “Wheeze Kids” Phillies turned the trick, remaining to this day the only team to be first in the standings and last in DER (the league hit .286 on BABIP against Cy Young winner John Denny, .329 against Steve Carlton). Those two teams had two things in common – an aging lineup (which for the Giants included Darrell Evans and Reggie Smith, and for the Phillies Pete Rose, Tony Perez, Garry Maddox, Mike Schmidt and Gary Matthews) and, specifically, Joe Morgan at second base. I have to wonder about Morgan – it’s not a surprise that he would be found on poor defensive teams as his bat kept a decaying glove in the lineup in his late 30s (don’t forget, these were still good teams), but the Reds’ only league lead in DER in the 70s was in 1971, the year before Morgan’s arrival, and the Astros had routinely finished last during his years as their second baseman in the 60s. It could all be a coincidence, as Morgan’s defensive stats seem to suggest he was a fine glove man in his prime, but it bears closer examination.
The 1989 Yankees became the first Yankees team to finish last in the league in DER since 1933. The Mets finished second in the NL in DER in 1985, third in 1986. The Red Sox at 686 were below average in 1986, but at least not in the cellar as they were in 1985 and 1987.
National League

Year BIP% NL High DER High % Low DER Low %
1980 77.0% 700 LAD 715 102.14% CHC 680 97.14%
1981 77.2% 704 HOU 721 102.41% CHC 686 97.44%
1982 76.3% 701 SDP 725 103.42% SFG 688 98.15%
1983 74.9% 702 HOU 718 102.28% PHI 685 97.58%
1984 75.1% 698 SDP 721 103.30% SFG 676 96.85%
1985 75.0% 706 STL 718 101.70% ATL 691 97.88%
1986 73.3% 700 HOU 721 103.00% CHC 678 96.86%
1987 73.1% 696 PIT 711 102.16% CHC 677 97.27%
1988 75.3% 708 CIN 723 102.12% ATL 692 97.74%
1989 74.3% 709 SFG 725 102.26% PHI 699 98.59%

American League

Year BIP% AL High DER High % Low DER Low %
1980 77.7% 698 OAK 727 104.15% TEX 676 96.85%
1981 77.6% 711 DET 740 104.08% CLE 678 95.36%
1982 76.6% 704 DET 725 102.98% CHW 688 97.73%
1983 77.0% 699 DET 726 103.86% CAL 683 97.71%
1984 76.1% 699 BAL 715 102.29% SEA 683 97.71%
1985 75.2% 703 TOR 724 102.99% BOS 690 98.15%
1986 73.5% 699 DET 719 102.86% SEA 670 95.85%
1987 72.7% 697 CHW 714 102.44% BOS 674 96.70%
1988 75.1% 702 DET 718 102.28% CLE 692 98.58%
1989 75.3% 698 OAK 715 102.44% NYY 683 97.85%

The 1990s
DERs dropped sharply in 1993, inaugurating the era of…well, the Steroids Era, if you prefer. Or in the NL, perhaps the Mile High/Coors era. There were also ever fewer balls in play, with more and more homers, strikeouts and walks. Four NL teams finished first in DER and first in their division, three AL teams including the 1998 Yankees (the only Jeter-era Yankees team to finish either first or last in DER).
The worst defensive teams of the decade were the 1999 Rockies and 1997 A’s (the start of the “Moneyball” era – the A’s often fielded Jason Giambi and Matt Stairs in the outfield corners – although the winning A’s teams of a few years later would be above-average defensively, leading the AL in 2005). The Rockies’ home/road splits were so vast – .374 at home, .306 on the road in 1999 – that it’s almost impossible to evaluate their defense as such.
The 1990s also brought us the fourth of the five great defensive teams, the 1999 Reds, who led the league by a margin of 17 points over the Mets on the way to losing a one-game playoff for the wild card when their bats were stifled by Al Leiter. That Reds team is not as widely recalled as a great defense – it was the Mets that year who got the Sports Illustrated cover asking if they had the best infield ever – but with Barry Larkin, Mike Cameron and Pokey Reese, they had an outstanding defensive unit. Their home/road splits – .306 at home, .312 on the road – suggest that they did it without a huge amount of help from their home park.
National League

Year BIP% NL High DER High % Low DER Low %
1990 74.4% 701 MON 713 101.71% ATL 676 96.43%
1991 74.0% 706 ATL 714 101.13% NYM 689 97.59%
1992 74.8% 705 CHC 716 101.56% LAD 685 97.16%
1993 74.2% 692 ATL 711 102.75% COL 664 95.95%
1994 72.8% 688 SFG 707 102.76% COL 664 96.51%
1995 71.9% 688 CIN 699 101.60% PIT 669 97.24%
1996 71.5% 687 STL 706 102.77% HOU 667 97.09%
1997 71.2% 688 LAD 706 102.62% COL 667 96.95%
1998 71.3% 689 ARI 704 102.18% FLA 669 97.10%
1999 70.7% 687 CIN 722 105.09% COL 659 95.92%

American League

Year BIP% AL High DER High % Low DER Low %
1990 74.4% 699 OAK 732 104.72% CAL 681 97.42%
1991 74.1% 699 CHW 728 104.15% CLE 678 97.00%
1992 75.0% 702 MIL 725 103.28% TEX 680 96.87%
1993 73.7% 693 BAL 704 101.59% MIN 679 97.98%
1994 72.3% 687 BAL 706 102.77% SEA 669 97.38%
1995 72.3% 690 BAL 716 103.77% DET 672 97.39%
1996 71.7% 683 MIN 694 101.61% BOS 665 97.36%
1997 71.6% 684 BAL 699 102.19% OAK 660 96.49%
1998 72.0% 686 NYY 708 103.21% TEX 668 97.38%
1999 71.8% 683 ANA 699 102.34% TBD 661 96.78%

The 2000s
Is defense the new market inefficiency? Maybe in the National League, as eight first-place teams led the league in DER between 2000 and 2010 compared to three in the AL (plus the 2002 Angels, who didn’t finish first but did win 99 games and the World Series). Even with BIP percentages dropping, marginal advantages in defense can still help make a division winner.
Worst DERs of the decade: the 2007 Rays and Marlins, both scraping just above 650. Best in the NL: the 2009 Dodgers. And the fifth and final team to beat the league by 5% or more – indeed, at 105.52%, second only to the 1939 Yankees – was the 2001 Mariners, who tied the 1906 Cubs’ record of 116 regular season wins. The Mariners featured Ichiro, John Olerud, Bret Boone, Carlos Guillen, and yes, Mike Cameron in center again. They got some help from Safeco (home/road split of .300/.322), and they led the AL again in 2003 (Cameron’s last year there) and 2004.
Then there’s the 2007-08 Rays. As I noted before the 2008 season, Baseball Prospectus’ optimistic PECOTA projection for the Rays required them to massively improve on their MLB-worst team defense; as I noted that October, they did just that, to the point where nearly the entire turnaround to a pennant-winning team was a function of becoming MLB’s best defensive team in one year. This made them just the ninth team ever to go worst-to-first in their league in DER in one year (other unsurprising names on that list include the Billyball A’s and the 1991 Braves), and aside from a team from 1878, Tampa’s defensive improvement was the largest leap of any of those teams, a 56-point or 8.6% improvement, which made their pitching staff much better without changing its personnel. The Rays did this returning five regulars – Carl Crawford, BJ Upton, Akinori Iwamura, Carlos Pena and Dioner Navarro – although Upton in 2007 was still learning center field as a new position, and Iwamura moved from third to second in 2008. Adding Evan Longoria and Jason Bartlett, plus clearing out some less mobile players and letting the incumbents settle in, led to a historic turnaround:

Team Years DER1 DER2 Change Change %
CIN 1877-78 561 638 77 113.7%
TBR 2007-08 652 708 56 108.6%
CLV 1891-92 645 697 52 108.1%
PHI 1914-15 666 715 49 107.4%
OAK 1979-80 678 727 49 107.2%
BOS 1913-14 670 709 39 105.8%
ATL 1990-91 676 714 38 105.6%
WSH 1923-24 673 709 36 105.3%
NYY 1933-34 682 703 21 103.1%
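The Change and Change % columns here, and in the earlier first-to-last table, are just simple arithmetic on the raw three-digit DERs; Tampa Bay’s line, for instance:

der_2007, der_2008 = 652, 708
print(der_2008 - der_2007)           # Change: 56
print(f"{der_2008 / der_2007:.1%}")  # Change %: 108.6%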

National League

Year BIP% NL High DER High % Low DER Low %
2000 70.3% 689 CIN 702 101.89% MON 672 97.53%
2001 70.4% 693 ARI 703 101.44% MON 682 98.41%
2002 71.2% 695 LAD 716 103.02% SDP 675 97.12%
2003 71.6% 694 SFG 710 102.31% COL 678 97.69%
2004 71.2% 693 LAD 711 102.60% COL 677 97.69%
2005 72.0% 693 HOU 705 101.73% COL 670 96.68%
2006 71.2% 690 SDP 710 102.90% PIT 674 97.68%
2007 71.5% 688 CHC 704 102.33% FLA 659 95.78%
2008 70.5% 689 CHC 703 102.03% CIN 671 97.39%
2009 70.2% 692 LAD 713 103.03% HOU 677 97.83%

American League

Year BIP% AL High DER High % Low DER Low %
2000 71.7% 684 ANA 699 102.19% TEX 667 97.51%
2001 72.3% 689 SEA 727 105.52% CLE 670 97.24%
2002 72.4% 695 ANA 718 103.31% CLE 674 96.98%
2003 73.2% 694 SEA 721 103.89% TEX 674 97.12%
2004 72.1% 689 SEA 699 101.45% KCR 674 97.82%
2005 73.5% 694 OAK 715 103.03% KCR 666 95.97%
2006 72.6% 685 DET 701 102.34% TBD 671 97.96%
2007 72.1% 684 BOS 704 102.92% TBD 652 95.32%
2008 71.9% 688 TBR 708 102.91% TEX 666 96.80%
2009 70.8% 688 SEA 712 103.49% KCR 675 98.11%

The 2010s
History continues to march on: the NL in 2010 became the first league in baseball history to have less than 70% of all plate appearances result in a ball put in play.
2011 stats are through May 31, 2011. DERs can be volatile in-season; I noted a few weeks ago that the Astros were at 648, having been at 633 around the beginning of May, which would have put them on pace to be the first team since the 1930 Phillies to finish below 650, but since replacing Angel Sanchez with Clint Barmes they’ve been on an upward trajectory, and are no longer even last in their division. As you can see, the Cubs are having a terrible defensive year, while the Braves and those Rays again (even sans Carl Crawford and Jason Bartlett) are flying high. The AL (unlike the NL) is above 700 this season, the first time either league has cracked 700 since 1992.
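The volatility is just small-sample math: in-season DER is a cumulative rate, so a few good or bad weeks move it a long way in April and barely at all in September. A sketch of a running season-to-date DER, with made-up game lines:

def running_der(games):
    # games: (balls_in_play, hits_allowed_in_play) per game;
    # yields the season-to-date DER after each game
    bip = hits = 0
    for g_bip, g_hits in games:
        bip += g_bip
        hits += g_hits
        yield round((bip - hits) / bip, 3)

print(list(running_der([(28, 6), (30, 11), (27, 8), (29, 13), (31, 7)])))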
National League

Year BIP% NL High DER High % Low DER Low %
2010 69.7% 689 SFG 707 102.61% PIT 671 97.39%
2011 70.2% 695 ATL 716 103.02% CHC 665 95.68%

American League

Year BIP% AL High DER High % Low DER Low %
2010 71.4% 694 OAK 711 102.45% KCR 679 97.84%
2011 71.6% 702 TBR 723 102.99% CHW 691 98.43%

Part I here.

Bruce Springsteen and the Right

When New Jersey’s Republican governor, Chris Christie, was sworn into office, he chose to celebrate at his inauguration by joining a Bruce Springsteen cover band in singing the Boss’ signature anthem, ‘Born to Run’. Governor Christie hails from Bruce’s home state of New Jersey, and his zealous Springsteen fandom is perhaps unusually dedicated for a politician. But it also symbolizes a paradox: while Springsteen has long been open about his left-wing political views and has hit the campaign trail for the last two Democratic presidential candidates, he remains enduringly popular with a broad segment of conservatives and Republicans. In part, that’s for the obvious reason: Bruce is a rock legend with a ton of fans, so it should be no surprise that he has fans of every political persuasion. It’s also partly demographic: Bruce’s fans tend to be disproportionately white and, increasingly, older, and those are more conservative groups than the population at large. But my own anecdotal sense is that Bruce’s fanbase is – if anything – more conservative-leaning than those factors alone would explain, and certainly not markedly more liberal. Speaking as a conservative and a longtime Springsteen diehard, let me offer some theories as to why that is. This is not an essay dedicated to claiming Springsteen for the Right, or arguing that he’s unwittingly some sort of crypto-conservative, although I do note at a few points conservative themes in his writing and his life. Rather, my argument is that the things that appeal to fans of Bruce Springsteen and his music are, quite logically, most appealing to conservatives.
Generally, we conservatives have pretty low expectations, politically, for our pop-culture icons. We understand that most of them don’t agree with us on politics or policy. So what we look for are artists who have some tolerance and respect for us, share some themes with our worldview, and are sometimes one of the good guys on something. Bruce delivers on all counts.


Science and its Enemies on the Left: An Update

Scientific integrity and scientific progress continue to take a beating from the Left.
In Part I of my series of essays on Science and its Enemies on the Left, I looked at the toll of junk science, quackery and anti-technological Luddism and the role of the social and political Left in promoting all three. In Part II, I looked at politicized science (both the misuse of science by politicians and the politicization of scientists themselves) and the temptations presented to scientists by their ability to gain power through science.
I’m overdue to finish Part III of the series, but in the meantime, there have been enough additional examples of my thesis that it’s worth taking an updated look at the myriad ways in which the agenda and interest groups of the political Left stand in the way of scientific integrity and scientific progress.


Science And Its Enemies On The Left, Part II(A)

In the first installment of this series, I looked at the real dangers to scientific integrity and scientific progress presented by junk science, quackery and Luddism promoted and practiced by the cultural and political Left, including the use of bad science in product liability lawsuits and the Left’s attacks on vaccination, nuclear power and genetically engineered crops.
In this second part, we look at politicized science and the temptations of power. Part II is posted in its entirety at The New Ledger but my site won’t support a single post that long.

III. Politicized Science
Many of the worst kinds of junk science and quackery are to be found when science is used to advance political agendas. The corrupting influence of money has nothing on the corrupting influence of political power. And contrary to what the Left may wish you to believe, the espousal of left-wing causes that advocate the expansion of such power is not an ennobling but a corrupting influence on scientific integrity. As I will discuss below, the current controversy involving climate researchers – the “Climategate” scandal triggered by the release of emails by the Climate Research Unit (CRU) at the University of East Anglia in Great Britain – vividly illustrates this.
There are two main hazards presented when science is marshalled in political argument. One, politicians may take scientific data gathered in good faith and misrepresent, overstate or suppress it – witness John Kerry overstating the growth of carbon emissions by a factor of 32, a recent example that didn’t stand up to even minimal scrutiny. And two, scientists themselves may become willing pawns in the circulation of bad science for political ends. Recent history shows that the agenda of greater government control of society pushed by the Democrats and others on the Left has often been abetted by bad science.
A. The Politics of Stem Cell Research
The most notorious recent example of politicians running far ahead of any scientific basis for their claims, of course, came from Democratic vice-presidential candidate John Edwards, who in the course of a diatribe about the miraculous promise of embryonic stem cell research, declared in October 2004, the day after the death of actor Christopher Reeve:

If we do the work that we can do in this country, the work that we will do when John Kerry is president, people like Christopher Reeve are going to walk, get up out of that wheelchair and walk again.

Nancy Pelosi likewise claimed that embryonic stem cells had “the biblical power to cure,” and Ron Reagan told the 2004 Democratic Convention, “How’d you like to have your own personal biological repair kit standing by at the hospital? Sound like magic? Welcome to the future of medicine.” Of course, no such thing was or is imminent:

In January 2003, a science writer for the New York Times admitted: “For all the handwringing by scientists, you might think that therapeutic cloning is on the verge of curing a disease or two. . . . Almost all researchers, when questioned, confess that such accomplishments are more dream than reality.”

But Edwards and Pelosi had elections to win. And scientists who should have known better went along for the ride:

In the summer before the 2004 presidential election, Ron McKay, from the National Institutes of Health, admitted that he and his fellow scientists had generally failed to correct the media’s false reports about the promise of stem cells – but that was all right, he told the Washington Post, since ordinary people “need a fairy tale.” They require, he said, “a story line that’s relatively simple to understand.”

In fact, the hot story in embryonic stem cell research in the middle years of the Bush Administration was South Korean researcher Woo Suk Hwang, who was looking at the use of stem cells for spinal cord research and claimed to have implanted cloned human stem cells in a cloned dog – results that turned out to be fraudulent. And proponents of embryonic stem cell research had fallen for it:

For all the major scientific journals, embryonic research had become what Robert P. George and Eric Cohen would call “a litmus test for being pro-science and the central front in the alleged war of scientific reason against religious barbarians.” Science magazine had fast-tracked Hwang’s work to let America know the cost of President Bush’s refusal to fund embryonic stem-cell research. Scientific American published a mea culpa for all scientific journals, and it is, George and Cohen pointed out, “remarkable for both its honesty and remorse: ‘Hwang is guilty of raising false expectations, but too many of us held the ladder for him.'”

This would not be the last time scientists made themselves willing pawns of the Left at the expense of their integrity.

Science And Its Enemies On The Left, Part II(B)

B. Anthropogenic Global Warming

More recently than the stem cell controversy, we have the series of mushrooming controversies – most spectacularly the “Climategate” scandal – over Anthropogenic Global Warming (AGW), i.e., the theory that human industry is responsible, by means of carbon emissions, for an upward trend in global temperatures. AGW is very important to the Obama Administration and its allies in the Democratic Party and on the international Left; recall Barack Obama’s grandiloquent pronouncement, upon clinching the Democratic nomination in June 2008, that people would remember “this was the moment when the rise of the oceans began to slow and our planet began to heal”. But AGW theory and its adherents are rotten with bad science.

1. The EPA Report

The first sign that the new Administration was willing to push the barriers between science and politics in support of its AGW agenda was this spring’s flap over the Obama Administration’s suppression of an EPA report that contradicted the agency’s decision to classify CO2 (the most natural of gases, being that it is exhaled by human beings) as a “pollutant” – a decision that has been used to justify “cap-and-trade” legislation as well as administrative actions on the issue without the need for legislation (the latter being supported by an agency “finding” released in April by the EPA and finalized in November that greenhouse gases threaten public health and welfare). This episode was a fairly classic example of how government policymaking in areas of scientific expertise remains more about politics than about science. Read the summary overview of that report here, and Ben Domenech’s writeup here. Michelle Malkin sums up the kind of critique presented in the report:

[I]t spotlights EPA’s reliance on out-of-date research, uncritical recycling of United Nations data, and omission of new developments, including a continued decline in global temperatures and a new consensus that future hurricane behavior won’t be different than in the past.

Chris Mooney, the liberal author of the “Republican War on Science” book, went so far as to argue that because he disagreed with its conclusions, the EPA was right to suppress the report. Of course, this is rather a far cry from the arguments made during the Bush years about the dangers of suppressing scientific skepticism and dissent; the orthodoxy must be enforced.

Given its policy aims, it is not surprising that the Administration was hesitant to publish a report that contradicted the AGW narrative. In fact, the AGW hypothesis presents the most egregious example in recent years – in terms of its sheer scale – of thoroughly politicized science. The AGW debate merits consideration at some length here because of its centrality to a policy debate affecting a vast proportion of human economic activity and the copious examples it provides of the corruption of politicized science. Put simply, any reasonable person who looks at the evidence must conclude that the proponents of AGW theory are political advocates first and scientists, if at all, a distant second.
Now, it may well be true – it is certainly possible – that the Earth is presently in a warming trend, and that such a trend can be projected into the future, and that human activity is responsible for that trend, and even that changes in future economic structures could alter that trend. All of that may be true, and it may be false; science is supposed to help us find the answers to such questions, and to tell us honestly if the answers cannot in confidence be found. Science is not about identifying what is possible or plausible or arguable and then asserting it as fact; it’s about following the evidence wherever it may lead, to determine whether a particular hypothesis is proven, disproven, unproven or inherently unprovable. (Unprovable theories aren’t without their uses in science, if they remain the most likely explanation for a set of facts – but such explanatory theories ought not to be asserted as fact, and they make a shaky basis for sweeping and disruptive public policy initiatives.)

If you were to construct a checklist of the warning signs of bad science, the campaign to persuade the public of AGW would tick off basically every box: the refusal to share data, to the point of outright destroying it; the manipulation of the peer-review process to skew results; the constant changing of models and predictions to avoid having them subject to testing against hard evidence; the campaign of alarmism and demonization of skeptics; the rank appeals to authority and consensus in place of reasoned discussion of the evidence. Only the most credulous rubes could believe the proponents of AGW without a raised eyebrow at these tactics.

2. Warming? What Warming?

The reason why AGW has such political salience, of course, is that it is used as justification for vast governmental controls over economic activity – long a project of the Left, but now with the newly-added patina of physical science as support for the same old programs. In order to justify the massive dislocations that would be caused by such controls, it is necessary not only that AGW be unquestioned, but that it be menacing; thus, we get things like a scientific advisor to the British Government claiming that AGW will annihilate 90% of the world’s population if the temperature rises four degrees Celsius. And in some cases, the rush to make dire predictions founders on the most banal forms of sloppiness, as when the IPCC predicted the demise of Himalayan glaciers by 2035, when the data said 2350. A digit here, a digit there…

The need to generate predictions of doom is a double-edged sword. One of the problems at the heart of AGW theory, and which has caused no end of difficulty for its proponents, is that it is a predictive model, yet proponents of the theory keep having to change what it predicts to avoid ever allowing the theory to be falsifiable. A theory of global warming, after all, presupposes that the Earth is getting warmer, and indeed the entire basis for convincing anyone that the theory holds water is to point to the correlation between increasing industrial emissions of carbon and recent increases in global temperature. But even before you get to the questions of (1) whether the historical temperature readings are accurately recorded and presented and (2) whether correlation equals causation, you run up against the fact that persistent alarmist predictions that the warming trend would continue have not panned out.

As you may recall, the headline-grabber that made AGW a political issue in the 1990s was the famous “hockey stick” graph produced by Penn State climatologist Michael Mann, so-called because it showed a sharp upward spike in global temperatures, shaped like the blade of a hockey stick, near the end of the 20th century. The hockey stick, in turn, was premised in good part upon historical temperature data derived from a database of tree ring measurements maintained by the CRU. Mann’s hockey stick was never the sole source of AGW theory, and the CRU was never the sole source of historical data, but the hockey stick graph was central to the project of capturing the public imagination and turning a scientific theory into a political juggernaut. The clear implication to anyone looking at the hockey stick was that at precisely the time of accelerating industrialization, we had entered a period of accelerating increase in global temperatures that would continue unchecked into the future. Correlation being easily confused with causation, much of the public simply accepted that the increase in carbon emissions resulting from increasing industrialization must have been the cause of the temperature spike; the two patterns were too visually striking to be coincidence.

While many scientists were convinced of the logic of computer models of how a “greenhouse effect” would work in transmuting carbon emissions into increased temperatures, scientists could never prove that their models of how carbon emissions affected the Earth’s temperature were correct; you can’t conduct an experiment on something as large and complex as a planet and its entire surroundings in the solar system, and there was no historical precedent for the Earth’s industrialization, only a long history on this and other planets of climates changing without human intervention. But with the hockey stick, nobody needed to question the underlying logic of causation any more than they do in the case of lung cancer and smoking (i.e., it’s still not known how smoking causes lung cancer, but the statistical correlation over innumerable studies covering a very large sample is so strong that nobody today seriously disputes the causal connection despite the absence of a known mechanism – much of epidemiology works that way).

Unfortunately for the proponents of AGW, it turns out in retrospect that the hockey stick was just a figment of small and incomplete samples. You can read fuller explanations here and here, but I will summarize them briefly. Basically, the original “hockey stick” did two things: not only did it show a sharp upward spike in temperatures in the late 20th century, but it also rebutted the contention that this could be a natural phenomenon by showing the lowest temperatures in 1032, in the midst of what had been believed to be the “Medieval Warm Period.” That hockey stick was premised on a 1995 paper that “depended on 3 short tree ring cores from the Polar Urals whose dating was very problematic,” and when additional data became available in 1999, the updated temperature series was not published, but rather replaced with a new study from Yamal, also in Russia. But to skeptic Steve McIntyre, the Yamal data – collected in two sets – didn’t add up, and he embarked on a years-long battle to get all the data to review independently. When he finally did, in September 2009, the resulting sample – using a larger sample size for late 20th century data – changed the shape of the “stick” to eliminate the blade (as well as modifying the medieval results), leaving something much more clearly resembling a random walk of statistical noise. You can see the results in this graph from McIntyre’s site: the red line is the Mann/CRU hockey-stick graph, the black line is the data left out of the stick, and the green line is what you get when you combine the two sets:

The bottom line:

At least eight papers purporting to reconstruct the historical temperature record may need to be revisited, with significant implications for contemporary climate studies, the basis of the IPCC’s assessments. A number of these involve senior climatologists at the British climate research centre CRU at the University of East Anglia. In every case, peer review failed to pick up the errors.

The hockey stick isn’t the only such example, as illustrated by National Oceanic and Atmospheric Administration (NOAA) data wrongly showing a non-existent and persistent spike in ocean temperatures in 2001.

If you had – as many AGW proponents did, in the 1990s – begun to make short-term predictions about climate trends along the lines of a continuation of the Mann/CRU hockey stick trends, you would have been grievously wrong, as in fact all such predictions have proven to be. Since AGW rose to prominence as a political project, the past decade has shown no growth in global temperatures following the natural El Nino temperature surge of 1998. One study after another has shown that the Earth simply has not gotten warmer in the past 11 years:

[In the fall of 2009], Britain’s Hadley Centre for Climate Prediction and Research added more fuel to the fire with its latest calculations of global average temperatures. According to the Hadley figures, the world grew warmer by 0.07 degrees Celsius from 1999 to 2008, and not by the 0.2 degrees Celsius assumed by the United Nations Intergovernmental Panel on Climate Change. And, say the British experts, when their figure is adjusted for two naturally occurring climate phenomena, El Niño and La Niña, the resulting temperature trend is reduced to 0.0 degrees Celsius — in other words, a standstill.

It’s not just temperature. Sea ice levels provide another example of how testing the AGW hypothesis by treating it as a predictive model has yielded more failures than successes:

[M]ean ice anomaly — defined as the seasonally-adjusted difference between the current value and the average from 1979-2000, varies much more slowly. That anomaly now stands at just under zero, a value identical to one recorded at the end of 1979, the year satellite record-keeping began.

+++

Earlier this year, predictions were rife that the North Pole could melt entirely in 2008. Instead, the Arctic ice saw a substantial recovery. Bill Chapman, a researcher with the UIUC’s Arctic Center, tells DailyTech this was due in part to colder temperatures in the region. Chapman says wind patterns have also been weaker this year. Strong winds can slow ice formation as well as forcing ice into warmer waters where it will melt.

Why were predictions so wrong? Researchers had expected the newer sea ice, which is thinner, to be less resilient and melt easier. Instead, the thinner ice had less snow cover to insulate it from the bitterly cold air, and therefore grew much faster than expected, according to the National Snow and Ice Data Center.

Likewise, a 2007 report noted that Antarctic “sea ice coverage has grown to record levels since satellite monitoring began in 1979, according to peer-reviewed studies and scientists who study the area.”

The response of proponents of AGW: change the predictions so they don’t risk being disproven by events, as illustrated by this report from September 2009:

Forecasts of climate change are about to go seriously out of kilter. One of the world’s top climate modellers said Thursday we could be about to enter one or even two decades during which temperatures cool.
“People will say this is global warming disappearing,” he told more than 1500 of the world’s top climate scientists gathering in Geneva at the UN’s World Climate Conference.

“I am not one of the sceptics,” insisted Mojib Latif of the Leibniz Institute of Marine Sciences at Kiel University, Germany. “However, we have to ask the nasty questions ourselves or other people will do it.”
Few climate scientists go as far as Latif, an author for the Intergovernmental Panel on Climate Change. But more and more agree that the short-term prognosis for climate change is much less certain than once thought.

+++

In candid mood, climate scientists avoided blaming nature for their faltering predictions, however. “Model biases are also still a serious problem. We have a long way to go to get them right. They are hurting our forecasts,” said Tim Stockdale of the European Centre for Medium-Range Weather Forecasts in Reading, UK.

More here. Indeed, a 2007 study all but admitted that the prediction game would have to stay vague:

Climate change models, no matter how powerful, can never give a precise prediction of how greenhouse gases will warm the Earth, according to a new study.

+++

The analysis focuses on the temperature increase that would occur if levels of carbon dioxide in the atmosphere doubled from pre-Industrial Revolution levels. The current best guess for this number – which is a useful way to gauge how sensitive the climate is to rising carbon levels – is that it lies between 2.0 C and 4.5 C. And there is a small chance that the temperature rise could be up to 8C or higher.

To the frustration of policy makers, it is an estimate that has not become much more precise over the last 20 years. During that period, scientists have established that the world is warming and human activity is very likely to blame, but are no closer to putting a figure on exactly how much temperatures are likely to rise.

AGW theory’s inability to accurately predict global temperatures has gotten so bad that it has spurred a movement to rebrand “global warming” as “climate change,” a moniker so vague that it can never be disproven (climates change; that’s what they do, and have for all of Earth’s history). The latest fad is “climate collapse,” apparently because “change” wasn’t scary enough. The ever-shifting definition of what the problem is, what it’s called, and how it could be measured is a classic symptom of bad, politicized science. The constant goalpost-moving may be a wearisome feature of politics, but it’s not supposed to be how science works.

Rebranding the AGW hypothesis allows things like Al Gore’s scare tactics based on supposed trends projected from short-term fluctuations in natural disasters. In the specific example of Gore’s misuse of disaster data, the question may be more one of politicians abusing scientific data than of the underlying data being politicized, but both are problematic. It’s unhelpful to have leading political figures running around telling us that “I hold in my hand a list of dire climate predictions” that nobody can subject to dispassionate review. Fortunately, the resort to dire predictions about natural disasters, like predictions about temperature, is subject to correction by events; we just finished an unusually mild hurricane season for the second time in four years, which is not at all the “climate change” that Gore is threatening (in fact, predictions for the 2009 hurricane season were also inaccurate). But not to worry; the predictions will just continue being kicked out further down the time horizon to ensure that they can’t ever be disproven conclusively.

3. Consensus? What Consensus?

Given the mounting failure of efforts to convince the public that bad weather – or unseasonably good weather, either will do – is scientific proof of AGW, the theory’s proponents have instead turned to appeals to authority, insisting that there is an ironclad scientific consensus that proves the theory to be true, and demanding that the citizenry trust the consensus because they’re scientists.
This ought to set off serious alarm bells. To begin with, anyone remotely familiar with the history of science understands that scientific consensuses are made to be broken; most of the really important new scientific theories and discoveries since Aristotle have come from the overturning of an existing and erroneous consensus. If consensus was the end of science, we would have to consign Einstein, Darwin, and Newton to the ash heap of history.

Students of human nature should be equally alarmed. The proponents of policies supported by the “consensus” have sought to freeze that consensus in amber by embodying it in a series of reports by the United Nations Intergovernmental Panel on Climate Change, an international bureaucratic institution honored by another international bureaucratic institution with a Nobel Peace Prize. But the IPCC’s reports are worth no more than the sum of their parts, especially given that only a fraction of the vaunted 2,500 scientists signing onto the IPCC reports have personally conducted sufficient research to validate AGW theory from their own personal experience and expertise.

Indeed, Jonathan Adler finds the very structure of the IPCC reports to be a threat to scientific integrity:

The effort to compile an “official” scientific “consensus” into a single document, approved by governments, has exacerbated the pressures to politicize policy-relevant science. So too has been the tendency to pretend as if resolving the scientific questions will resolve policy disputes.

Mike Hulme, an AGW believer and climate scientist at the University of East Anglia, agrees.

Government-backed and -enforced scientific consensuses have a dire history, the most notorious example of which was the work of Soviet geneticist Trofim Lysenko:

Lysenko…ruled the life sciences of Soviet Russia from the late 1920s until the early 1960s. He had a theory which fit Marxism perfectly: acquired characteristics can be inherited. This is not true, of course, but Lysenko had the Politburo and Stalin behind him. It was science that fit the political needs of the Bolsheviks, and so it was science backed by the awful power of the party and the state.

Lysenko’s experiments were heralded, although the experiments were never replicated. The Soviet Union was full of botanists, biologists, geneticists, and other life scientists, and it was obvious to anyone with a free mind that Lysenko was propounding nonsense. But it was not until 1962 that the Soviet government allowed a real critique of his cartoon science.

As I will discuss below, the “Climategate” emails strike at the heart of the credibility of the IPCC reports. As the Future of Capitalism blog observes of the CRU emails:

On the broader question of climate change science, the group-think suggested by the emails is bad for the scientific process, and as Thomas Kuhn pointed out in his classic The Structure of Scientific Revolutions, it’s often a precursor to a paradigm shift that, when it comes, is adamantly resisted at first. Just ask Galileo. And for a flavor of the way that the elite reacts to the questioning of the climate change consensus, check out how the once-dignified New Yorker handled Superfreakonomics, and the way that handling was praised by the Nobel laureate New York Times columnist Paul Krugman. Self-reinforcing orthodoxies have a way of being punctured in fields other than science, too, whether it is a single party’s apparent dominance in Washington or mindless and widespread optimism about rising house prices.

(It should be borne in mind that groupthink and ideological bias come on top of the far-from-foolproof nature of peer review in the first place; like any human endeavor, peer review can be, and sometimes is, undone by ordinary cronyism or simple laziness or haste, as in the recent example of a scientific journal accepting for publication a nonsense article generated by a computer program – a scandal that resulted in the resignation of the journal’s editor.)
Proponents who treat the AGW consensus as definitionally unassailable have circled their wagons against free thinkers by attacking their critics as paid shills of industry. Unsurprisingly, given that carbon-emitting industries have an enormous amount to lose from the policy proposals at issue, the targeted industries have in fact sought to fund anybody who might question the forces arrayed against them. But in science, the proper remedy for self-interested assertion is transparency and replication of methods, not “na, na, na, I’m not listening.”

The incessant attacks on the financial motivations of the skeptics – in addition to being antithetical to the whole project of scientific inquiry by means of evaluation of the evidence rather than argument ad hominem – not only ignore the fact that the proponents have great incentives of their own in terms of aggrandizing their political power; they also ignore that there’s quite a lot of money in AGW too. As Vladimir at RedState notes:

[Employees and scientists funded by the IPCC] work for the U.N.’s Intergovernmental Panel on Climate Change. Of course they believe in Climate Change; it says “Climate Change” on their paychecks! The global warming opinions of organizations like the American Petroleum Institute have always been treated with skepticism; why should we not consider the source when it comes to the IPCC’s studies?

If money corrupts and renders one’s scientific opinions tainted, what’s with Nobel Peace Laureate Al Gore? As a partner in the venture capital firm Kleiner Perkins, he’s positioned to score big from government’s “investment” in green energy.

Bret Stephens notes the vast sums of money involved in the broader enterprise:

Consider the case of Phil Jones, the director of the CRU and the man at the heart of climategate. According to one of the documents hacked from his center, between 2000 and 2006 Mr. Jones was the recipient (or co-recipient) of some $19 million worth of research grants, a sixfold increase over what he’d been awarded in the 1990s….

Thus, the European Commission’s most recent appropriation for climate research comes to nearly $3 billion, and that’s not counting funds from the EU’s member governments. In the U.S., the House intends to spend $1.3 billion on NASA’s climate efforts, $400 million on NOAA’s, and another $300 million for the National Science Foundation. The states also have a piece of the action, with California – apparently not feeling bankrupt enough – devoting $600 million to their own climate initiative. In Australia, alarmists have their own Department of Climate Change at their funding disposal.

And all this is only a fraction of the $94 billion that HSBC Bank estimates has been spent globally this year on what it calls “green stimulus” – largely ethanol and other alternative energy schemes – of the kind from which Al Gore and his partners at Kleiner Perkins hope to profit handsomely.

And of course, the CRU’s funding includes money from the U.S. Department of Energy and the EPA. Another email shows concerns that the Commerce Department would grow “suspicious” of the CRU’s activities. And the desire to keep the money flowing clearly affected AGW proponents’ view of the legitimacy of criticism, as illustrated by this October 2009 email from the Climategate files:

How should I respond to the below? [an article questioning AGW theory] (I’m in the process of trying to persuade Siemens Corp. (a company with half a million employees in 190 countries!) to donate me a little cash to do some CO2 measurments here in the UK – looking promising, so the last thing I need is news articles calling into question (again) observed temperature increases

Despite the confident assertion of consensus issued ex cathedra by the IPCC, and despite the heavy costs in acrimony and ad hominem assault inflicted on dissenting scientists, Oklahoma GOP Senator Jim Inhofe, who has organized the skeptics politically, has had no trouble finding scientists willing to question the “consensus” on AGW; his reports, released in 2007 and 2009, quote more than 700 dissenting scientists, many of them quite distinguished. (One of the more distinguished skeptics is profiled by the New York Times here). Ditto for the direst predictions of climate-change disaster:

[I]f there is one scientist who knows more about sea levels than anyone else in the world it is the Swedish geologist and physicist Nils-Axel Morner, formerly chairman of the INQUA International Commission on Sea Level Change. And the uncompromising verdict of Dr Mörner, who for 35 years has been using every known scientific method to study sea levels all over the globe, is that all this talk about the sea rising is nothing but a colossal scare story.

Despite fluctuations down as well as up, “the sea is not rising,” he says. “It hasn’t risen in 50 years.” If there is any rise this century it will “not be more than 10cm (four inches), with an uncertainty of plus or minus 10cm”. And quite apart from examining the hard evidence, he says, the elementary laws of physics (latent heat needed to melt ice) tell us that the apocalypse conjured up by Al Gore and Co could not possibly come about.

The reason why Dr Morner, formerly a Stockholm professor, is so certain that these claims about sea level rise are 100 per cent wrong is that they are all based on computer model predictions, whereas his findings are based on “going into the field to observe what is actually happening in the real world”.

In fact, one rarely has to look far for legitimate scientific skepticism about AGW climate models, even among those who buy into some aspects of AGW theory. Bjorn Lomborg, a skeptic who believes in AGW but argues that it’s been overblown, notes that “there are reputable peer-reviewed studies out there that show that because we have pumped out so much CO2 in the atmosphere, we haven’t gone into a new Ice Age.” A July 2009 article in Science argued that cloud behavior is a major player in global warming, and that if so, “almost all climate models have got it wrong.” Others note that the historical evidence shows that the models don’t account for or understand all the factors at work:

[A] new study published online [in July 2009] in the journal Nature Geoscience … found that only about half of the warming that occurred during a natural climate change 55 million years ago can be explained by excess carbon dioxide in the atmosphere. What caused the remainder of the warming is a mystery.

“In a nutshell, theoretical models cannot explain what we observe in the geological record,” says oceanographer Gerald Dickens, study co-author and professor of Earth Science at Rice University in Houston. “There appears to be something fundamentally wrong with the way temperature and carbon are linked in climate models.”

During the warming period, known as the “Palaeocene-Eocene thermal maximum” (PETM), for unknown reasons, the amount of carbon in Earth’s atmosphere rose rapidly. This makes the PETM one of the best ancient climate analogues for present-day Earth.

As the levels of carbon increased, global surface temperatures also rose dramatically during the PETM. Average temperatures worldwide rose by around 13 degrees in the relatively short geological span of about 10,000 years.

The conclusion, Dickens said, is that something other than carbon dioxide caused much of this ancient warming. “Some feedback loop or other processes that aren’t accounted for in these models — the same ones used by the Intergovernmental Panel on Climate Change for current best estimates of 21st century warming — caused a substantial portion of the warming that occurred during the PETM.”

To anyone who cares about the scientific search for truth, questions of this nature are an invitation to further research. To the political zealots who regard further inquiry as damnable heresy, they are simply quibbles to be brushed aside.

Science And Its Enemies On The Left, Part II(C)

4. Not So Interested In Sharing

Even before the Climategate story broke, we learned perhaps the most damning fact of all about the CRU: its refusal to share the raw data that purports to demonstrate that the Earth is getting warmer. There is nothing more essential to scientific integrity than the willingness to share data so that everyone – colleagues, competitors, skeptics – can peer-review the conclusions drawn by applying your processes to that data. In a world of many minds, you can never know who will bring new insight to a problem, and the spirit of open inquiry demands that the largest number of minds be brought to bear on any problem. Yet AGW proponents fought tooth and nail to avoid sharing their data, until CRU admitted this summer that critical data supporting the AGW hypothesis had been tampered with to the point where it is no longer accessible in its original, unadulterated form:

In the early 1980s, with funding from the U.S. Department of Energy, scientists at the United Kingdom’s University of East Anglia established the Climatic Research Unit (CRU) to produce the world’s first comprehensive history of surface temperature. It’s known in the trade as the “Jones and Wigley” record for its authors, Phil Jones and Tom Wigley, and it served as the primary reference standard for the U.N. Intergovernmental Panel on Climate Change (IPCC) until 2007. It was this record that prompted the IPCC to claim a “discernible human influence on global climate.”

+++

In June 2009, Georgia Tech’s Peter Webster told Canadian researcher Stephen McIntyre that he had requested raw data [regarding global temperatures], and Jones freely gave it to him. So McIntyre promptly filed a Freedom of Information Act request for the same data. Despite having been invited by the National Academy of Sciences to present his analyses of millennial temperatures, McIntyre was told that he couldn’t have the data because he wasn’t an “academic.” So his colleague Ross McKitrick, an economist at the University of Guelph, asked for the data. He was turned down, too.

Faced with a growing number of such requests, Jones refused them all, saying that there were “confidentiality” agreements regarding the data between CRU and nations that supplied the data. McIntyre’s blog readers then requested those agreements, country by country, but only a handful turned out to exist, mainly from Third World countries and written in very vague language.

+++

Roger Pielke Jr., an esteemed professor of environmental studies at the University of Colorado, then requested the raw data from Jones. Jones responded:

Since the 1980s, we have merged the data we have received into existing series or begun new ones, so it is impossible to say if all stations within a particular country or if all of an individual record should be freely available. Data storage availability in the 1980s meant that we were not able to keep the multiple sources for some sites, only the station series after adjustment for homogeneity issues. We, therefore, do not hold the original raw data but only the value-added (i.e., quality controlled and homogenized) data.

Jones’ email response to McIntyre included a classic example of the mindset of politicized science:

We have 25 or so years invested in the work. Why should I make the data available to you, when your aim is to try and find something wrong with it.

As Bruce McQuain concludes from this shoddy episode:

Anyone familiar with data storage throughout the short history of the computer age knows this is nonsense. Transfer of data from various systems to newer systems has been accomplished without real difficulty all through its development. What Jones is trying very hard to do is one of two things: a) hide data that he’s pretty sure won’t support his conclusion, or b) admit to a damningly unscientific procedure which should, without his ability to produce and share the original data, call into serious question any findings he’s presented.

In short:

[T]he raw data on which the landmark 1996 United Nations Intergovernmental Panel on Climate Change report based its conclusion has been destroyed. The University of East Anglia’s Climate Research Unit acknowledged in August that it discarded data that, in addition to the IPCC report, has been cited by other international studies as the main justification for severe restrictions on carbon emissions worldwide.

More here on additional shenanigans with CRU’s computer models. As the CRU emails reveal, the destruction of data was something of a pattern driven by the need to avoid scrutiny:

A May 2008 email from Mr. Jones with the subject line “IPCC & FOI” asked recipients to “delete any emails you may have had” about data submitted for an IPCC report. The British Freedom of Information Act makes it a crime to delete material subject to an FOI request; such a request had been made earlier that month.

Only the subsequent breaking of “Climategate” has finally forced CRU to agree that it will begin to release the raw data on which its studies rest.
As things stood until mid-November 2009, the refusal to share raw data was bad enough. But it was about to get uglier.

5. “Hide The Decline”

The “Climategate” revelation of the CRU emails – which show deliberations among the CRU’s scientists and with allies such as Prof. Mann – came from an unknown source, almost certainly as a byproduct of McIntyre’s battle to get the concealed data. But no one now seriously contests their authenticity, and they are damning in the extent to which they confirm all the worst suspicions about the politicization of the science underlying AGW theory at an institution that has been a central player in shaping the IPCC’s “consensus” reports:

In global warming circles, the CRU wields outsize influence: it claims the world’s largest temperature data set, and its work and mathematical models were incorporated into the United Nations Intergovernmental Panel on Climate Change’s 2007 report. That report, in turn, is what the Environmental Protection Agency acknowledged it “relies on most heavily” when concluding that carbon dioxide emissions endanger public health and should be regulated.

At least one major figure in the scandal, CRU’s Prof. Phil Jones, has already stepped down from his position pending an inquiry into the affair.

I can’t hope to catalog here the full scope of the CRU emails – for example, accounts of the scientists cheering the death of one skeptic, musing about punching another in the face, and questioning the motivations of their critics while comparing them to critics of Obama’s health care plan – but I will hit a few of the high points. The emails show CRU personnel frankly admitting the political process’ impact on the science:

Other emails include one in which Keith Briffa of the Climate Research Unit told Mr. Mann that “I tried hard to balance the needs of the science and the IPCC, which were not always the same.”

More broadly, they reveal a point of view in which facts need to be found to fit the theory rather than the other way around. Here’s one email response to the BBC piece linked above regarding the lack of warming over the past 11 years:

The fact is that we can’t account for the lack of warming at the moment and it is a travesty that we can’t. The CERES data published in the August BAMS 09 supplement on 2008 shows there should be even more warming: but the data are surely wrong. Our observing system is inadequate.

In what is now the most notorious email, Jones, in a 1999 message to Mann and four others, discussed imitating a “trick” used by Mann to “hide the decline” in certain post-1960 temperatures (context explained here and here):

Once Tim’s got a diagram here we’ll send that either later today or first thing tomorrow. I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd [sic] from 1961 for Keith’s to hide the decline. Mike’s series got the annual land and marine values while the other two got April-Sept for NH land N of 20N. The latter two are real for 1999, while the estimate for 1999 for NH combined is +0.44C wrt 61-90. The Global estimate for 1999 with data through Oct is +0.35C cf. 0.57 for 1998.

(Mann, for his part, has offered the most unconvincing of explanations as to how he could be ignorant of what this email was talking about).
(A graph appeared here showing precisely the impact of Jones’ trick on the dataset at issue.)
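To make concrete what a splice of this kind does mechanically, here is a minimal sketch in Python – with invented numbers and names of my own, not CRU’s actual code or data – that truncates a proxy (tree-ring) series at a cutoff year and substitutes the instrumental record from that point forward:

# A minimal, hypothetical sketch of the kind of splice the email describes.
# The data and names are invented for illustration; this is not CRU's code.

def splice_series(proxy, instrumental, cutoff_year):
    # Keep the proxy reconstruction up to the cutoff year...
    spliced = {year: v for year, v in proxy.items() if year < cutoff_year}
    # ...then substitute instrumental temperatures from the cutoff onward.
    spliced.update({year: v for year, v in instrumental.items() if year >= cutoff_year})
    return dict(sorted(spliced.items()))

# Hypothetical temperature anomalies (degrees C relative to a baseline).
proxy = {1940: 0.00, 1950: 0.05, 1960: 0.02, 1970: -0.15, 1980: -0.30}  # declines after 1960
instrumental = {1970: 0.10, 1980: 0.25}                                 # rises

print(splice_series(proxy, instrumental, 1961))
# {1940: 0.0, 1950: 0.05, 1960: 0.02, 1970: 0.1, 1980: 0.25}

The spliced curve shows only the rising instrumental values after the cutoff; the proxy’s post-1960 decline – the “divergence” that called the proxies’ reliability into question – simply disappears from view.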

A similar attitude is found in a 2009 email to Jones from Wigley, presenting strategies to “explain” a “warming blip” in the data from the 1940s – again, the sort of thing one does if presenting data in an argumentative format, rather than in the spirit of disinterested inquiry:

Here are some speculations on correcting SSTs to partly explain the 1940s warming blip. If you look at the attached plot you will see that the land also shows the 1940s blip (as I’m sure you know).

So, if we could reduce the ocean blip by, say, 0.15 degC, then this would be significant for the global mean – but we’d still have to explain the land blip. I’ve chosen 0.15 here deliberately. This still leaves an ocean blip, and i think one needs to have some form of ocean blip to explain the land blip (via either some common forcing, or ocean forcing land, or vice versa, or all of these). When you look at other blips, the land blips are 1.5 to 2 times (roughly) the ocean blips – higher sensitivity plus thermal inertia effects. My 0.15 adjustment leaves things consistent with this, so you can see where I am coming from.
Removing ENSO does not affect this.

It would be good to remove at least part of the 1940s blip, but we are still left with “why the blip”.

A similar example of Jones insisting that the data can’t be right if it contradicts his “gut feeling” is discussed here, and here is an example of the CRU crew’s reactions to questions raised by skeptics that they recognized as having some validity. And the examples of the CRU’s misconduct may not be isolated incidents, as examination of official data at NASA and in the New Zealand government’s temperature records suggests.

Another of the alarming but – to observers of the AGW debate – unsurprising revelations was the extent to which the CRU cabal sought to control the peer-review process to determine its outcome:

Here’s what Phil Jones of the CRU and his colleague Michael Mann of Penn State mean by “peer review.” When Climate Research published a paper dissenting from the Jones-Mann “consensus,” Jones demanded that the journal “rid itself of this troublesome editor,” and Mann advised that “we have to stop considering Climate Research as a legitimate peer-reviewed journal. Perhaps we should encourage our colleagues in the climate research community to no longer submit to, or cite papers in, this journal.”

So much for Climate Research. When Geophysical Research Letters also showed signs of wandering off the “consensus” reservation, Dr. Tom Wigley (“one of the world’s foremost experts on climate change”) suggested they get the goods on its editor, Jim Saiers, and go to his bosses at the American Geophysical Union to “get him ousted.” When another pair of troublesome dissenters emerged, Dr. Jones assured Dr. Mann, “I can’t see either of these papers being in the next IPCC report. Kevin and I will keep them out somehow – even if we have to redefine what the peer-review literature is!”

Eduardo Zorita, a German climate researcher who reviewed papers for Climate Research, has called for three of the leading figures (Jones, Mann, and Stefan Rahmstorf) to be ousted from the IPCC, arguing that the CRU emails confirm what was already known by climate researchers about the corruption of the process:

I may confirm what has been written in other places: research in some areas of climate science has been and is full of machination, conspiracies, and collusion, as any reader can interpret from the CRU-files. They depict a realistic, I would say even harmless, picture of what the real research in the area of the climate of the past millennium has been in the last years. The scientific debate has been in many instances hijacked to advance other agendas.

These words do not mean that I think anthropogenic climate change is a hoax. On the contrary, it is a question which we have to be very well aware of. But I am also aware that in this thick atmosphere – and I am not speaking of greenhouse gases now – editors, reviewers and authors of alternative studies, analysis, interpretations, even based on the same data we have at our disposal, have been bullied and subtly blackmailed. In this atmosphere, PhD students are often tempted to tweak their data so as to fit the ‘politically correct picture’. Some, or many issues, about climate change are still not well known. Policy makers should be aware of the attempts to hide these uncertainties under a unified picture. I had the ‘pleasure’ to experience all this in my area of research.

Others have also come forward with stories of Jones’ involvement in using peer review to stifle dissenting points of view. The structure of scientific peer review, and of academia more broadly, unfortunately creates opportunities for politicized groups to capture these institutions and enforce their particular brand of groupthink in a field like climate science. The critical mechanism – hinted at by Jones’ threat to “redefine” peer review – is the existence of gatekeepers. An establishment consisting of a comparatively small number of people controls publication, which controls who gets jobs in academia and who has to go out into business. That establishment also controls or influences grant funding (often government grant funding, depending on the field), which in turn controls whose jobs are made permanent with tenure and whose aren’t. You have to publish and get funding to get and keep your job. If the gatekeepers refuse to publish or fund any dissenters – and they do refuse – then scientific consensus is not reached by reasoning but manufactured by brute force.
The kind of thinking apparent in the CRU emails is so common among AGW proponents that they are sometimes unafraid to say it aloud. Economist Thomas Schelling told The Atlantic that “It’s a tough sell. And probably you have to find ways to exaggerate the threat” before musing that “I sometimes wish that we could have, over the next five or ten years, a lot of horrid things happening — you know, like tornadoes in the Midwest and so forth — that would get people very concerned about climate change.”

Some environmentalists, like British leftist George Monbiot, found Climategate too much to stomach, leading calls for Jones to step down. But the head of the IPCC, after the fashion of all UN bodies, has circled the wagons around the Climategate miscreants; while he chastised them for being “indiscreet” in putting their comments in writing, he “said an independent inquiry into the emails would achieve little, but there should be a criminal investigation into how the emails came to light.”

The Obama Administration’s response, as the President prepares to journey to Copenhagen to promote new restrictions on the U.S. economy in the name of preventing AGW, has similarly been one of sheer denial of the need for re-examination of the science:

Press Secretary Robert Gibbs stressed this afternoon that the White House nonetheless believes “climate change is happening.”

“I don’t think that’s anything that is, quite frankly, among most people, in dispute anymore,” he said during Monday’s press briefing.

Climate czar Carol Browner was equally dismissive, leaning on the crutch of “consensus”:

Ms. Browner initially shrugged when asked about the e-mails, saying she didn’t have a reaction. But when a reporter followed up, she said she will stick with the consensus of the 2,500 climate scientists on the International Panel on Climate Change who concluded global warming is happening and is most likely being pushed by human actions.

And “science czar” John Holdren saw nothing unusual in the CRU’s behavior:

“It’s important to understand that these kinds of controversies and even accusations of bias and improper manipulation are not all that uncommon in all branches of science,” Holdren told the House of Representatives Select Committee on Energy Independence and Global Warming.

This is unsurprising, since the email archive includes Holdren’s own emails sharing support and suggestions with a number of the Climategate figures.
Pay no attention, in other words, to the politicized hacks behind the curtain; just know they have reached a “consensus.”

C. Cutting The Corners

The assiduous use of shady science for political ends usually runs further under the radar than John Edwards’ snake oil or the Climategate scandal. As liberal Slate writer Will Saletan admits of efforts to use politicized junk science to prop up “sin taxes” on junk food and fast food as a means of meddling with individuals’ personal choices:

To justify taxes on unhealthy food, the lifestyle regulators are stretching the evidence about obesity and addiction…. Liberals like to talk about a Republican war on science, but it turns out that they’re just as willing to bend facts. In wars of piety, science has no friends.

And Congressional liberals can be quite as uninterested in science when acting on product safety scares driven by junk science, quackery or Luddism, as NPR noted earlier this year:

A new federal ban on chemical compounds used in rubber duckies and other toys isn’t necessary, say the government scientists who studied the problem.

The ban, which took effect in February, prohibits making or selling duckies and other children’s products that contain chemicals called phthalates, which are used to make plastic soft. Congress passed the ban in 2008 after concluding that the chemicals posed a risk to children who chew on their toys.

The action came despite advice not to enact the ban from scientists at the Consumer Product Safety Commission, which regulates toys.

The commission opposed the ban because “there was not a risk of injury to children,” says Dr. Marilyn Wind, deputy associate executive director for health sciences at CPSC.

It reached that conclusion after studying phthalates in toys for more than 25 years and acting several times to make sure children were not exposed to even a slight risk from products that contain the chemicals.

Overlawyered has extensive archives on the Boxer- and Feinstein-pushed legislation in question, the Consumer Product Safety Improvement Act (CPSIA), and its disastrous effects in practice – another reminder that injecting bad science into politics has real-world consequences. In fact, there is ample historical precedent, in the hard sciences as well as in social science, for left-wing political and social agendas to drive scientific hackwork whose influence far outstrips anyone’s ability to replicate its underlying research:

Consider the residue of such frauds as Rachel Carson, Alfred Kinsey, and Margaret Mead. Carson’s invented findings and unscientific methods led to the banning of DDT, which in turn cost the lives of tens of millions of children in undeveloped nations. Kinsey’s tortuously doctored “sex research,” as Dr. Judith Riesman has so amply demonstrated, was not only invented to sate his perverted lusts, but created scientific myths about normal and abnormal behavior which haunt us to this day. Mead also simply invented research to fit her idea of what the science of anthropology ought to be in order to justify her own immature and immoral behavior. Carson, Kinsey, and Mead had an agenda before they did any research, and this agenda governed everything else.

Which brings us to the root cause of politicized science: the temptation of power.

Science And Its Enemies On The Left, Part II(D)

IV. The Temptation of Power
Politicized science is, itself, a subset of the most profound problem of scientific integrity: the temptation presented when science is freed from the restraints that accompany all other forms of human activity, from accountability to moral opprobrium to external civilian oversight. When experts rule, the first casualty is the quality of their expertise.
The siren song of scientific triumphalism was graphically on display throughout the multi-year controversy over embryonic stem cell research. The conservative objection to such research was that it not only entailed the destruction of human embryos, but envisioned the future creation of more embryos – each containing a genetically unique human identity – solely for the purpose of destroying them in the process of scientific research. Even more so than the question of the humanity of unborn fetuses growing in the womb, the question of whether to regard embryos outside the womb as fully human due to their distinct genetic identity is one on which people of good faith disagree. There is understandable reluctance to face the consequences of granting any legal status to an embryo, especially because embryos are routinely created with no prospect of a full human life in the process of in vitro fertilization, and by and large our society has settled without much debate on the legality and propriety of in vitro fertilization.
President Bush, weighing the moral calculus involved, reached a compromise decision – explained in a nationally televised address in August 2001 – to provide for the first time federal funding for stem cell research, whether or not it involved stem cells derived from the destruction of embryos, but drawing the line at taxpayer funding for any research that would entail the destruction of future embryos. Bush’s compromise was not morally satisfying or entirely principled from anyone’s perspective, but it was an attempt to balance the moral and practical considerations surrounding some of the thorniest problems of modern bioethics.
Honest critics of Bush’s decision argued that Bush had drawn the line in the wrong place, and that embryos should not carry any moral weight. But those voices were few. By far the loudest talking point from the Democrats was that Bush had committed the offense of placing moral restraints of any kind on science. This was, we were told, “anti-science” or a “war on science,” and as discussed above it set off an orgy of exaggerations of the promises of the science involved. At the core of the argument was the assertion that religious people in particular should not dare to speak against the morality of anything scientists might wish to explore.
The constant insistence by the Democrats that scientific progress should brook no moral restraint, and that anyone standing in the way of this particular scientific project was a dangerous theocrat, was positively chilling. Because science, with its great power not only over human liberty but human life itself, is if anything one of the human activities most in need of our most strenuous moral faculties. Biochemists and climatologists need to be subjected to civilian oversight and the moral conscience of society for precisely the same reasons as soldiers, economists, central bankers, lawyers, spies, diplomats, epidemiologists, rocket scientists, urban planners, and every other form of expert.
The temptation of the unrestrained expert comes in two stages. First, the expert in pretty much anything is subject to tunnel vision, and the greater the expertise, the greater the risk of such a focus. The expert is apt to have a limitless appetite for resources while ignoring competing social priorities. He may demand policies that maximize the ends sought by his discipline, while ignoring countervailing considerations and interests. He may refuse to accept any moral restraints or limitations on his methods or the uses of his creations.
Tunnel vision is only the beginning, however. Because the expert who learns that the recitation of jargon and the appeal to authority effectively exempts him from moral or social scrutiny has made the most dangerous discovery known to man: the ability to get away with virtually anything. Because if people will let you talk your way into money and influence with good science on the grounds that they do not understand it or have no right to obstruct it, what is to stop the expert from using bad science to accomplish the same end, if the layman isn’t equipped to tell the difference between the two?
Cracked.com, of all places, satirically captures the essence of the problem:

Every scientist dreams of a world without ethics. Whenever a scientist sees a set of twins, he or she secretly wonders what would happen if you surgically swapped their faces. They already have a chamber set up to harness the power of their screams as they gradually realize what has happened. Every day, ethics barely prevent experiments like this from being carried out.
But what if we didn’t have these ethics? When Nazi doctors were let loose during WWII, the incredible rate of their discoveries were matched only by the inadequacy of words to atone for them. They might have been monsters, but without them, we never would have discovered the yield elasticity of the elderly, or learned what part of a prisoner’s tongue detects the taste of angel meat.

The Nazis are obviously the extreme example, as is often the case, but the argument ad Hitlerum is a useful moral guidepost for precisely that reason: it reminds us why we insist that scientists, like everyone else, be subject to moral restraints and the skeptical eye of their fellow man. Because otherwise you do things like appointing a “science czar” who has written approvingly of compulsory abortion and sterilization as a solution to overpopulation.
In a society not yet as far gone as Nazi Germany, Climategate is what happens when scientists think nobody is looking, or at least that nobody is competent or willing to call them out. Given power, or the ability to influence those in power, the scientists have acted the way human beings have always acted around power. And because the Left provides greater scope than the Right for the exercise of power over civil society in the name of what science says is good for us – and because it denies the sources of moral remonstrance that can stand as a bulwark against scientific hubris – it will continue to offer the greatest temptations for scientists to be seduced by power.
In Part III: Dogma and the starvation of science and technology.

Science And Its Enemies On The Left, Part I

Liberals have dined out at length in recent years on the charge that the Bush Administration and the cultural Right spent the Bush years engaging in a “war on science.” Since political power passed to the Democrats, President Obama has practically dislocated his shoulder patting himself on the back for “restor[ing] our commitment to science”. But power in the hands of the Left is no boon to science. Quite the contrary.
Whatever one thinks of the validity of the “war on science” charge against the Right, the threats to scientific integrity and scientific progress from the Left are numerous, and they are very real. In this three-part series, I’ll consider six major species of dangers to science and the role of the Left (inside and outside of government) in promoting them.
I. Junk Science
While definitions of science differ, most of us learned in grammar school and high school the basic concepts. Science is, as Karl Popper famously defined it, the testing of falsifiable propositions. In other words, you start with a hypothesis that seems to be supported by certain facts, but that would be proven false if certain other things happened, and you test to see if you can make those things happen. The process of experimentation – whether by laboratory experiments, statistical regressions, archaeological digs, or myriad other methods of testing hypotheses about past events or present processes – can take a variety of forms. But the mental approach to science should remain common: the scientist, being human, may seek a desired conclusion, but is expected to use a method of testing for the truth that keeps the finding of truth always as its ultimate goal (wherever the chips may fall). Perhaps more importantly, the process must be transparent in its methods, so that later researchers can replicate the method to ensure that the same test in different hands produces the same result. Scientists, to be scientists, must never say “trust me, I’m a scientist” or “I’m a scientist, don’t question my work,” and must never demand acceptance of theories that cannot be put to a test they could fail; they must share information and accept correction with a spirit of collegial search for a common and provable truth.
Those are the ideals; humans, being human, often fall short of them. This shouldn’t shock us, but we should see the failures for what they are: bad science.
Probably the most pervasive cause of bad science, and one in which the Left and its component interest groups are heavily complicit, is junk science. Junk science is, broadly speaking, opinion or outright deception masquerading as science, for the purpose of persuading people of something that’s untrue, unprovable or at least unproven. Junk science shows up in many places, but is most frequently encountered in the courtroom, and its motives are often more or less baldly about money.
The proliferation of junk science in the courts is notorious and widespread, and while the federal courts in particular have tried to crack down on it since the Supreme Court’s 1993 Daubert decision authorized trial judges to act as ‘gatekeepers,’ the job of keeping junk science away from juries falls mainly to individual judges who may not necessarily have the scientific training themselves to spot all the charlatans. Much of modern litigation turns on expert witnesses of various stripes, from products liability experts to economists, and a good many of these are effectively professional testifying experts. That, in and of itself, need not be a bad thing; just as with lawyers, there are many honorable and principled professional experts, but also many lazy hacks and cheap scam artists. Every lawyer knows that with enough monetary incentives, you can eventually find someone with a couple of degrees to say almost anything if you’re not picky.
The personal injury plaintiffs’ bar – one of the Democrats’ core constituencies – is by far the most notorious offender in this regard. The incentives for junk science are especially powerful on the plaintiffs’ side, since a novel scientific theory, in and of itself, can create from whole cloth an industry that will use governmental power to transfer millions or billions of dollars of wealth (a defendant can lose the battle of the experts but win a case on another basis, but a successful plaintiff must have an expert). There’s an awful lot of money to be extracted through the use of junk science. It is no accident that it is customarily the plaintiffs’ bar that resists efforts to have judges take a more active role in screening expert witnesses to determine the reliability of their processes. Asbestos litigation alone has produced more scientific scandals than one could possibly recount. Consider, as a sample, studies of vast disparities in diagnoses of asbestosis by unaffiliated and plaintiff-affiliated physicians. The Wall Street Journal has exhaustively catalogued the use of junk science to perpetrate a massive products liability fraud against Dole Foods in Nicaragua. The list could go on and on. Michael Fumento explains a typical example from the silicone breast implant litigation:

Consider the case of Dr. Nir Kossovsky of UCLA, an inventor of one of the types of tests the FDA warned against. Kossovsky is one of the best-known critics of silicone implants, has testified at the FDA hearings that resulted in the essential ban on silicone breast implants, and is a regular expert witness for plaintiffs in implant-related trials.
Kossovsky developed what he called Detecsil, for “detect silicone.” “The Detecsil test confirms whether or not an individual has developed an immune response to silicone-associated proteins,” declared an advertisement. As such, it could be useful in showing whether women with autoimmune disease (in which the body’s immune system turns on itself) got that illness from silicone.
In legal depositions supporting his expert witness testimony, Kossovsky cited tests from the famed Scripps Clinic and Research Foundation in La Jolla, California as corroborating his own. In fact, Scripps researchers found the antibodies of autoimmune disease victims were the same regardless of whether they had silicone implants or not. All the test found was that there was a higher level of antibodies in anybody with autoimmune disease, exactly what one would expect.
Scripps has repeatedly disavowed Kossovsky’s statements. Indeed, a Scripps researcher was on record as saying, “To my knowledge, there is no test that can predict or indicate any specific immune response to silicone,” which is what the test must do to prove adverse health effects.
Even before this latest public FDA warning, Kossovsky had been warned by the Agency to quit using his test. But the damage has been done. The test has played a crucial role in numerous implant trials, including ones with verdicts of $7 million, $25 million, and an incredible $40 million.

More of the same here.
II. Quackery and Luddism
Another longstanding threat to science is the twin scourge of quackery and Luddism. While there is likewise a lot of money in quackery, and sometimes money in Luddism as well, there is a subtle difference in their genesis. Junk science may be principally driven by the needs of its suppliers, who know what they want to prove and need scientific experts to bend their processes to reach the desired results. But true quackery comes from somewhere different: it arises from existing demand, from the needs of people to believe things that science can’t supply. Quacks prey on popular gullibility about quasi-scientific-sounding cure-alls, while Luddites (the heirs of the British protestors against the Industrial Revolution) thrive on irrational fears and superstitions about technological progress. The social, cultural and political Left is heavily complicit in both phenomena.
For a good illustration of what this looks like, David Gorski has an exhaustive look at how the Huffington Post has made itself a haven for the opponents of modern medical science. It’s worth reading the whole thing, which details the site’s madness for anti-medical and anti-scientific quackery ranging from campaigns against vaccines to enthusiasm for all sorts of bizarre homeopathy, much of which is reflective of the Hollywood culture that pervades the site. The sort of quackery pushed by the HuffPo and its allies includes a lot of traditional junk science as well (for example, plaintiffs’ lawyers pushing assaults on vaccine makers in the hopes of hitting a judgment jackpot in court) but the rot runs deeper than that, from the Left’s neverending quest for substitutes for religion and commerce and its conspiracy theories about business.
We see all of this at work in the causes the HuffPo flacks for. Parents of children with autism need to blame some evil external force for their children’s condition. New Age spirituality fills the gap created by rejection of traditional faiths, and offers the promise of patent-medicine style cures where modern medicine is short of answers. Diet gurus of every kind prey on the widespread chase for the magic weight-loss pill, just as the purveyors of sexual remedies prey on deeper insecurities. Some of these forces go beyond politics, but New Age hokum and hostility to vaccines and other successful products are unmistakably phenomena of the cultural Left. The campaign against vaccine manufacturers has drawn support from icons of the Democratic party:

US senators John Kerry of Massachusetts and Chris Dodd of Connecticut have both curried favor with constituents by trumpeting the notion that vaccines cause autism. And Robert F. Kennedy Jr., a scion of the most famous Democratic family of all, authored a deeply flawed 2005 Rolling Stone piece called “Deadly Immunity.” In it, he accused the government of protecting drug companies from litigation by concealing evidence that mercury in vaccines may have caused autism in thousands of kids. The article was roundly discredited for, among other things, overestimating the amount of mercury in childhood vaccines by more than 100-fold, causing Rolling Stone to issue not one but a prolonged series of corrections and clarifications. But that did little to unring the bell.

The hysteria – contradicted by numerous peer-reviewed studies – has real consequences:

In certain parts of the US, vaccination rates have dropped so low that occurrences of some children’s diseases are approaching pre-vaccine levels for the first time ever. And the number of people who choose not to vaccinate their children (so-called philosophical exemptions are available in about 20 states, including Pennsylvania, Texas, and much of the West) continues to rise. In states where such opting out is allowed, 2.6 percent of parents did so last year, up from 1 percent in 1991, according to the CDC. In some communities, like California’s affluent Marin County, just north of San Francisco, non-vaccination rates are approaching 6 percent (counterintuitively, higher rates of non-vaccination often correspond with higher levels of education and wealth).
That may not sound like much, but a recent study by the Los Angeles Times indicates that the impact can be devastating. The Times found that even though only about 2 percent of California’s kindergartners are unvaccinated (10,000 kids, or about twice the number as in 1997), they tend to be clustered, disproportionately increasing the risk of an outbreak of such largely eradicated diseases as measles, mumps, and pertussis (whooping cough). The clustering means almost 10 percent of elementary schools statewide may already be at risk.

Left-wing Luddism is also at work in the outright hysteria, especially in Europe, regarding things like genetically modified “frankenfood” and nanotechnology, and here at home in the fear of nuclear power and food irradiation; in each case the unfocused, irrational fear comes first, and the pseudoscience used to justify it comes later. Thus, despite the sterling safety record of nuclear power everywhere outside the Soviet Union, and its crucial role in the power systems of countries like France and Japan, no new nuclear power plant has been ordered in the U.S. since the Three Mile Island accident in 1979.
The environmental Left is especially guilty of this sort of thing, creating bugaboos grounded in public fear and ignorance about technology ranging from 1989’s notorious Alar scare to 2001’s hysteria about microscopic quantities of arsenic in drinking water, to “Gulf War Syndrome.” Over and over we see the Left pressing to convince the public that unseen forces of technology and business – from pesticides to power lines – are conspiring to make them sick, and insisting that once such an assertion is made, the burden is on the skeptic of such crazes to produce conclusive scientific proof to the contrary. The process of disinterested analysis of the evidence and testing of falsifiable hypotheses falls swiftly by the wayside. Science itself becomes the enemy. Anyone who spent time wringing their hands over Bush-era policies with any degree of sincerity should find this all deeply alarming.
In Part II: Politicized science and the temptations of power.

How To Tell The “Culture Wars” Are Not Over

Peter Beinart had an article in the Washington Post the Sunday before Election Day arguing that the culture wars are over; according to Beinart, Sarah Palin was failing to connect with voters because

Palin’s brand is culture war, and in America today culture war no longer sells….Although she seems like a fresh face, Sarah Palin actually represents the end of an era. She may be the last culture warrior on a national ticket for a very long time.

Beinart is wrong – completely wrong. We can tell that the “culture wars” are not over because Democrats and liberals are still fighting them. We know culture warriors won’t disappear from national politics because one of them just won the presidential election. And if Beinart means that conservatives are losing the culture wars, that’s far from a certain bet, and one the Democrats would be ill-advised to take.


The Integrity Gap, Part III of III: John McCain and Joe Biden

III. John McCain: The Zeal of the Convert
Given the length and public nature of John McCain’s career on the national stage, I won’t go here through his record in the depth that I explored those of Gov. Palin and Sen. Obama. But I will lay out a number of examples that show the sharp contrast between McCain’s approach to situations calling for integrity and Barack Obama’s.
Senator McCain’s former, false friends in the media used to paint him as some sort of secular saint, a man who infused politics with a unique brand of nobility that elevated the grubby business of Washington to a higher plane of bipartisanship, reform and self-sacrifice. St. John the McCain was always a myth; we should put not our faith in politicians, and nobody gets as far as McCain has in national politics wholly unsullied by politics and all that comes with it. But if McCain the saint is a myth, McCain the public servant is nonetheless an admirable figure who has passed many tests of fire (in some cases, literally). McCain looks more rather than less impressive when we view him through the justifiably jaded eye that should be cast on any politician.
McCain has been, in his words, “an imperfect servant” of this country; I will not try to convince you otherwise, and will deal up front with the two major and deserved blots on his reputation. I will not try to convince you that over 26 years in politics he’s been above consorting with lobbyists, accepting endorsements from unsavory people, pandering to constituencies, or changing positions when it suits his needs. But however you define the negative features of “politics as usual,” we expect our Presidents to have that quality that allows them to rise above it – perhaps not every day on every issue, but often enough, and forcefully enough, and in spite of enough slings and arrows that we can have confidence that they can be trusted to stand up for us even when it’s hard to do so, even at great cost.
There is no question that McCain has shown, over and over and over again, his ability to do just that. He’s publicly called out waste and corruption, even in his own party. He’s taken on powerful vested interests on the Left and the Right – not just wealthy and well-connected ones but grassroots interests as well. McCain may not fight every battle that needs to be fought, but he will always be fighting, and he will not be afraid to take on targets that can hit him back.


Pick Your Favorite Part of the Farm Bill! Bipartisan Socialism and The Audacity of Corporate Welfare.

Farm policy, although it’s complex, can be explained. What it can’t be is believed. No cheating spouse, no teen with a wrecked family car, no mayor of Washington, DC, videotaped in flagrante delicto has ever come up with anything as farfetched as U.S. farm policy.

-P.J. O’Rourke.
So yesterday, the United States Senate voted to pass into law H.R. 6124, the “Food, Conservation, and Energy Act of 2008,” already passed by the House, in both cases by a veto-proof majority, rendering irrelevant the belatedly principled stand of President Bush, who promises a veto.* Chances are pretty good that your Congressperson and at least one of your Senators voted for this atrocity, which passed the House 306-110 and the Senate 77-15, despite valiant efforts to slow down the bill by Jim DeMint and Tom Coburn. Like all really horrendous things to come out of Washington, this load of legislative fertilizer has broad bipartisan support. So give thanks for the hardy few Senators – 13 Republicans and two Rhode Island Democrats – who voted “no” (as well as the lengthier list of their 98 Republican and 12 Democratic House counterparts you can find here):

Bennett (R-UT), Hatch (R-UT), Coburn (R-OK), Collins (R-ME), DeMint (R-SC), Domenici (R-NM), Ensign (R-NV), Hagel (R-NE), Kyl (R-AZ), Lugar (R-IN), Murkowski (R-AK), Sununu (R-NH), Voinovich (R-OH), Reed (D-RI), Whitehouse (D-RI)

In case you are wondering, John McCain and Barack Obama missed the vote, but McCain says that as president he would veto the bill “and all others like it that serve only the cause of special interests and corporate welfare,” in part because farm subsidies threaten free trade, whereas Obama is proud to support precisely the kind of legislation that has made Washington so roundly popular with the public (in Obama’s statement, he says “I applaud the Senate’s passage today of the Farm Bill, which will provide America’s hard-working farmers and ranchers with more support and more predictability.” So much for “Change”).**

Anyway, in honor of this occasion, I ask you to submit your vote for your favorite provision of this new federal law, which your elected representatives have enacted on your behalf. See, Democracy works!


The New Federalism Speech

As regular readers know (see here and here), I continue to believe that Rudy Giuliani is the best potential president in the GOP field – and specifically, the one most likely to accomplish conservative policy priorities – and would be a strong candidate in the general election. That assessment, which I won’t rehash here, is based in large part on Rudy’s personal characteristics, temperament and accomplishments; after all, ideas don’t run for president, people do. Of course, Rudy’s record on social issues has long been the primary obstacle to winning the nomination, and everyone who paid any attention whatsoever to Rudy’s record and to Republican politics over the past few decades knew that. Thus, a Rudy for President campaign needed to have a well-thought-out plan from Day One as to how to deal with that obstacle.
Since the summer of 2005, I have been laying out in public and in private – including to people who hoped, at the time, to have the ear of the Giuliani camp – my roadmap for how Rudy could overcome this obstacle. I never thought he could win over everyone, but I believed then and believe now that there was an opportunity, had Rudy played his cards the right way at the right time, to build on the goodwill he enjoyed among socially conservative voters who respected him as a leader. By offering a compromise, he could have kept enough pro-lifers, in particular, on board to build a winning coalition in the primaries, held enough of the party together, and appealed to enough independent or swing voters to march to victory in November.
Rudy has followed some of the paths I laid out (not that I take credit for this), but he never gave the speech I thought would really make the difference. When voters go to the polls tomorrow in Florida, they may breathe new life into Rudy’s campaign, or more likely they may end it. Either way, it’s probably too late to give this speech – and so I offer it to you, dear readers, and to posterity.


Two Cheers For The Hypocrites

A few weeks back, Washington DC buzzed with the news that Louisiana Senator David Vitter, a conservative Republican, admitted (a step ahead of public disclosure, possibly by hard-core porn magnate Larry Flynt) that he had frequented a prostitute. The response on the left was numbingly predictable, attacking Vitter not for his immorality but on grounds of hypocrisy because of his socially conservative campaign themes and voting record, such as his opposition to same-sex marriage. A common theme was the idea that Vitter should not be able to argue again for such positions, because his private sins compromised his public positions. Even Glenn Reynolds got into the act, suggesting “How about moving to make prostitution legal in the District instead [of apologizing]? It would be an appropriate penance, and D.C. would be a . . . fitting . . . place to start.”
This is wrong, and dangerous. Our politicians and civic leaders have never been saints, but the punishment for their sins should not fall on the rest of us. I would much prefer to see a wicked man be a hypocrite and vote for what is right and good, rather than choose consistency and advocate for wrongdoing.
The left’s argument on this front – usually implicit, sometimes made explicitly – is that immoral behavior, especially in matters sexual, proves that moral standards are impossible to satisfy, and thus that the whole project of promoting virtue is a fool’s errand. Go and do what feels good; you can’t be expected to know better.* But nobody ever said that moral standards are easy, or the history of human behavior and philosophical and religious thought wouldn’t be littered with battles over what is right and wrong and how to get people to choose the former.
Moreover, the critics set an impossibly high standard when they claim that a moral failing in one area should cause a man to abandon the advocacy of virtue in others. Thus, we hear that Bill Bennett, because he has had a gambling problem, should not be heard to speak on other issues of public and private morals, ranging from sexual mores to drugs to obstruction of justice. But with rare exceptions, the same logic isn’t applied to the champions of vice. The left never argues that figures like Madonna or Hugh Hefner, just to pick two examples of people who have built decades-long careers on championing sexual immorality, are hypocrites because they don’t also have gambling problems. Pursuing this asymmetrical line of reasoning can only have the result of unilaterally disarming one side. If only saints can defend right and good and virtue, they will be undefended, while the ranks of the defenders of wrong and sin swell to bursting.
In any event, the left’s champions are no less frequently guilty of advocating standards they don’t follow or impose on themselves. They call for limits on the use of energy, while gallivanting around in private jets and high-powered SUV motorcades. They argue that society benefits from keeping poor kids in public schools without a choice to leave, while sending their own kids to expensive private academies. They hire picketers and leafleters to protest low wages and benefits, and pay them a pittance and no benefits. They press for strict gun controls, then hire armed private bodyguards of their own. The greatest moral controversy in recent memory, the Clinton impeachment, came about when a variety of rules created by moralizing liberals – the independent counsel statute, sexual harassment litigation, liberal rules of discovery in civil litigation – were turned against one of their own, with predictable howls of outrage.
None of this is to suggest that a man’s private immoral or illegal behavior is irrelevant to his fitness for public office. Voters certainly have to judge the totality of a candidate’s character – more so in the case of candidates for executive or judicial positions, who exercise broader individual discretion, but it’s not irrelevant for legislators either – and the private and public behavior are all a part of this. The fundamental question Louisiana voters will need to ask about Sen. Vitter is whether this changes their view about his ability to do his job, keep his promises and avoid misusing his office. You don’t take the public man in isolation, but neither do you take the private man in isolation; the whole must be examined and judged as one.
But in asking that question, Sen. Vitter’s continued willingness to fight for the things he campaigned on should be a plus. If you are a Louisiana voter who thinks prostitution is bad for your community, why should you have to live with it because of a Senator’s private sins? If you are a Mississippian who thinks racial preferences are bad policy, why should you have to live with them because of Trent Lott’s mouth? In fact, the courage to stand up for the right thing to do even when it exposes you to the hypocrisy charge is one of the most important attributes of a leader, the facet that makes it possible to pursue justice and virtue without constantly trimming your positions to fit your own failings. Consider the “chickenhawk” charge, the assertion that Presidents Clinton and Bush should have been hesitant to use military force, not having served in combat themselves. It was apparent, watching Clinton at work, that while he sent the military hither and yon on ‘humanitarian’ interventions, he was nonetheless hypersensitive to the argument that he should avoid using the military, precisely because of his own personal history; it is equally obvious that Bush does not put stock in such arguments, and makes his calls as he sees them. I much prefer to see Republicans who will stand up against abortion, for example, regardless of the state of their private lives, than those who feel that they have to take a squishily pro-choice position because they fear the scrutiny of the anti-moral scolds.
It takes a truly twisted perspective to see a man who commits private sins while arguing in public for virtue, and choose to take issue with the latter.
So, two cheers for the hypocrites. Even if they don’t do right by themselves or their families – even if, at times, they deserve to be punished by the law or defeated at the polls – they should still be proud to have done the right thing in their time in public service.


Baseball’s Most Impressive Records

You often hear discussion of what are baseball’s most unbreakable records; it’s a hardy perennial of the barroom or talk radio debate (I recently got a marketing email from a company selling a video on the topic).
But “unbreakable” isn’t really the yardstick for a great record. Let’s use the most glaring example: in 1879, Will White threw 680 innings. By modern standards, that’s almost beyond comprehension; pigs will fly before you see a pitcher throw 681 innings in a single season. But is it really that impressive? The previous record was 622, in a 66-game season (by 1879 the schedule was 80 games for White’s Reds). Five years later, Old Hoss Radbourn threw 678.2 innings, and Guy Hecker threw 670.2. White deserves a tip of the cap for out-working his contemporaries, but his record was set at the best possible time – the historic high-water mark of starting pitcher innings – and narrowly survived a challenge just 5 years later.
No, what I’m interested in is baseball’s most impressive records. So I bring you this list. First, the parameters. No team records, just individual feats. No single-game records – if the name “Mark Whiten” doesn’t remind us that anybody can have a great day, I don’t know what will. No postseason records, since the opportunities to set those are very unevenly distributed. No fielding records, for a long list of reasons regarding the nature and availability of fielding stats. No managing records, although Connie Mack’s 53-year managing career is impressive under any definition, as is Joe McCarthy managing 24 years with three different franchises without having a losing record once. And no negative records – Nolan Ryan’s career walks record is perversely impressive, but not worthy of honor. All I looked at was career and single-season hitting and pitching records, and streaks.
Second, my criteria for choosing and ranking the records. I looked at three factors. One, how far the record stands out from the #2 (and for measurement I compared to the second-best by a different player, rather than, say, compare two Barry Bonds seasons). Two, the level of skill, consistency or exceptional endurance involved – winning games and hitting home runs is more impressive than at bats or hit by pitches. Relatedly, I gave more emphasis for higher-profile stats, and didn’t look at really obscure records or metrics (no VORP record here). And three, I gave extra credit to players who – unlike Will White – set their records under less than the ideal conditions for setting that particular record.
Finally, in a few cases I consolidated in a single “record” multiple records a player set in a single season or career that basically flow from the same cause, such as Barry Bonds’ walk and intentional walk records.
This doesn’t claim to be a scientific list; I have my opinion, you have yours. But my justifications and the facts are provided.
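For concreteness – and this is my own illustration, not part of the original rankings – the margin attached to each numbered entry below is just the record holder’s total measured against the best mark by a different player. A minimal sketch in Python, using two runner-up figures as commonly cited (Pete Rose’s 746 doubles, Lou Brock’s 938 steals):

def margin_over_runner_up(record, runner_up):
    # Percentage by which a record exceeds the next-best total by another player
    return (record - runner_up) / runner_up * 100

# Tris Speaker's 792 doubles vs. Pete Rose's 746:
print(round(margin_over_runner_up(792, 746), 1))   # 6.2
# Rickey Henderson's 1406 steals vs. Lou Brock's 938:
print(round(margin_over_runner_up(1406, 938), 1))  # 49.9

Those two outputs match the figures you will see attached to the Speaker and Henderson entries below.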
Honorable Mention
A. Johnny Vander Meer, Consecutive No-Hitters, June 11 & 15, 1938.
Vander Meer’s is more in the nature of a single feat than a streak, but the fact is, Major League Baseball has been around for 131 years, and in all of that time, only one man has pitched back-to-back no-hitters. The rarity of the thing, given that many opportunities, argues for its impressiveness.
B. Carl Hubbell’s 24 Regular-Season Wins Without a Loss, July 17, 1936-May 27, 1937
Hubbell’s win streak is impressive and tops the #2 on the list (Rube Marquard) by 20%. On the other hand, it’s somewhat artificial because (1) it overlaps two seasons and (2) during the streak he lost Game 4 of the 1936 World Series. If the streak were longer (see below) I might have listed him, but either way it is still an impressive feat.
C. Ed Reulbach, two shutouts in one day, September 26, 1908.
Granted, doubleheaders have always been somewhat rare and it’s been decades since anybody pitched both ends of one, so Reulbach, unlike Vander Meer, didn’t have as much potential competition. Even so, it’s a significant accomplishment to be the only one to do it.
D. The Consecutive Complete Games Record
The record for consecutive starts with a complete game is commonly thought to belong to Jack Taylor, variously attributed as 185, 187 or 188 between 1901 and 1906 (the most thorough examinations seem to support the 185 number; when I was younger I recall it being listed as 176). But back before they moved the mound in 1893, Jack Lynch seems to have thrown 198 straight in the American Association in 1883-87 and 1890, although the one in 1890 after a 3-year absence involved him absorbing 18 runs on 22 hits, and I have no idea what he’d been doing in the interim.
Even with the uncertainties and the prevalence of complete games in those days, though, finishing that many in a row over a period of 5-6 years is really hard work. So these guys get the Honorable Mention. Now, for the list – the number in parentheses is the percentage by which the record exceeds the next best total by another player:
20. Tris Speaker, 792 Career Doubles (6.2%)
Speaker’s doubles record is a mountain few have approached. #2 on the list is Pete Rose, and he needed 15,000 plate appearances (a good 30% more than Speaker) to get within 50. Craig Biggio hits gobs of doubles, has been incredibly durable and is in his 20th season, and Biggio still needs 131 doubles to catch Speaker. Speaker did play the second half of his career in a good era for doubles, and played nearly his whole career in two great doubles parks – Fenway and League Park in Cleveland, which also had a high, close fence (60 feet high and 290 feet away in right) you could bounce doubles off.
19. Ichiro Suzuki, 225 Singles in 2004 (9.2%)
If you look atop the single season singles record list, you will find it dominated by 1890s hitters Willie Keeler and Jesse Burkett, from an era when league batting averages ranged from the .290s to as high as .309. Yet, in an age of the longball, Ichiro the Throwback left Keeler’s record in the dust. Swimming against the modern offensive tide, and in an extreme pitcher’s park no less (Ichiro that season hit .338 at home, .405 on the road) makes his accomplishment more impressive.
18. Nolan Ryan, 7 Career No-Hitters (75%)
The no-hitter is something of a flukey one-game achievement, or this record would rank higher, but only two pitchers have thrown 4 no-nos, and Ryan almost doubled the total of #2 man Sandy Koufax, throwing no-hitters in three decades.
17. Billy Hamilton, 192 Runs Scored in 1894 (8.5%).
Hamilton played in the best of circumstances for the scoring of runs – the highest-scoring season ever, a loaded lineup that set the all-time record by hitting .349 as a team and including three other .400 hitters. But then, he still scored 8.5% more runs than anyone else in his era, and his record has never been seriously challenged even though it was set in a 129-game season. And, of course, scoring runs is the whole point of the game, and you get a lot less help from teammates than with RBIs; this is the most prestigious sort of record.
16. Rickey Henderson, 130 Steals in 1982 (10.2%)
Rickey’s single-season steals record stands out – even further now than it did at the time; Brock had stolen 118 nine years earlier, and Vince Coleman would steal 110 three years later as a rookie, the first of three straight 100+ seasons. I’d rate Rickey higher but for the fact that he was caught a record 42 times; he would have helped his team more if he’d attempted 120-130 steals instead of 172. That said, the 1982 A’s were a team that had rapidly collapsed from a contender, so Rickey gave a lot of excitement to fans who had little else.
Either way, the record was partly a matter of choice, and less impressive for being so.
15. Owen “Chief” Wilson, 36 Triples in 1912 (16.1%)
Not only did Wilson set the triples record by a comfortable 36-31 margin, but he finished 10 triples (38%) ahead of the nearest 20th century competitor. It’s rare to see anybody reach mid-May anywhere near Wilson’s pace. It’s just a freakish accomplishment for a guy who played seven seasons as a regular and cracked 20 triples only the once.
14. Walter Johnson, 110 Career Shutouts (22.2%)
And note that Johnson is 39.2% ahead of the #3 guy, Christy Mathewson (Grover Alexander is #2). 110 shutouts is an astonishing figure, a shutout every six starts and more than a quarter of his 417 career wins (he needed them too – Johnson played for good teams and bad, but the latter were sometimes appalling, like the team whose RBI leader drove in 44 runs). Johnson did pitch in the best time for shutouts, the era when ERAs were low and unearned runs were rarer than in the 1880s, and when aces finished their starts. He did throw 24 shutouts in 8 years from 1920-27, though.
13. Cal Ripken, 2,632 consecutive games played, May 30, 1982-September 19, 1998 (23.6%).
Ripken’s streak is commonly listed at or near the top of lists like this, but it’s not by any means unbreakable – you just need to want it badly enough, be healthy and lucky and a good enough player not to get benched. Unlike the pitching workload records, it’s not a feat of spectacular physical endurance, nor does it require any particular skill or accomplishment.
All that said, 16 years without missing a game – including several years of not missing an inning – is nonetheless an impressive feat of willpower and durability, and Ripken left Lou Gehrig three seasons in the dust. That deserves some recognition here.
12. Hank Aaron, 6856 Career Total Bases (11.8%)
Aaron’s homer record may be under siege, but his career total bases record, held by a margin of some 700 over Stan Musial and nearly a thousand ahead of #4 Barry Bonds, remains safely out of reach. Aaron had 3771 hits, 98 triples and 624 doubles to go with 755 HR. To do that required durability (15 straight seasons of over 600 plate appearances, 19 straight of over 500, and the first year he fell short he still hit 40 homers), consistency, tremendous power and a good batting average, and he did it despite playing more than half his prime years in a pitchers’ park and running his career straight across the low-scoring 1960s.
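Those components square exactly with the record, since total bases count a single once, a double twice, a triple three times and a home run four times. A quick arithmetic check – my sketch, using only the career figures quoted above:

# Aaron's career line as quoted: 3771 hits, 624 doubles, 98 triples, 755 HR
hits, doubles, triples, homers = 3771, 624, 98, 755
singles = hits - doubles - triples - homers              # 2294
total_bases = singles + 2*doubles + 3*triples + 4*homers
print(total_bases)                                       # 6856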
11. Old Hoss Radbourn, 59 Wins in 1884 (11.3%)
Unlike the innings record, winning a huge number of games in a season requires more than just showing up for work. Even at the height of the everyday starting pitcher’s era, only three pitchers ever won 50 games in a season, and Radbourn beats the next closest (John Clarkson in 1885) by six wins despite having pitched, much unlike Clarkson, for a team that finished fifth in the league in runs scored. The man ended the 1884 season 47 games over .500 in a 112-game season, almost singlehandedly winning his team the pennant, and he did it the hard way, by posting a league-leading 1.38 ERA in a near-the-record 678.2 innings, and topped it off by winning all three games of the first-ever postseason ‘world’s series’ without allowing an earned run.
1884 was the pinnacle of high-inning starting pitching (average innings started falling off sharply within two years), and talent was spread thin that year due to the upstart Union Association at a time when the two leagues barely had enough talent as it was. So, that counts against ranking Radbourn’s feat even higher. But it’s no exaggeration to say that he did more to help his team win that season than any player ever in any season.
10. Ty Cobb, .366 Career Batting Average (2.2%)
Cobb’s margin over Rogers Hornsby is the narrowest of any record on the list, but he well deserves the high ranking. The lifetime batting average record is one of the game’s most important and prestigious, and Cobb has held it wholly unchallenged for eight decades. I believe Hornsby and Al Simmons were the last significant players to crack .360 more than a season or two into their careers, and I don’t believe anyone has actually been ahead of Cobb at the end of a season at any point since (Joe Jackson was above .370 through age 24, Willie Keeler through age 30). Plus, Cobb did most of his damage before the high-average 1920s arrived; at the end of 1919 he was a 32-year-old lifetime .372 hitter. Plus, unlike other percentage record-holders, such as Ed Walsh with his career ERA record, Cobb held his pace over an extraordinarily long career, 24 seasons and more than 13,000 plate appearances.
9. Eric Gagne, 84 Consecutive Saves, August 28, 2002-July 3, 2004 (39.2%)
Gagne’s streak, like Hubbell’s, was sort of interrupted, albeit by a blown save in the All-Star Game. And yes, saves are something of an artificial stat. But still, Gagne’s whole job was to close out wins, and for nearly two years he did that every time he was asked without fail, surpassing the prior record (Tom Gordon with 54) by a margin of 30 saves.
8. Pedro Martinez, 0.737 WHIP in 2000 (4.3%)
Walks plus hits per inning pitched, or WHIP, is a bit of an obscure stat – or was until the dawn of rotisserie baseball – but it’s a real measure of pitching excellence to hold the all-time record for it. Pedro’s also third on the career list, surrounded entirely by a top 10 of deadball-era pitchers like Walsh and Addie Joss and Three Finger Brown. His single-season record is 4.3% ahead of #2 Guy Hecker in 1882, but Hecker pitched just 104 innings; he’s 5.8% ahead of Walter Johnson’s 1913 season.
I rate Pedro this highly because, while other players on this list reached their accomplishments under less than ideal conditions, nobody else set one so much in the teeth of hostile conditions. Pedro did this in Fenway Park in 2000, in a hitters’ park (Pedro’s road WHIP was 0.680) in a league with a 4.91 league ERA; he led the league in ERA by a margin of two runs and Mike Mussina at 1.187 had the only other WHIP in the league below 1.200.
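The arithmetic behind the record is as simple as the feat was hard. A sketch using Pedro’s 2000 line as commonly listed – the 32 walks, 128 hits and 217 innings are my figures, not the post’s:

# WHIP: walks plus hits per inning pitched
walks, hits, innings = 32, 128, 217.0   # Pedro Martinez, 2000, as commonly listed
whip = (walks + hits) / innings
print(round(whip, 3))                   # 0.737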
7. Barry Bonds, .609 OBP, 232 Walks, 120 Intentional Walks in 2004 (10.2%, 36.5%, 266.7%)
All three of these records are integrally related, so I rate them as a single accomplishment. Bonds busted Ruth’s walk record by 62 and Ted Williams’ OBP record (set in his .406 season) by more than 50 points, and he did so in good part because he surpassed the second-highest non-Bonds IBB total (Willie McCovey’s record) by a margin of 120-45. (Note that they didn’t keep IBB in Ruth’s day; he almost assuredly topped McCovey’s total in the years before Gehrig came up.)
Yes, steroids. But still, taken on its own merits, those are mind-blowing margins on a couple of records I’d never thought would be broken.
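For what it’s worth, the OBP figure checks out against Bonds’ commonly listed 2004 line – the component numbers (135 hits, 232 walks, 9 times hit by pitch, 373 at bats, 3 sacrifice flies) are my assumption, not the post’s:

# OBP: times on base over at bats plus walks plus HBP plus sacrifice flies
hits, walks, hbp, at_bats, sac_flies = 135, 232, 9, 373, 3
obp = (hits + walks + hbp) / (at_bats + walks + hbp + sac_flies)
print(round(obp, 3))   # 0.609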
6. Babe Ruth, .690 Career Slugging Percentage (8.8%)
Only 34 times in the game’s history has anybody but Ruth slugged above .690 in a season; aside from Albert Pujols, who is still early in his career, only five other players have career figures above .600. Bonds is 82 points behind Ruth. The Babe sustained this pace over a 22-year career, leading the league 13 times in 14 years and, in the seasons when he had enough at bats to qualify, finishing lower than third only once.
5. Joe DiMaggio’s 56-Game Hitting Streak, May 15, 1941-July 16, 1941 (27.2%)
Joe D’s streak – unlike Gagne’s – would be 57 if you counted the All-Star Game. What makes it even more amazing, as you probably know, is that he started a 17-game streak the day after this one ended. Another player could get hot and break this one, and I don’t list it quite as high as the season and career records that follow, but it is nonetheless a sustained accomplishment of consistency, and the margin compared to the next-closest streak (Keeler and Pete Rose at 44 apiece) places it very high on this list.
4. Mike Marshall, 106 Games and 208.1 Relief Innings in 1974 (12.8%, 23.8%)
Unlike White’s innings as a starter, Marshall’s workload passes the “wow” test – it was recognized as a jaw-dropping accomplishment at the time it happened, and nobody else has tried anything like it since. The innings total is the real whopper here (if you are wondering, the #2 non-Marshall total belongs to Bob Stanley; the Steamah threw 168.1 innings in relief in 1978). Some LOOGY may yet challenge the games record a third of an inning at a time, but that relief innings record, though not really set so long ago, will never again be approached.
3. Nolan Ryan, 5714 Career Strikeouts (23.3% and falling)
Ryan’s margin is being eaten away by the #2 man, who as of this morning is Roger Clemens, 17 Ks ahead of Randy Johnson. But both are ancient – Clemens is 44, Johnson is 43 – and more than a thousand strikeouts behind Ryan. Ryan maxed out the record in every direction – he started very young (19), set the single-season record at his peak, and pitched until he was 46. He threw heavy workloads at a very high strikeout rate. Yes, Ryan pitched in a great era for power pitchers, but he buried the record far from his most impressive contemporaries and way out of reach of anybody before or since.
2. Rickey Henderson, 1406 Career Steals (49.9%)
Rickey’s record is just preposterous – nobody could have imagined when Lou Brock set the career steals record that somebody would not just blow by Brock but get halfway to lapping him. Like Ryan, Rickey started early, peaked above everyone else and stayed ridiculously late, and ended by putting his record so far out of reach that nobody will even talk about it again.
1. Cy Young, 511 Career Wins, 7354.2 Career Innings, 749 Career Complete Games (22.5%, 22.5%, 15.9%)
I’d be disinclined to rate Young at the top for mere durability, but first of all he ran off and hid with the career wins record, and hardly any record is more significant or prestigious; he did that in part by having the ninth-best career ERA relative to the league (by ERA+) of anybody with more than 2500 career innings, sixth-best among anybody with 3,000 innings, and he threw more than twice that. And second, while it’s true that plenty of guys carried heavy workloads in Young’s day, and while it’s true that by the end of his career Young was facing guys who would have long pitching careers, Young and Young alone was able to do both, which is why his records stand so far and away beyond anyone in his era, before or since.
Consider this illustrative chart. Among all the pitchers who threw 400 innings in a season even once, only 12 of them managed to stay in a rotation (100 or more innings or 20 or more starts) for more than ten seasons, and everybody but Cy hit the wall by 14 seasons. I list each pitcher with their number of seasons throwing 400, 200 and 100 innings:

Pitcher 400 IP 200 IP 100 IP
Cy Young 5 19 22
Pud Galvin 9 13 14
Kid Nichols 5 13 14
Tim Keefe 7 11 14
Bobby Mathews 7 9 14
Vic Willis 1 13 13
Adonis Terry 1 9 13
Mickey Welch 6 11 12
Tony Mullane 6 11 12
Old Hoss Radbourn 6 11 11
John Clarkson 6 9 11
Gus Weyhing 5 11 11

Note: Pud Galvin threw about 100 innings in the National Association; if you discount that, Young’s margin for Major League innings expands. The chart includes as well Bobby Mathews’ NA experience. Also, Kid Nichols, Young’s nearest contemporary, won 20 games twice in the minor league Western Association in mid-career and then returned to be a top major league pitcher without missing a beat, so he would be closer to Young than anyone else, but still far behind.
This is why Young stands alone at the top. Nobody can match his ability to carry those huge 19th century workloads and keep going into his 40s.

Why I’m With Rudy (Part I)


RS: https://web.archive.org/web/20070202033118/https://redstate.com/stories/elections/2008/why_im_with_rudy_part_i

No Mayor of New York City has been elected to statewide or national office in more than 130 years. There is a reason for that: it’s an impossible job, running an ungovernable, bloated, corrupt and dysfunctional bureaucratic leviathan with an even more ungovernable and (despite its massive government) inherently lawless city attached to it. It eats the men who take the job alive.
At least, that is what everyone used to think, before 1993. One man changed that.
It’s too early, of course, for any of us to be 100% certain of who we will support once the candidates have filled out their staffs and endorsements, fleshed out their policy platforms, and taken their show on the road. But if I had to vote today among the candidates who are actually running or likely to run, my vote for the 2008 GOP presidential nominee would without a doubt be former United States Attorney and New York City Mayor Rudolph Giuliani. Here, in general outline, is why I – as a pro-life Reaganite conservative who voted for McCain in 2000 – currently support Rudy and hope to be able to support him a year from now.
1. We Need To Win The War.
There are many issues on the table in the next election, but far and away the most important remains the global battle against international terrorism and radical jihad, and in particular the regional struggle in the Arab and Muslim worlds to replace aggressive, terror-sponsoring tyrannies and weak, terror-harboring failed states with governments that provide some measure of popular self-determination and popular legitimacy to stand against the extremists. To win the war, we need four things from the presidential field: (1) a presidential candidate who is committed to prosecuting the war, (2) a presidential candidate who will make the right judgments about how to do so, (3) a presidential candidate with these characteristics who will actually get elected, and (4) a presidential candidate who, after getting elected, can continue to explain and sell his policies to the American people to ensure continued political support for the war.
In terms of public leadership on these issues, Mayor Giuliani and Senator McCain have a huge lead over the other candidates, most of whom (other than Newt Gingrich) are latecomers at best to the public debate. If there is one candidate we can depend on not to bend to Beltway pundit fatigue on this issue, it’s Mayor Giuliani – he was there on the ground when this war came to our shores, he was almost killed himself that day, he went to the funerals of the firemen and cops he had bonded with over his prior 7 and a half years as mayor. It’s personal. Rudy is a battler; he is not temperamentally suited to talking about “exit strategies” but rather about victory, and his background overcoming supposedly insurmountable obstacles as Mayor gives him the fortitude to pursue victory as Ronald Reagan did, even when conventional wisdom says it’s time to coexist.
2. We Need To Win The Election.
As I said above, you can nominate the best candidate in the world, but to win the war he or she needs first to win the election. In terms of electability. . . yes, it can be a fool’s errand for primary voters to vote with their Electoral College calculators instead of their hearts, but in a practical universe you do need to start by looking at who in the field has at least a chance of being viable in a national election. That means no Newt, who consistently polls with a disapproval rating over 50% and whose public image is long since cast in concrete. And it also means no Sam Brownback. I like Brownback, who is one of the GOP’s very best Senators and who has shown a real willingness to follow his conscience even when it means standing nearly alone, sometimes against the White House (as in the Harriet Miers episode), or even when it means taking on issues that nearly nobody else cares about and that don’t fit the stereotype of a right-wing hard-liner. But we simply are not going to hold all the states Bush won in 2004, let alone have the chance to seize more ground, behind the decidedly uncharismatic Brownback, who has made his name almost exclusively on social issues as – yes – the stereotypical right-wing hard-liner. The media would work overtime to put him in that box, and Brownback lacks the star power to go over their heads. He’s not the hill I want to die on.
Also, remember: while it’s true that the Democrats made a huge miscalculation in nominating John Kerry based on “electability” (not that Howard Dean would have been more electable), their real problem was in overvaluing his paper qualifications (war record, long tradition of existence in the Senate) and undervaluing how badly Kerry would perform on the trail. I believe Rudy will show himself to be the best campaigner in the GOP field – he’s quick-witted, funny, and long accustomed to the hot lights of the national stage (when he was Mayor, Rudy was a fixture on national TV shows like Letterman and Conan, and he had to contend with both the local tabloids and big national papers like the NY Times breathing down his neck, as well as dealing with hostile critics retail at countless press conferences and radio call-in appearances). He’s also tough enough to come out swinging at whatever the most likely Democratic nominee, Hillary Clinton, can throw at him. This is one of my big worries about Mitt Romney: to be frank, I don’t want to end up in a knife fight with Hillary armed with nothing but Mitt Romney’s hair.
Sure, Rudy’s liberal record on social issues like abortion and gay rights will cost him some votes nationally, but mainly in states that are not going to break for an arch-liberal Democrat like Hillary or Obama. And Rudy will play well in Florida and put in play key Northeastern states the Democrats can’t win without: New Jersey, Pennsylvania, possibly even Rhode Island (which has a huge Italian-American population), and make the GOP ticket at least competitive in the Empire State (though he would probably only win NY if the Dems nominate Edwards). This is, after all, a man who won two terms in a city that’s 80% Democrat.
3. Leadership Matters.
There’s more to Rudy’s advantages in this regard than just electability – there’s also governability. It’s been 23 years since the GOP nominated a presidential candidate who speaks in complete sentences. That matters beyond the campaign trail – it matters quite a lot if the president can’t sell his own policies and can’t personally defend against attacks. Rudy’s not the only highly articulate candidate in this field, but he does strike me as the best (Romney, who’s a good salesman, has yet to demonstrate the ability to react quickly and speak specifically when pressed with tough questions).
But being articulate is only the beginning of leadership. A good leader has to set direction and inspire. But he also has to do two other things: (1) know his followers and (2) follow through.
It’s on the first point where I have my major concern about John McCain. With the significant exception of his years in the POW camp, McCain has never been a leader. Yes (unlike, say, Kerry), he’s been a strong public voice on specific issues. But a political leader needs to have followers and hold them together, like Moses crossing the Sinai. McCain, by contrast, is a triangulator, a “maverick” who glories in contrasting himself to the people he would need to lead. John Hinderaker said it best: “I might trust McCain with my life, but not with my party.” One need look no further than Bill Clinton to see what damage a president who triangulates can do to his own party and, ultimately, his own ability to get things done. McCain has, too often, opened fire on his own troops.
With the exception of his ill-fated endorsement of Mario Cuomo over George Pataki, Rudy has not made a practice of attacking his own party, a fact that sets him quite apart from many other moderate/liberal Northeastern Republicans. Virtually all the major battles of his mayoralty were with people to his left. Conservatives may not like where Rudy’s starting point is on every issue, but they know when they get behind him they will all be facing in the same direction.
McCain has also been something of a dilettante as a Senator, flitting among issues, sometimes on the sidelines on major issues while leading the charge on small, idiosyncratic campaigns. That’s a highly effective habit for a legislator – you pick your spots for where you can make the biggest impact. But it’s a decades-long habit he will have to break to become an executive (in 2000 he never did roll out the kind of detailed policy papers that came from the Bush campaign – you always got the impression that the John McCain policy shop began and ended with the Senator’s mouth).
Then, there’s the follow-through, something we need more of than we have sometimes seen from President Bush. In the Senate they talk of show-horses and work-horses; if Rudy is an impressive show horse he is an even more formidable work horse, a guy who through sheer force of will bent the New York City government to his way of doing things. And he got results. Other politicians can point to a record of accomplishment, but only Rudy really and definitely changed my life – if it weren’t for his success in cleaning up New York I might have stayed in Boston after law school and surely would not now be a New York City homeowner.
Again: Rudy’s not the only seasoned executive in the race (Romney, Huckabee and Tommy Thompson come to mind), but his record is the most impressive and it’s one that McCain and Brownback can’t match.
4. We Can Hold The Line In The Courts.
Rudy’s record on fiscal, economic, law enforcement and education issues, his battles against racial preferences and the city’s relentless race hucksters, and his outspoken stance on the war on terror, are all the stuff that should excite conservatives about his candidacy. But what concerns people the most is his stance on social/family/sexual issues in general, and abortion in particular.
Now, maybe I’m less of a purist than some pro-lifers. I’ve been voting in New York for 17 years, and in all that time and all the races for Governor, Senator, Attorney General, Congressman, Mayor, and electors for president, the only two winning pro-life candidacies I’ve been able to vote for were Al D’Amato’s re-election to the Senate in 1992 and Dennis Vacco’s run for Attorney General in 1994. Where I come from, if you refuse on principle to vote for pro-legal-abortion candidates, you cede the field to Hillary, Schumer and Spitzer and their ilk.
That said, and while I recognize that there are other Life issues on the agenda, the core battlefield for abortion – the battle we need to win before we can fight any others – is in the composition of the Supreme Court. A pro-choicer who appoints good judges is as functionally pro-life as Harry Reid is functionally pro-choice. (I have discussed this issue in much more exhaustive detail before). And while we need to hear much more from him on this issue, there is, thus far, every indication that Rudy is both willing to appoint conservative judges and able to sell them against a hostile Senate – he’s spoken favorably of John Roberts and Samuel Alito, who he knows from their days in the Reagan Justice Department.
And while Mike Huckabee is a solid pro-lifer and Sam Brownback is a genuine hero on life issues, the other top-tier candidates are less obviously reliable on this issue. Romney, of course, declared himself a committed pro-choicer in 1994, though his repeated conversions on the issue lend a lot of credence to Ted Kennedy’s description of him as “multiple choice” on abortion. McCain has a more consistent pro-life record and voted to confirm the likes of Alito, Clarence Thomas and Robert Bork to the Supreme Court, but three things concern me about McCain on judges – first, his demonstrable willingness to sell out the base to win media plaudits, second, his statements in 2000 that he’d like Souter-backer Warren Rudman as his Attorney General and that he remained proud of all the GOP Justices he’d voted for (which implicitly included Souter and Kennedy), and third, the fact that McCain’s political identity is so wrapped up in his campaign finance crusade, a crusade that may influence him to pick judges who take the written constitution with its pesky free-speech guarantees less than seriously. I’m not saying I’m sold that Rudy would be necessarily better at appointing judges than Romney or McCain, but (1) it’s a close contest and (2) he’d obviously be better than any Democrat.
Life issues are, indeed, important. And if this were peacetime, they would preclude me from supporting Mayor Giuliani. But there’s a war on, folks, and a lot of lives (born and unborn) depend on that, too. In this field, if Mayor Giuliani can make the sale that he will, in fact, appoint solid constitutionalists to the federal courts, that will tide us through.
Anyway, I haven’t covered the entire waterfront on Rudy here, and surely will return to other points in his favor – and other criticisms of him – as we go along. But I do think conservative Republicans who want to win the election, win the war and get results should give the Mayor a long, hard look.
*In the spirit of full disclosure: I do have a variety of ties to Rudy that are not worth tedious rehashing here, having met the man in small gatherings on several occasions and received a fellowship in law school funded by an organization including Mayor Giuliani. Take that for what it’s worth. I’m not affiliated with his exploratory committee, and the only money I’ve received from it is a $30 Blogad on my site the past month.

How Wrong Was Josh Marshall?

Plenty Wrong.

Now that it has been revealed that the main source for Bob Novak’s column “outing” Valerie Plame Wilson as a CIA employee was Richard Armitage – Colin Powell’s right-hand man at the State Department and (like Novak) no fan of the Iraq War – with Karl Rove and a CIA spokesman merely confirming what Novak had already been told by Armitage, and that the White House was kept in the dark for many months, at a minimum, about Armitage’s role, it is clear that there was never any validity to the notion that Novak’s column was the result of some neo-conservative cabal seeking retaliation against Wilson and his wife for Wilson’s publication of a NY Times Op-Ed detailing what should have been a classified intelligence-gathering mission to Niger. This “neocon retaliation” theory was, as you will recall, the central and original theory of why the Plame story was a scandal at all, rather than a one-day story about a run-of-the-mill imprudent leak – one that would not even crack the top ten most damaging leaks of the past five years.

Joe Wilson himself, of course, was the original source of this theory. But I thought it would be instructive to look back at one of the main blogospheric advocates of that theory – Josh Marshall – to get a full sense of how long and hard he pushed this notion, and thus how badly he ended up leading his readers astray. (I may get to look back at some of the other top Plame-ologists of the Left, but Marshall was perhaps the most visible and this post is long enough as it is). In Marshall’s case, the conspiracy theory was particularly attractive because it fit in with his broader attack on Vice President Cheney and the “neocon” advisers in the Vice President’s office and the Defense Department – indeed, Marshall repeatedly tried to retail a particularly baroque explanation in which the “outing” of Mrs. Wilson was tied to forged documents passed through Italy relating to Niger.

I should start by noting that re-reading Marshall’s archives reminds me how slippery he is – he truly is a master of implying things without coming out and saying them. But the sheer volume of his posts on this story has, unsurprisingly, yielded up more than a few instances of Marshall actually saying what he intended his readers to believe:


Unhealthy Fixation

Tuesday’s fun with the “chicken hawk” argument was, at first blush, about yet another of the stupid arguments you encounter (from Left and from Right) in political debates, an ad hominem that feels good to toss around but makes no logical sense. But this argument is much more than that: it’s political hemlock that the Left/liberals/Democrats can’t seem to stop imbibing, with catastrophic consequences in the 2004 election. You would hope that they’ve learned something from that. Let me count the ways:
1. The Wesley Clark Boomlet: One of the problems the Democrats faced, once Howard Dean flamed out, was the absence of meaningful alternatives to John Kerry that anti-Kerry voters could rally around. One reason for that was the time wasted in the fall of 2003 fawning over Wesley Clark, whose only qualification for running was his military experience. The willingness of Democratic pundits, bloggers and (for a time) voters to swoon over Clark’s military pedigree was a bad early sign of their confusion of military experience with good ideas on foreign policy. Significantly, some of the biggest Clark boosters in the blogosphere, like Kevin Drum and Mark Kleiman, were the same people who went ga-ga over the “AWOL Bush” story. Coincidence? I think not. They convinced themselves that you could defeat Bush in a foreign policy debate by comparing Clark’s distinguished service record to Bush’s.
2. The Rise of Michael Moore: Moore had been on the political scene for some time, with his books and movies. But you may recall that his first direct insertion into the campaign came in January 2004 when he endorsed (who else) Wesley Clark and, in the process of his endorsement, called President Bush a “deserter.” In retrospect, that was the best opportunity then and there for somebody to smack down Moore and keep the debate focused on things that happened less than 30 years ago. Nobody did; to the contrary, Moore kick-started a blog and media frenzy over the previously dormant AWOL story, setting off, among other things, comments from DNC Chair Terry McAuliffe on the subject. This created a monster, as Moore quickly learned that he could say whatever he wanted and still be embraced by the party’s leadership.
3. The Kerry Nomination: Of course, the biggest debacle of all was the decision to nominate John Kerry. I believe, and I doubt too many people would disagree with me on this one, that Kerry would never have won the nomination had it not been for the widespread perception that he could take advantage of the distinction between his own combat record and Bush’s military service record. That calculation wound up overcoming a wealth of reasons, well known to many Democrats, why Kerry could be a terrible candidate.
Now, Kerry did have a decent resume at first glance (two decades in the Senate) and did have his strengths as a candidate, notably his startling aggressiveness as a debater. And he didn’t get blown out in November. But he did lose a lot of ground Al Gore had held, and as more than a few people pointed out during the primaries as well as later on, he was a sort of Frankenstein’s monster of bad candidate traits: in a Senate divided between work horses and show horses, Kerry is a show horse who doesn’t show well, a faux populist who’s bad with people, an orator who gives deadly dull speeches, a guy who’s all image and no substance . . . and his image is as a guy who’s dull, condescending, mean, arrogant, and insincere. A glass-jawed bully who picks fights and boasts “bring it on,” yet whines when attacked back. He’s basically spent thirty years living off youthful exploits that he himself denounced, hiding behind medals he pretended to throw away. And, of course, there was his famous inability to take a clear position and stick to it.
All of this was well known to Democrats. But they overlooked it all in their obsession with proving that Bush was a chicken hawk and Kerry a noble war hero.
4. The Convention: You know the story: the Democratic Convention produced almost no bounce in the polls, and turned out to be a missed opportunity to lay out a coherent message. Why? Does the phrase “reporting for duty” ring a bell? Yet another blind alley, as the Democrats stressed over and over the contrast in Kerry’s and Bush’s service records at the expense of talking about a winning strategy in the war on terror or even laying out a stronger and more detailed critique of Bush’s.
5. The Swift Boat Vets: We knew all along that Kerry would take some heat from Vietnam veterans over his conduct after the war. But nobody had really expected Kerry to suffer such damage from attacks on his service itself. There’s no question that those attacks were motivated and given more visibility by the extent to which Kerry sought to play the “I served and you didn’t” card.
6. Rathergate: The final way Bush’s critics went astray over their obsession with hunting chicken hawks was the fiasco of the 60 Minutes hack job on Bush’s National Guard service. Once again, the zeal of Bush critics who had pursued this story for five years overbore their judgment about the credibility of their sources, and led to a humiliating reversal that symbolized, for many voters, the media’s mania to get Bush by any means necessary. Worse for the Democrats, the report appeared in close coordination with attack ads rolled out by McAuliffe. (And I’m leaving out here the roles of Tom Harkin and Max Cleland.)
Could Bush have been beaten in 2004? It’s a debate that can rage on through political history, but those of us who lived through it, on either side of the fence, certainly thought it was at least possible, and at any rate a stronger race against him might have salvaged some of the down-ticket disasters for the Dems.
Most of us who supported Bush recognized that Kerry’s service record compared to Bush’s was a positive for Kerry. If the Democrats had left it at that, it would have helped them. But at every turn, the obsession of Bush’s critics with the “chicken hawk” argument – the idea that Bush’s lack of combat service wasn’t just one factor but a disabling fatal flaw for a wartime president – overbore their better judgment about sticking to the issues and the record, and wound up turning a positive into a series of disasters. Will they ever learn? Stay tuned.

You Can Tell A Man By The Company He Keeps

In light of John Kerry’s puzzling insistence on a go-it-alone approach to North Korea in Thursday night’s debate, I thought I’d make a little list. Admittedly, I’m doing much of this from memory, but there seems to be a certain consistency . . .

1. The North Vietnamese, during the Vietnam War, compared Ho Chi Minh to George Washington, argued that their war was one of national liberation, accused US troops of regularly committing war crimes and atrocities, called on Nixon to end the war immediately, argued that the people of South Vietnam would be happy to accept communism, and generally argued that the US war in Vietnam was immoral from beginning to end. John Kerry, during the Vietnam War, compared Ho Chi Minh to George Washington, argued that the North’s war was one of national liberation, accused US troops of regularly committing war crimes and atrocities, called on Nixon to end the war immediately, argued that the people of South Vietnam would be happy to accept communism, and generally argued that the US war in Vietnam was immoral from beginning to end.

2. The Soviet Union and its allies denounced the US invasion of Grenada in 1983. John Kerry denounced the US invasion of Grenada in 1983.

3. The Soviets, in the 1980s, denounced Ronald Reagan as a warmonger and a threat to peace for deploying missiles in Western Europe. John Kerry, in the 1980s, denounced Ronald Reagan as a warmonger and a threat to peace for deploying missiles in Western Europe.

4. Daniel Ortega, in the 1980s, denounced US support for the Nicaraguan contras and argued that the US should have peace talks with his regime. John Kerry, in the 1980s, denounced US support for the Nicaraguan contras and argued that the US should have peace talks with Ortega’s regime.

5. Moammar Qaddafi argued that Reagan’s bombing of Libya was unjustified and caused excessive civilian casualties. John Kerry argued that Reagan’s bombing of Libya was unjustified and caused excessive civilian casualties.

6. Our adversaries during and since the Cold War have argued that we were reckless and irresponsible by pursuing missile defense. John Kerry has argued that we were reckless and irresponsible by pursuing missile defense.

7. Fidel Castro has, for decades, regularly denounced US sanctions against Cuba. John Kerry has, for decades, regularly denounced US sanctions against Cuba.

8. In 1991, Saddam Hussein wanted to draw out the process of the Western response in the hopes that it would bog down. John Kerry said we should have drawn out the process.

9. Yasser Arafat has denounced the security fence erected by Israel. John Kerry has denounced the security fence erected by Israel.

We can add four more from the debate alone:

10. In 2002-03, Saddam Hussein wanted to draw out the inspections process and make it more multilateral. John Kerry says we should have drawn out the inspections process and made it more multilateral.

11. Kim Jong-Il wanted to have bilateral talks rather than multilateral talks. John Kerry says we should have had bilateral talks rather than multilateral talks.

12. Osama bin Laden says we helped him by invading Iraq. John Kerry says we helped bin Laden by invading Iraq.

13. The Iranian mullahs oppose US sanctions against Iran, wish to enter into agreements with the US, and insist that there are plausible reasons why a poor but oil-rich country needs nuclear power. John Kerry opposes US sanctions against Iran, argues that we should enter into agreements with Iran, and insists that there are plausible reasons why a poor but oil-rich country needs nuclear power.

Does Kerry have company on some of these stances? Yes. Can he defend some by pointing to occasions (as with Israel and Cuba policy) where he’s since taken the opposite position? Yes. Is he actually an unpatriotic America-hater? Of course not. But remember: Time and time and time again, America’s enemies have argued against us – and Kerry has echoed their charges. I’d rather trust the national defense to someone who’s not so quick to echo the words and strategies of our enemies.

(A partial list of sources: Kerry’s stances on Grenada and Nicaragua, the first Gulf War, the Cold War and Grenada again, the security fence, the Cold War again, Libya, Nicaragua again, and Grenada again, and Cuba).

Baseball Mom

Baseball, the sages tell us, is a game for fathers and sons. From games of catch and Little League coaches all the way to the big league world of Alomars and Ripkens and Bondses and Griffeys, we often think of how the game ties together generations of men. All of this is true, of course; hey, I got choked up at the end of “Field of Dreams” the first time I saw it, too.
But let’s not overlook one of the best gifts a boy can have growing up as a baseball fan: the Baseball Mom.


WHY BASEBALL STILL MATTERS: My September 11 Story

On Tuesday, they tried to kill me.

I am ordinarily at my desk between 7:30 and 8:30 in the morning, in my office on the 54th floor of one of the World Trade Center’s towers. Tuesday, I was running late – I stopped to vote in the primary election for mayor, an election that has now been postponed indefinitely. Thank God for petty partisan politics.

Around 20 minutes to 9, as I have done every day for the past five years, I got on the number 2/3 train heading to Park Place, an underground stop roughly a block and a half from the Trade Center, connected to it underground. The train made its usual stop at Chambers Street, five blocks north of my office, where you can switch to the local 1/9 that runs directly into the Trade Center mall. The subway announcer – in a rare, audible announcement – was telling people to stay on the 2/3 because the tunnel was blocked by a train ahead of us. Then he mentioned that there had been “an explosion at the World Trade Center.”

Now, I grew up in the suburbs, so maybe I’m not as street smart as I should be, but after living in the city a few years, you develop a sense of the signs of trouble (like the time there were shots fired in the next subway car from mine). I didn’t know what the explosion was, maybe a gas leak or something, but I knew that I was better off getting above ground to see what was going on rather than enter the complex underground. So I got off the train to walk to work.

When I got above ground, there was a crowd gathering to see the horror above: a big hole somewhere in the top 15-20 stories of the north tower (having no sense of direction, I thought that was Number 2 at the time, not Number 1 where my office was), with flames and smoke shooting out. I quickly realized it would not be safe to go into the office, despite a number of things I had waiting for me to do, so as I heard the chatter around me about a plane having crashed into the building (onlookers were saying “a small plane” at that point) and a possible terrorist attack, I turned away to start looking for a place to get coffee and read the newspaper until I could find out what had happened. That was when it happened.

The sound was a large BANG!, the unmistakable sound of an explosion but with almost the tone of cars colliding, except much louder. My initial thought was that something had exploded out of the cavity atop the tower closer to us and gone . . . where? It was followed by a scene straight out of every bad TV movie and Japanese monster flick: simultaneously, everyone around me was screaming and running away. I didn’t have time to look and see what I was running from; I just took off, hoping to get away from whatever it was, in case it was falling towards us. Nothing else can compare to the adrenaline rush of feeling the imminent presence of deadly danger. And I kept moving north.

Once people said that a second plane had hit the other tower – and I saw the damage was around halfway up, right where my office was, I thought, still confused about which tower was which – it appeared, at least, that the towers had survived the assault. I used to joke about this, telling people we worked in the only office building in America that had been proven to be bomb-resistant. I stopped now and then, first at a pay phone where I called my family, but couldn’t hear the other end. I stopped in a few bars, calling to say I was OK, but I still didn’t feel safe, and I kept moving north. In one bar I saw the south tower collapse, and had a sick feeling in my stomach, which increased exponentially when I saw Tower Number One, with my office in it and (so far as I knew) many of the people I work with as well, cave in. Official business hours start at 9:30, but I started reeling off in my head all the lawyers who get in early in the morning, and have for years. I thought of the guy who cleans the coffee machines, someone I barely speak to but see every day, who has to be in at that hour. I was still nervous, and decided not to think about anything but getting out alive. A friend has an apartment on 109th street, so I called him and kept walking, arriving on his doorstep around 1 p.m., where I finally sat down, with my briefcase, the last remnant of my office. I had carried a bunch of newspapers and my brown-bag lunch more than 120 blocks. The TV was on, but only CBS was broadcasting – everyone else’s signal had gone out of the Trade Center’s antenna.

Finally, the news got better. I jumped when there were planes overhead, but they were F-15s, ours – American combat aircraft flying with deadly seriousness over Manhattan. My wife was home, and she had heard from people at the office who got out alive. It turns out that my law firm was extraordinarily lucky to get so many people out – nearly everyone is now accounted for, although you hold your breath and pray until it’s absolutely everyone. As for the architect who designed the towers – well, we used to complain a lot that the windows were too narrow, but consider the strength of those buildings, how they stayed standing for an hour and an hour and a half, respectively, after taking direct hits from planes full of jet fuel. There are probably 10,000 to 15,000 people walking around New York today because the towers stayed up so long.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

By Wednesday night, the adrenaline was finally wearing off, and I was just angry. They had tried to kill me, had nearly killed many of the people I work with, and had destroyed the chair I sit in every day, the desk I work at, and the computer I do my work on. And that’s before you even begin to count the other lives lost. Words fail to capture the mourning, and in this area it’s everywhere. I finally broke down Thursday morning, reading newspaper accounts of all the firemen who were missing or dead – so many who had survived so many dangers before, only to run headlong into something far more serious, far more intentional. My dad was a cop, my uncle a fireman. It was too close.

The mind starts to grasp onto the little things: photos of the kids and from my wedding; the radio in my office that I listened to so many Mets games on, working late; a copy of my picture with Ted Williams (more on that some other day); the little Shea Stadium tin on my desk that played “Take Me Out to the Ballgame” when you opened it to get a binder clip; the new calculator I bought over the weekend. All vaporized or strewn halfway across the harbor. The things can mostly be replaced – they’re just things – but it’s staggering to see the whole context of your daily routine disappear because somebody – not “faceless cowards,” really, but somebody in particular, with a particular agenda and particular friends around the world – wants you dead.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

There’s a scene that comes to mind, and I’m placing it in the Lord of the Rings because that’s where I remember it, but feel free to let me know if I’ve mangled it or made it up. Frodo the hobbit has lived all his life in the Shire, where the world of hobbits (short, human-like creatures) revolves around hospitality and particular etiquette and family snobbery and all the silliest little things – silly, at least, in comparison to the great and dangerous adventure he finds himself embarked on. Aragorn, one of the Men, has been patrolling the area around the Shire for years, warding off invading creatures of all varieties of evil. Frodo eventually asks Aragorn whether he isn’t frustrated with and contemptuous of hobbits and the small, simple concerns that dominate their existence when such dangers are close at hand. Aragorn responds that, to the contrary, it is the simpleness and even the pettiness of the hobbits that makes the task worthwhile, because it’s proof that he has done his job – kept them so safe and insulated from the horrors all around them that they see no irony, no embarrassment, in concerning themselves with such trivial things in such a hazardous world. It has often struck me that you could ask for no better description of the role of law enforcement and the military: keeping us so safe that we may while away our days on the ups and downs of made-up games.

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

And that’s why baseball still matters. There must be time for mourning, of course, so much mourning, and time as well to feel secure that 55,000 people can gather safely in one place. The merciful thing is that, save for the Super Bowl and the Olympics, U.S. sports are so little followed in the places these evildoers breed – murderous men have little interest in pennant races – that they have not acquired the symbolic power of our financial and military centers. But that may not be forever.

But once we feel secure enough to try, we owe it to those who protect us, as well as to those who died, to resume the most trivial of our pursuits. Our freedom is best expressed not when we stand in defiance or strike back with collective will, but when we are able again to view Barry Bonds and Roger Clemens as the yardsticks by which we measure nastiness, to bicker over games. That’s why the Baseball Crank will be back. This column may be on hiatus for an undetermined time while the demands of work intrude – we intend to be back in business next week, though not without considerable effort – but in time, I will again offer my opinion of why it would be positively criminal to give Ichiro the MVP, and why it is scandalous that Bill Mazeroski is in the Hall of Fame. And then I’ll be free again.

Hall of Fame: Lou Whitaker, Dave Concepcion and Dave Parker

Hall of Fame Part 3: Lou Whitaker, Dave Concepcion and Dave Parker (Originally ran 12/22/00 on the Boston Sports Guy website):

SECOND BASEMAN
Lou Whitaker is a pretty easy one, in my book. No question whether Sweet Lou had the longevity – only Eddie Collins and Joe Morgan played more games at second base than Whitaker. It’s a tough position; a lot of guys get ruined turning double plays in traffic. And there was never any down time in the 18 seasons (not counting an 11-game cup of coffee in 1977) of Whitaker’s career. He was Rookie of the Year in 1978, and notched his two best slugging percentages in his last two seasons, 1994 and 1995 (when he was platooned). He never had an on-base percentage below .331, was over .360 eleven times, and finished his career at .363. He slugged over .400 fourteen years in a row, a very rare accomplishment for a middle infielder.
