Saturday, August 31, 2013

Facing reality: deterrence is no longer available...

... to the United States as a policy or objective. In Syria or anywhere else.

Following the President's announcement that he would be seeking Congressional approval for a military strike against the Assad regime in Syria, punditry-land is now full to overflowing with articles trying to make the case for bombing Syria or the case for not bombing Syria.

A typical pro-military action piece by William Saletan at Slate:
So why do it? Because if we don’t, things can get much worse. “We cannot see a breach of the nonproliferation norm,” Obama argued. We “have to make sure that when countries break international norms on weapons like chemical weapons that could threaten us, that they are held accountable.” This afternoon, Kerry elaborated: 
“A lot of other countries whose policy has challenged these international norms are watching. … They are watching to see if Syria can get away with it, because then maybe they too can put the world at greater risk. … [If] Assad can gas thousands of his own people with impunity … there will be no end to the test of our resolve and the dangers that will flow from those others who believe that they can do as they will. … [Iran] will now feel emboldened, in the absence of action, to obtain nuclear weapons. It is about Hezbollah and North Korea and every other terrorist group or dictator that might ever again contemplate the use of weapons of mass destruction. Will they remember that the Assad regime was stopped from those weapons’ current or future use? Or will they remember that the world stood aside and created impunity?” 
Impunity is a primitive idea. Kerry and Obama are saying that Assad must be punished. Obama calls it “repercussions,” a “shot across the bow,” a “signal that [Assad] better not do it again.” It’s not about saving Syria, much as we'd all like to do that. It’s about inflicting pain.
A typical anti-military action piece by John Hinderaker at Powerline:
First, there are, and have been, many regimes that abuse their people in various ways. Generally speaking, we do not undertake to “punish” them for doing so. While the use of chemical weapons on civilians is evil, it is not clear that it is any more so than the use of machine guns. It seems to me that we should not undertake to punish without a strong, and clearly defined, security interest.

Second, I have no confidence in our ability to calibrate a strike so finely–enough to punish, but not enough to tip the balance of power in the rebels’ favor. It seems highly likely that whatever we do will be either pitifully inadequate, or unduly heavy-handed. As the unfortunate experience of the “Arab Spring” shows, the last thing we want to do is inadvertently bring about an extremist Muslim regime in Syria.

Third, there is a good deal of truth to Colin Powell’s “Pottery Barn” theory–if we break it, we own it. I don’t see any happy outcome for Syria (or, for that matter, any Arab country) in any foreseeable time frame. Sending a few cruise missiles Assad’s way can’t influence Syrian history in any significantly positive way, but whatever we do, its impact will be exaggerated forever. We will find ourselves being blamed for whatever ills Syria suffers for the next 50 years, no matter how silly such claims may be. And apart from hyperbolic claims, any attack certainly will entail civilian casualties and other undesirable consequences...

In my view, if we are not prepared to bring about Assad’s demise–and we probably shouldn’t be–the best thing we can do is stand aside. Sometimes history is tragic, and there isn’t anything we can do about it.
Both pieces are of course operating under the umbrella of a limited military strike, since Obama has made it crystal clear this will be the only military action on the table. Saletan essentially takes it as a given that such a course of action will have a meaningful impact. Why? What is his basis for such an assumption? No doubt it is the same as pretty much everyone else with similar views: "we're gonna blow some stuff up and kill a bunch of people, and that will scare Assad into changing his behavior."

Friday, August 30, 2013

Objectives and an actual plan: these are good things to have

In the aftermath of the Iraq War, one of the criticisms leveled against the Bush Administration was that there was a plan in place to attack Iraq from day one, even before the events of September 11th, 2001. Supposedly, this plan was a product of the neoconservatives who filled up important roles in the Administration, from Paul Wolfowitz to Richard Perle. And in that regard, 9-11 became nothing more than an excuse to invade Iraq. Indeed, George Tenet's tell-all book--At the Center of the Storm--seemed to make this clear, when it relayed the following:
On the day after 9/11, he [Tenet] adds, he ran into Richard Perle, a leading neoconservative and the head of the Defense Policy Board, coming out of the White House. He says Mr. Perle turned to him and said: "Iraq has to pay a price for what happened yesterday. They bear responsibility."
It was damning stuff and seemed to confirm the story of the neocon plan. Unfortunately for the wingnut crowd, Tenet's tale was of a fairy sort, a fabrication, since Richard Perle was in France the day after 9-11 and did not return to the United States until September 15th. Tenet claimed later that he may have gotten the date wrong (yeah, right), but Perle insists he never said anything like the above, on that day or any other.

Still, the idea of a pre-conceived plan for invasion remained, owing greatly to the neoconservative think tank Project for a New American Century and the various white papers it produced, wherein increased U.S. military activity around the world was called for almost as a matter of course. Page six of this particularly critical white paper--released in September of 2000--lays out the primary objectives in this regard:
HOMELAND DEFENSE. America must defend its homeland. During the Cold War, nuclear deterrence was the key element in homeland defense; it remains essential. But the new century has brought with it new challenges. While reconfiguring its nuclear force, the United States also must counteract the effects of the proliferation of ballistic missiles and weapons of mass destruction that may soon allow lesser states to deter U.S. military action by threatening U.S. allies and the American homeland itself. Of all the new and current missions for U.S. armed forces, this must have priority.

LARGE WARS. Second, the United States must retain sufficient forces able to rapidly deploy and win multiple simultaneous large-scale wars and also to be able to respond to unanticipated contingencies in regions where it does not maintain forward-based forces. This resembles the “two-war” standard that has been the basis of U.S. force planning over the past decade. Yet this standard needs to be updated to account for new realities and potential new conflicts.

CONSTABULARY DUTIES. Third, the Pentagon must retain forces to preserve the current peace in ways that fall short of conducting major theater campaigns. A decade’s experience and the policies of two administrations have shown that such forces must be expanded to meet the needs of the new, long-term NATO mission in the Balkans, the continuing no-fly-zone and other missions in Southwest Asia, and other presence missions in vital regions of East Asia. These duties are today’s most frequent missions, requiring forces configured for combat but capable of long-term, independent constabulary operations.

TRANSFORM U.S. ARMED FORCES. Finally, the Pentagon must begin now to exploit the so-called “revolution in military affairs,” sparked by the introduction of advanced technologies into military systems; this must be regarded as a separate and critical mission worthy of a share of force structure and defense budgets.
By the way, the term "neocon" is rarely used correctly and has become something of a simple pejorative. To understand it properly, consider carefully the above objectives and the accompanying analysis in the white paper. Despite the ultimate agreement with some of these ideas--especially the last--people like Donald Rumsfeld, Dick Cheney, Condi Rice, and even George Bush are not, nor were they ever, actual neocons.

Wednesday, August 28, 2013

The President's metaphorical confusion

If you're going to use a metaphor, you damn well ought to be sure you know what it means, especially if you happen to be a political leader of some sort like, say, the President of the United States.

Back in July of 2011, in the middle of the Debt Ceiling Showdown in Washington, D.C., Obama had a meeting with some of the Republican leaders in Congress. In that meeting, Obama apparently got a little peeved and ended up issuing a pseudo-threat to Eric Cantor, saying to the House Majority Leader:
Eric, don't call my bluff. I'm going to the American people on this.
The problem with this, with telling someone not to call your bluff, is that you're admitting you have no hand! You're admitting that you're bluffing. I have to say, I'd love to play poker with the Supreme Leader, because if he's going to make it a point to announce to the table every time he bluffs, he's not going to win many hands.

But perhaps we can be a little forgiving, with regard to the President's metaphor fail in this instance. He was no doubt tired and certainly a little cranky. So he screwed up a metaphor, it didn't really do any damage. No one was hurt. Hey, it's not like he gave away military plans, secrets, or long-term objectives, right? And no one is going to screw up a metaphor so badly that it would do the above kind of damage, let alone end up putting American lives in jeopardy, right?

Yeah. Apparently, someone is. And that someone is President Obama.

Speaking about the situation in Syria today, Obama said the following in an interview with Gwen "Party like it's 1773" Ifill and Judy Woodruff (my boldface):
And if, in fact, we can take limited, tailored approaches, not getting drawn into a long conflict, not a repetition of, you know, Iraq, which I know a lot of people are worried about – but if we are saying in a clear and decisive but very limited way, we send a shot across the bow saying, stop doing this, that can have a positive impact on our national security over the long term...and may have a positive impact in the sense that chemical weapons are not used again on innocent civilians.
Hello? I'm pretty sure, Mr. President, that you already drew a line in the sand for Syria. Do you remember that?

Forgotten military strikes and long-term repercussions

As I write this, most news outlets are running stories on the likelihood of a U.S. military strike against Syria for the latter's apparent use of chemical weapons against anti-Assad forces in the Middle Eastern country (which has essentially been in a state of civil war for two years now). The consensus, by and large, is that such a strike or strikes is very likely. Similarly, the consensus is also that such military actions will be of a limited nature; "regime change" is apparently not on the table.

So one must ask, what is the point of such strikes? Punishment? A reminder of what could happen if the Syrian government doesn't play nice with the rebel forces? Or actual targeted destruction of military facilities in order to limit/destroy chemical weapons and other WMDs? Most assuredly, the Administration and Pentagon officials will point to the last. Of course, the effectiveness of strikes in this regard is dependent on effective intelligence, something that we often take for granted but have learned--the hard way--that we really shouldn't.

But what about the larger picture? Allowing that such limited military strikes are effective in the moment, what are the long-term consequences? The quintessential historical example--with regard to the United States--is Vietnam. Most remember now how the conflict there was an "escalating one," how the United States went from supplying advisors only to South Vietnam forces to supplying arms, then finally to actual U.S. troops and accompanying military apparatus, from planes to tanks to warships.

However, at first the United States simply initiated a series of limited airstrikes against North Vietnam targets, in response to supposed attacks on U.S. ships in the Gulf of Tonkin in August of 1964. It wasn't until March of the following year that any actual ground troops were sent to Vietnam as a fighting force. And true enough, that was the beginning of a massive escalation. To this day, many argue that President Johnson overstated--at the very least--the incidents in the Gulf of Tonkin in order to get to the point of full-scale war in Vietnam. The series of airstrikes that led up to war concluded with the start of the infamous Operation Rolling Thunder, a massive program of aerial bombardment lasting over three years, destroying infrastructure throughout Vietnam, yet ultimately failing to achieve its stated goals.

There are more recent examples of limited airstrikes by the United States, as well. For instance, President Bill Clinton engaged in a series of them against Iraq, starting with the launching of 23 cruise missiles into Baghdad on June 26, 1993. This attack was in response to the uncovering of an Iraqi-led assassination plot against former President George H.W. Bush in Kuwait two months earlier. Thus, it was very much a retaliatory strike, intended to demonstrate what the consequences would be for Saddam Hussein and Iraq if he continued to "misbehave."

Yet, Hussein did continue to misbehave. Despite the presence of UN personnel in Iraq (still trying to "inspect" Iraqi facilities for WMDs) and still-extant no-fly zones established after the Gulf War, Hussein launched an offensive against Kurdish forces in the north (in one of the no-fly zones). In response to this, Clinton once again turned to limited airstrikes against Iraq, ordering the launching of another 44 cruise missiles against targets in southern Iraq. From Clinton's statement on the attack (my boldface):
The Iraqi attack adds fuel to the factional fire and threatens to spark instability throughout the region. Our objectives are limited, but clear: to make Saddam pay a price for the latest act of brutality; reducing his ability to threaten his neighbors and America's interests.

First, we are extending the no-fly zone in southern Iraq. This will deny Saddam control of Iraqi airspace from the Kuwaiti border to the southern suburbs of Baghdad and significantly restrict Iraq's ability to conduct offensive operations in the region. Second, to protect the safety of our aircraft enforcing this no-fly zone, our cruise missiles struck Saddam's air defense capabilities in southern Iraq...

We must make it clear the reckless acts have consequences, or those acts will increase. We must reduce Iraq's ability to strike out at its neighbors and we must increase America's ability to contain Iraq over the long run.

Tuesday, August 27, 2013

Miley Cyrus and the irrelevance of culturally significant events

It's an amazing thing to witness, the huge reactions garnered by the actions of intentionally provocative celebrities, especially when those actions take place within a framework of an "awards show" that consists largely of navel-gazing self-promotion, based on such moments of provocation.

The latest episode: the antics of former child star Miley Cyrus (née Hannah Montana). Given the publicity her performances at the Video Music Awards on Sunday night are getting, one would think there was some significance to all of this, or at least that Cyrus had done something more than just be outrageous (or try to be so, truth be told).

Some time ago, the Super Bowl became the event for introducing new commercials to the American public (owing to the huge viewership, to be sure). In a weirdly similar way, the MTV Video Music Awards program has become the event for outrageous behavior by music industry performers (and other celebrities). The list of such moments is exceedingly long, but outside of hardcore fans, most people probably remember only a handful--if that--of them, like Madonna's kiss of Britney Spears and Kanye West's rude interruption of Taylor Swift.

I have to admit that the current "look at me" culture has left me behind to some degree. I realize it was always present to some extent, but it appears to have grown by leaps and bounds in the last couple of decades, largely due to the internet and the prevalence of cell phone cameras, I would guess. But at the same time, those obsessed with recognition seem to be running out of things to do to draw attention to themselves. In Cyrus' case, she simulated sex acts, stripped down to minimal clothing, and what else? Danced very badly?

In 1969, Jim Morrison--legendary front man for The Doors--found himself arrested and charged with indecent exposure five days after a concert performance at Miami's Dinner Key Auditorium by the band. Why? Long story short, Morrison was inciting the crowd, inviting it to come up on stage (trying to provoke a riot, really) past security and supposedly exposing himself while on stage.

This was Big News at the time, despite the fact that Morrison had routinely behaved the same way in various other venues for years. The microphone stand--for Morrison--was a stand-in for a sexual partner, figuratively and maybe sometimes even literally. But then, Mick Jagger and Keith Richards played the same sorts of games--with each other! gasp!--as well. And as for the so-called "twerking" of Cyrus and so many others, does no one remember Elvis Presley at all?

Monday, August 26, 2013

Judging activism in the Court

It's an often-invoked term--"judicial activism"--yet one that lacks a very clear meaning in the minds of some. I spent some time discussing the phrase previously, along with the other one that goes with it, "legislating from the bench," because both are being roundly misused in service to ideological agendas. More often than not, the terms are called into service by the Left whenever the so-called conservative wing of the SCOTUS rules in such a way as to overturn a law or a portion of a law. But such a usage is wholly incorrect. From my previous piece:
The basic idea is that the judicial branch should not be driven by policy and should not be engaged in policy-making, that it should accept the Constitution and the laws made by the legislature as they are, with regard to the original intent. Of course, there is a caveat: the Constitution is supreme, thus the implied power of Judicial Review, wherein the Supreme Court can determine the constitutionality of a law. For instance, the Constitution very clearly confers to the citizenry a right to "keep and bear arms," thus any law made by the legislature should not infringe on this right. If a law does, then it is the Court's duty to strike that law down. Simple, really. 
But this is not "legislating from the bench." It just isn't. It's the accepted and proper function of the Court, no more, no less.
Nor is such an action "judicial activism." Both terms refer to Court actions that engage in exactly the behavior forbidden to the judicial branch: making new laws. The Court can do this through the power of Judicial Review, it is true, but not by striking down what it deems are unconstitutional provisions in laws, but rather by expanding the meanings of laws already in place such that new power or authority is granted to the Federal Government.

Let's be clear about this. Assume there is a law or constitutional provision affording the government the power to do X. But government agents infer this law also allows them to do Y, because Y is similar in some respects to X. The actions are challenged in court by those adversely affected by the government's actions. Ultimately, the Court decides that even though Y is not explicitly allowed, it's somehow okay. This is justified by a convoluted legal opinion from the Court that ignores precedent and expands Federal power.

That is judicial activism. That is legislating from the bench. For what the Court has done in such a case is manufacture a new law (or laws) all on its own, a law that never was passed by Congress, that had never been codified anywhere, that exists only within a Supreme Court opinion, nowhere else.

Why am I bringing this up now? Because apparently the most senior "liberal" Justice on the Court doesn't comprehend any of this:
Justice Ruth Bader Ginsburg, 80, vowed in an interview to stay on the Supreme Court as long as her health and intellect remained strong, saying she was fully engaged in her work as the leader of the liberal opposition on what she called “one of the most activist courts in history"...

In general, Justice Ginsburg said, “if it’s measured in terms of readiness to overturn legislation, this is one of the most activist courts in history.”
Again, it is the job of the SCOTUS to "overturn" legislation that defies the Constitution. But in terms of her actual claim, I can't honestly say one way or the other if it is true. I seriously doubt it is, though. The Courts of the 1860's and 1870's, for instance, ruled against the government many times, certainly far more than the number of cases Ginsburg points to in her interview (that would be two, for those scoring at home). Ted Olson recently noted that most people "use the term 'judicial activism' to explain decisions that they don't like." It would appear Justice Ginsburg is using the term in exactly such a fashion, only in reference to decisions of which she does not approve.

Sunday, August 25, 2013

A pox on all of your paxi

History is replete with violence between and within states (in the sense of defined societies ruled by governments, of one form or another). We can go back to the Warring States era in Chinese history, to the age of Alexander the Great in the Mediterranean, to the unification of Upper and Lower Egypt, and before. While such conflicts can be attributed to a variety of apparent and specific causes, there remains a background reality to all of them: the control of resources as a means of obtaining/maintaining power. Because resources--of all sorts--are always limited with respect to a given time and place, some amount of conflict is inevitable of course, though it can manifest in a variety of ways, not all of them overtly violent. And for those that are violent, such activity need not always rise to civil war, war between states, much less world war.

Yet it is rare to find extended periods of time wherein one of the three kinds of war is not taking place in the world at large, or even within an extended sphere of influence. Why? Because such extended periods are--like it or not--a consequence of power being disproportionately held by one state, as compared to many others.

The first such period one is liable to think of is the so-called Pax Romana, a two hundred year period of relative peace throughout the Mediterranean world, extending well into Western Europe as well, beginning during the reign of Augustus and ending with the death of Marcus Aurelius. The peace of this period--which led to increased economic activity, population growth, a growth in the arts, and the like--was most definitely an enforced peace, however. It depended on the threat of Rome's military might. And in that regard, there was violence, as Augustus and his successors were quick to squash any unrest or even potential unrest.

After the Pax Romana came to an end (which was followed by the collapse of the Roman Empire, eventually), there were various other periods of extended peace in more limited regions of the world (and to be fair, at the same time as the Pax Romana, there was also the Pax Sinica in China). But for many centuries, none were so extensive as the Pax Romana.

Then came the Pax Britannica of the nineteenth century. For about one hundred years (1814-1914), the British Empire was the unquestioned Sovereign of the Seas, as its navy was without equal in all of the world. As such, it was largely in control of most international commerce. Built on the twin pillars of British colonialism and the Industrial Revolution, the Pax Britannica was a period that saw the accumulation of massive wealth by both the Crown and private citizens in Great Britain. Many thought it would last forever, if not longer, and saw it as evidence of British superiority in all things. Almost everyone in the West is familiar with the period, thanks to its glorification in literature and the arts. Perhaps the most memorable summary of this is found in a song from Mary Poppins, of all things:

A portion of the lyrics:
It's grand to be an Englishman in 1910
King Edward's on the throne
It's the Age of Men
I'm the lord of my castle
The sovereign, the liege
I treat my subjects, servants, children, wife
With a firm but gentle hand
Noblesse oblige...

Friday, August 23, 2013

How to improve education: ignore the data, plod forward

Big problems require big solutions. Or at least that's what the fans of big government are always saying, always insisting. And the corollary: big solutions cost big money.

This mindset has dominated the corridors of power throughout the land for decades now. Since the Great Depression, really, when the mantra was put to the test. And for many decades after that, most assumed it had been proven true, that FDR's responses--huge, costly responses--had effectively saved the nation (never mind that even those making such arguments are quick to offer the World War II caveat). More recent scholarship--roundly ignored by those with vested interests in maintaining and growing a large central government--suggests that FDR's responses may have actually prolonged the Great Depression.

But that is neither here nor there. The point is, people simply assume a big problem must require a big solution because they believe such an assumption logically follows or is just common sense. In fact, it's very much a logical fallacy, the fallacy of identity.

In historical scholarship, this fallacy rears its ugly head quite frequently. David Hackett Fischer describes it thusly:
The fallacy of identity is the assumption that a cause must somehow resemble its effects... 
A more common form of the fallacy of identity is the idea that big effects must have big causes, or that big events must have big consequences...
Historians trying to understand apparently sudden collapses of empires--like Rome--often fall prey to this fallacy, as do those historians engaged in economic history, wherein any large-scale economic episode is assumed to have an equally large-scale economic trigger or consequence.

Outside of history proper, the fallacy appears in the realm of policy--public and private--in its solution-problem form. For instance, many businesses facing severe problems tend to approach things the same way: by assuming the only way out of a huge mess is with a huge solution, when the truth is that such a solution may only make matters worse. Oftentimes, the mess is a consequence of a series of small things--which can lead essentially to feedback loops, but that is another discussion--that could be corrected or changed with ease. But making suggestions like this doesn't tend to impress anyone, by and large. It's easier to sell the "big solution" because people are quick to believe such is required.

As is the case with the American healthcare system. There are lots of small things that have contributed to the dramatically rising costs in this industry, but almost no one in power wants to adopt a piecemeal approach to improving the system. Instead, they want it all at once, a big solution to a big problem. The consequences? Decades of fiddling, of trying to get such a solution by the only means possible: a huge federal program. And now we have one, which at best looks like it's not going to solve anything. At worst, it may make the problem even bigger (which of course will necessitate an even bigger and more costly solution down the road).

The education system is in similar straits. Recognition of the failings in K-12 schooling has, at various moments, spawned various big solutions from the Federal government (even though it is not really the job of the Federal government to fund local schools). Several years ago, Andrew Coulson of the Cato Institute--currently the director of Cato's Center for Educational Freedom--gave a presentation before the House Committee on Education & the Workforce. Here it is.

Thursday, August 22, 2013

Organizing For Action: where is the outrage?

In my previous bit, I discussed President Obama's latest (and quite silly) initiative in the field of college education costs. And lo and behold, when I checked my (e-)mail box today, I found this lovely message from none other than the President, himself:
Robert --

Michelle and I wouldn't be in the White House today if it weren't for our college educations.

It wasn't cheap. We didn't finish paying off our student loans until about nine years ago.

That's why it's been a personal mission of mine to make higher education more affordable for more Americans -- and starting today, I'm hitting the road to talk about real reforms to fundamentally rethink how we pay for college in this country.

I'm asking you to speak out as well.

Stand with me today -- tell Congress you support real action to make college more affordable for American families.
Right now, the average student who takes out loans to pay for school graduates with more than $26,000 in debt. Something's got to change -- it's not enough just to tinker around the edges. We've got to shake up the current system.

My plan won't be popular with everybody, especially those who profit from the way things are. But we owe it to our students to make sure that our colleges are working for them.

While we'll need Congress' help to get some of this done, my administration will continue to do what we can to make sure quality, affordable higher education is in reach for millions more young Americans.

So far, we've taken some good steps forward. We've published college scorecards to ensure that families are getting the best information as they pick a school, doubled funding for Pell grants, and established a college tax credit. And thanks to the income-based repayment program, which caps student loan payments based on new graduates' incomes, 1.6 million young Americans can keep more money in their pockets.

But there's much more we can and should do -- this is key to creating a better bargain for the middle class.

That's something I've talked a lot about -- every day, I think about what I can do to live up to it.

That's why I'm calling on Congress to tackle rising tuition costs and pass reforms, so families can get a better bargain when it comes to getting a world-class education.

I'm counting on OFA supporters to be part of this fight. Not much gets done in Washington without the voices of people like you.

Add your name:


The return of NCLB: No College Left Behind

We have a problem in the field of higher education. And it's a problem that's existed for decades now: costs for attending college have skyrocketed, rising at a far greater rate than pretty much any other metric one might think of, including healthcare. And frankly, there are many, many people who simply don't care about this problem, who are actively ignoring it or even helping to keep it going. For the most part, such people are in government--elected officials and bureaucrats, both--or in academia, itself. There are also those in the financial industry who effectively bank on these rising costs as a means of selling products, from loans to life insurance.

And pretty much every couple of years, this issue makes a bold appearance on the national stage, as one politician or another vows to "do something" about these rising costs. So far, nothing.

Enter President Obama. Today, he announced a new initiative that would supposedly make college more affordable. Essentially, the President is proposing the creation of a new ratings system for colleges--a federally controlled ratings system--that would be used to determine how much federal aid schools would be entitled to receive. Not all of the details are available yet, but here is a brief summary:
A draft of the proposal, obtained by The New York Times and likely to cause some consternation among colleges, shows a plan to rate colleges before the 2015 school year based on measures like tuition, graduation rates, debt and earnings of graduates, and the percentage of lower-income students who attend. The ratings would compare colleges against their peer institutions. If the plan can win Congressional approval, the idea is to base federal financial aid to students attending the colleges partly on those rankings.

“All the things we’re measuring are important for students choosing a college,” a senior administration official said. “It’s important to us that colleges offer good value for their tuition dollars, and that higher education offer families a degree of security so students aren’t left with debt they can’t pay back.”

Mr. Obama hopes that starting in 2018, the ratings would be tied to financial aid, so that students at highly rated colleges might get larger federal grants and more affordable loans. But that would require new legislation.
So, the basic idea is to rate colleges based on the above metrics and use those ratings as a basis for doling out federal monies. Sound familiar? President George W. Bush at the signing of the NCLB Act in 2002:
First principle is accountability. Every school has a job to do. And that's to teach the basics and teach them well. If we want to make sure no child is left behind, every child must learn to read. And every child must learn to add and subtract. (Applause.) So in return for federal dollars, we are asking states to design accountability systems to show parents and teachers whether or not children can read and write and add and subtract in grades three through eight.
The breakdown of this "accountability" issue, via the Department of Education:
The NCLB Act is designed to help all students meet high academic standards by requiring that states create annual assessments that measure what children know and can do in reading and math in grades 3 through 8. These tests, based on challenging state standards, will allow parents, educators, administrators, policymakers, and the general public to track the performance of every school in the nation. Data will be disaggregated for students by poverty levels, race, ethnicities, disabilities, and limited English proficiencies to ensure that no child--regardless of his or her background--is left behind. The federal government will provide assistance to help states design and administer these tests. States also must report on school safety on a school-by-school basis. 
Annual school "report cards" will provide comparative information on the quality of schools. By doing so, they will empower parents to make more informed choices about their children's educations. These report cards will show not only how well students are doing on meeting standards but also the progress that disaggregated groups are making in closing achievement gaps. 
Districts and schools that do not make sufficient yearly progress toward state proficiency goals for their students first will be targeted for assistance and then be subject to corrective action and ultimately restructuring. Schools that meet or exceed objectives will be eligible for "academic achievement awards."
Written mostly by Senator Kennedy and championed by pols from both sides of the aisle, NCLB has been roundly criticized since its passage by pretty much everyone. Why? Because it hasn't worked. It's failed to deliver the promised improvements and has instead created a kind of educational gridlock in schools across the nation, so much so that many States have asked for (and received) waivers from the requirements of the program.

Do we learn nothing from the past? Apparently not.

Wednesday, August 21, 2013

The Bill of Rights was a bad idea

Alexander Hamilton, quoted in my previous piece, from Federalist #84:
I go further, and affirm that bills of rights, in the sense and to the extent in which they are contended for, are not only unnecessary in the proposed Constitution, but would even be dangerous. They would contain various exceptions to powers not granted; and, on this very account, would afford a colorable pretext to claim more than were granted. For why declare that things shall not be done which there is no power to do? Why, for instance, should it be said that the liberty of the press shall not be restrained, when no power is given by which restrictions may be imposed? I will not contend that such a provision would confer a regulating power; but it is evident that it would furnish, to men disposed to usurp, a plausible pretense for claiming that power. They might urge with a semblance of reason, that the Constitution ought not to be charged with the absurdity of providing against the abuse of an authority which was not given, and that the provision against restraining the liberty of the press afforded a clear implication, that a power to prescribe proper regulations concerning it was intended to be vested in the national government.
Hamilton--writing as Publius of course--wrote this in July of 1788 as a part of the continued effort by himself, Madison, and Jay to increase popular support for the newly drafted Constitution via a series of articles now collectively called the Federalist Papers. Those ardently opposed to the new Constitution, the so-called Antifederalists, were simultaneously engaged in a similar campaign of article, letter, and pamphlet writing. And the lack of a "Bill of Rights" in the Constitution was a major bone of contention for this latter group.

Following the publication of the above piece, the Antifederalists responded with a point by point rebuttal in "On the lack of a bill of rights" (now called Antifederalist #84), published under the pseudonym Brutus, but likely authored by Robert Yates, a delegate to the New York State ratifying convention, justice of the New York State Supreme Court, and future chief justice of the same. I mention his background to indicate the intellectual weight of the arguments; these were not small thinkers and this was a glorious time for a spirited exchange of ideas on the nature of government and law.

Yates (Brutus) takes Hamilton's argument on by citing specific examples of rights and reaching the following conclusions:
These provisions are as necessary under the general government as under that of the individual States; for the power of the former is as complete to the purpose of requiring bail, imposing fines, inflicting punishments, granting search warrants, and seizing persons, papers, or property, in certain cases, as the other.

For the purpose of securing the property of the citizens, it is declared by all the States, "that in all controversies at law, respecting property, the ancient mode of trial by jury is one of the best securities of the rights of the people, and ought to remain sacred and inviolable."

Does not the same necessity exist of reserving this right under their national compact, as in that of the States? Yet nothing is said respecting it. In the bills of rights of the States it is declared, that a well regulated militia is the proper and natural defense of a free government; that as standing armies in time of peace are dangerous, they are not to be kept up, and that the military should be kept under strict subordination to, and controlled by, the civil power.

The same security is as necessary in this Constitution, and much more so; for the general government will have the sole power to raise and to pay armies, and are under no control in the exercise of it; yet nothing of this is to be found in this new system.
Thus, he finds Hamilton's argument to be without merit, suggesting that the lack of an enumeration of rights is not only a fatal flaw in the Constitution but even represents a curtailment of rights already mandated in various State constitutions. In other words, he argues that failing to have a bill of rights would effectively negate rights already granted by the States:

Monday, August 19, 2013

Media shield laws, special privileges, and an expansive government

In the wake of recent events involving the Justice Department and its use of secret subpoenas to investigate certain reporters, Senators Chuck Schumer and Lindsey Graham have re-introduced legislation to establish a Federal media shield law, supposedly at the behest of the Administration. The bill, S.987, would make it more difficult for federal agencies to investigate reporters who are or have been involved with the leaking of information from inside sources.

Interestingly enough, Justice is fully behind the creation of such legislation, even though it has supposedly established new internal guidelines to limit its own activities in this regard. In essence, Justice wants Congress to pass such legislation to protect the department from itself, it would seem, as its own rules are somehow insufficient:
He [Holder] said the new guidelines — under which, most significantly, the records of a journalist will only be collected if that person is the focus of a criminal investigation — will make “a meaningful difference,” with the Justice Department effectively forgoing the opportunity to use search warrants to obtain journalists’ emails or other work product, as long as the reporters are engaged in routine newsgathering activities. 
But, Holder added, Congress needs to take further action by passing a media shield law, an idea the president also supports.
Now, the reality is that reporters are not special; their activities--when it comes to investigating or disseminating--should not provide them with special protections by virtue of their occupations, but neither should those activities allow them to be unduly targeted by government agencies. They deserve the exact same protections in this regard as everyone else. The much-ballyhooed "freedom of the press" established by the First Amendment was never intended to allow the infringement of rights for the "non-press," which once again hearkens back to Hamilton's warnings in Federalist #84 (my boldface):
I go further, and affirm that bills of rights, in the sense and to the extent in which they are contended for, are not only unnecessary in the proposed Constitution, but would even be dangerous. They would contain various exceptions to powers not granted; and, on this very account, would afford a colorable pretext to claim more than were granted. For why declare that things shall not be done which there is no power to do? Why, for instance, should it be said that the liberty of the press shall not be restrained, when no power is given by which restrictions may be imposed? I will not contend that such a provision would confer a regulating power; but it is evident that it would furnish, to men disposed to usurp, a plausible pretense for claiming that power. They might urge with a semblance of reason, that the Constitution ought not to be charged with the absurdity of providing against the abuse of an authority which was not given, and that the provision against restraining the liberty of the press afforded a clear implication, that a power to prescribe proper regulations concerning it was intended to be vested in the national government.
And indeed, as if on cue, Dianne Feinstein comes forward with an amendment to the shield law legislation that does exactly what Hamilton anticipated. Concerned that the protections of this legislation might extend too far, Feinstein has sought to strictly limit who would be protected by explicitly defining who qualifies as a "journalist":
(5) JOURNALIST.—The term ‘‘journalist’’—

(A) means a person who—

(i) is, or on the relevant date, was, a salaried employee, independent contractor, or agent of an entity that disseminates news or information by means of newspaper, nonfiction book, wire service, news agency, magazine, news website or other news service distributed digitally, news program, or other periodical, whether in print or electronic format or through television or radio broadcast, multichannel video programming distributor (as such term is defined in section 602(13) of the Communications Act of 1934 (47 U.S.C. 522(13)), or motion picture for public showing;

(ii) with the primary intent to investigate events and procure material in order to disseminate to the public news or information concerning local, national, or international events or other matters of public interest, engages, or as of the relevant date engaged, in the regular gathering, preparation, collection, photographing, recording, writing, editing, reporting or publishing on such matters by—

(I) conducting interviews; (II) making direct observation of events; or (III) collecting, reviewing, or analyzing original writings, statements, communications, reports, memoranda, records, transcripts, documents, photographs, recordings, tapes, materials, data, or other information whether in paper, electronic, or other form;

(iii) had such intent at the inception of the process of gathering the news or information sought; and

(iv) obtained the news or information sought in order to disseminate the news or information to the public;
The definition continues in order to include editors, publishers, and "student journalists" under the umbrella of protection, but then proceeds to exclude specific people from it, including terrorist-types, agents of foreign powers, and--most significantly--this group:
(D) does not include any person—

(i) whose principal function, as demonstrated by the totality of such person’s work, is to publish primary source documents that have been disclosed to such person without authorization;
Quite clearly, this is a blatant attempt to exclude people like Snowden and contributors to groups like Wikileaks.

Feinstein's amendment is being roundly criticized for establishing this definition of who exactly would qualify for the protections ensconced in the shield law legislation mostly because of this last bit and because the definition would also exclude much of the blogosphere, the realm of "amateur" journalists as it were. But such criticism is ultimately misguided, for it is based on the erroneous idea that the definition of who is a journalist needs to be expanded for the purposes of enlarging the coverage of such shield laws. The amendment is still wrong, no doubt, but so is the initial legislation.

Understand that for Hamilton--for Madison and the other Federalists (and anti-Federalists, for that matter)--the idea of "the press" encompassed all such activity, no matter who was undertaking it or why; there was no line between "professional" and "amateur" in this regard. Such activities are still subject to the laws, however. Libel and slander can still be prosecuted, along with treason and a host of other things. Thus, no special protections were/are needed for the theoretical "press" because it was/is subject to the same laws--proceeding from a limited government--as was/is everyone else.

If it is to be argued that some sort of press shield law is an absolute necessity now, it can only be because the government is capable of exercising power it was never supposed to have. The fix, therefore, is not the creation of new protections--much less the creation of a special class of citizens--but rather the scaling back of government powers.

Cheers, all.

Saturday, August 17, 2013

Missing the point on NSA violations

The latest Snowden revelations are causing something of a firestorm in the media and in the halls of Congress. For those unaware, WaPo broke the story a couple of days ago. Documents released by Snowden include an internal audit of the NSA indicating that it broke the rules pertaining to data collection and storage thousands of times in a single twelve-month period:
The NSA audit obtained by The Post, dated May 2012, counted 2,776 incidents in the preceding 12 months of unauthorized collection, storage, access to or distribution of legally protected communications. Most were unintended. Many involved failures of due diligence or violations of standard operating procedure. The most serious incidents included a violation of a court order and unauthorized use of data about more than 3,000 Americans and green-card holders.
Since this story came out, there have been two basic sorts of reactions: on the one hand, there are people--mostly on the Right--up in arms over this long train of abuses. On the other hand, there are people--mostly on the Left--seeking to minimize these numbers by pointing out how most of the incidents are just simple errors with nothing nefarious behind them. The NSA, for its part, is pushing back hard:
The official, John DeLong, the N.S.A. director of compliance, said that the number of mistakes by the agency was extremely low compared with its overall activities. The report showed about 100 errors by analysts in making queries of databases of already-collected communications data; by comparison, he said, the agency performs about 20 million such queries each month.

Mr. DeLong, speaking to reporters on a conference call, also argued that the overwhelming majority of the violations were unintentional human or technical errors and that the existence of the report showed that the agency’s efforts to detect and correct violations of the rules were robust. He said the number of willful errors was “minuscule,” involving a “couple over the past decade.”
The NSA's excuses notwithstanding, both its defenders and detractors are largely missing the point of all of this, of what this internal audit was really all about. It was undertaken to determine "compliance," to find out when specific actions by the NSA were outside the rules established for compliance with the various laws--including FISA, the Foreign Intelligence Surveillance Act--that impact its operations. Why? So such failures could be made public? No, of course not. It was done--is routinely done, I suspect--in order to tweak operations so they are no longer out of compliance. There was no thought as to whether these operations cross any lines, whether they violate either the letter or the sense of laws like FISA; the only point was to identify compliance failures in order to protect the agency.

Friday, August 16, 2013

Obama ends the faux "signing statement" controversy by becoming a dictator

Okay, the title is somewhat hyperbolic. Obama is not actually a dictator (as much as he may imagine otherwise). But he did put an end--quite unintentionally--to the entirely manufactured signing statement controversy. Remember that? It actually garnered Charlie Savage a Pulitzer.

A brief recap: in 2006, Charlie Savage of the Boston Globe "discovered" something: President Bush was issuing signing statements about various pieces of legislation he had signed into law. Past Presidents had done the same. Signing statements are not actual legal instruments, they are not actually a part of the legislation they reference, and they--in and of themselves--do not restrict future Presidents (or Congresses) in any way, whatsoever. More often than not, signing statements are nothing more than commentary on a piece of legislation.

But on occasion, signing statements also indicate a position on a given portion(s) of a bill. In fact, a President might (as a number of them have, going back to FDR) indicate an unwillingness to enforce a particular provision of a bill being signed into law on Constitutional grounds. Such a statement would go something like this: "I'm signing this bill into law because it is needed, however I believe section xyz contains unconstitutional requirements so I will not enforce that section and Justice will not defend it in court if it is challenged."

Got that? Such a statement is a warning, with regard to a potential future, nothing more. Many people--starting with those in our erstwhile Fourth Estate--were and are unable to process this. They believe such a statement is, itself, akin to an Executive Order, insofar as it is not just a warning but an actual addition to the legislation in question that carries real legal weight as a Presidential "decree." It's not. I don't know how many hours I've spent arguing this point with various people--quite a few--but some just refuse to understand (or are incapable of understanding). They continue to insist signing statements are exactly what they are not. Again, signing statements carry no legal weight whatsoever (the SCOTUS noted this back in the days of FDR, and has never treated a signing statement as anything other than an opinion). At best, they amount to fair warning, nothing more. Thus my conclusion in the linked-to piece:
Still, the Signing Statement controversies are largely manufactured. Aside from the statements having no legal authority, there is another matter: laws stay on the books until repealed or overridden by new laws. Not so for Signing Statements. They pass into history as quickly as the President that made them. Future Presidents and administrations are not constrained in any way by the Signing Statements of previous Presidents. The statements are footnotes, nothing more.
Since I was aware of the truth of the matter when it came to signing statements, it was high comedy--to me--when President Obama began to utilize them in the same way President Bush had after explicitly declaring he would not do so. But this was just a sideshow--Obama's hypocrisy on the matter--because I knew the statements carried no legal weight. I knew that a President could take an action outlined in a particular signing statement even if he had never issued such a statement to begin with. Signing statements are not necessary, at all. That's the whole point. If a President deems a particular portion of a law he has signed to be unconstitutional or unenforceable, he can refuse to follow it or instruct affected agencies not to follow it in the moment, when the time actually comes for its application.

Thursday, August 15, 2013

A million little Oprahs

In a recent piece--The post-Zimmerman world--I noted how "outrage" over the verdict in the Zimmerman trial was less about race per se and more about a desire to have it be about race:
The outrage over this particular decision is not a product of injustice due to race, it's a product of people in the media wanting it to be about race, assuming it therefore is about race, and supposing injustice is a given if the verdict appears to favor one race over another. And unfortunately, the general population is all too willing to follow the music of the Pied Pipers, to accept the tale being told by the media and to jump on the outrage train.
And really, the "outrage train" left the station long before the actual trial took place. Once the story of the actual incident became national news, people immediately began to climb aboard, pushing and shoving others who were in their way in an effort to make sure they had a good seat. The mainstream media was a leader in this regard, of course, but it had plenty of help. Blogs, messageboards, and social media sites were positively brimming with people eager to show their outrage, to prove to others how deeply they truly cared about the injustice of supposed racism.

Such people don't need much to stir them to action on the internet (or in real life). Witness Oprah Winfrey's much-publicized flap with a high-end boutique in Switzerland. Everyone knows the story by now: the queen of media moguls was in Switzerland for the wedding of a friend (Tina Turner) and went out to do some shopping. At the Trois Pommes Boutique in Zurich she claims to have been a victim of racism when a sales person supposedly refused to show her a handbag because it was too pricey for Oprah (the assumption being that Oprah's skin color was the basis of this determination: she is black and therefore poor, or at least not wealthy).

After Oprah went public with the story, the predictable outrage train quickly followed. The Swiss tourism board--cowed by the negative press--even went as far as to offer Oprah a public apology. Everyone--well, almost everyone--took it as a given that Oprah's tale was absolutely true. Many immediately began to offer up comparable tales that they either personally experienced or witnessed. And let's be clear on one point: this kind of stuff does happen; people make assumptions about others because of things like race and then pigeonhole them accordingly.

But did it happen to Oprah, specifically, in the Zurich boutique? Not according to the owner of the store nor to the sales person who waited on Oprah:

Saturday, August 10, 2013

Myriad, manifold, and the Mountains of the Moon

In my previous piece, I used the expression "myriad and manifold" with reference to the reasons behind the dire situation the city of Detroit is currently in. To wit (my boldface):
And the reasons for the ramming are myriad and manifold, whose totality had led to a dire situation in the once-great city wherein the prospect of bankruptcy became almost a welcome relief for those who understood the challenges ahead.
After posting the piece, I got to thinking about the phrase, wondering if it might confuse people who were unfamiliar with it and/or if it might suggest a kind of intellectual snobbery on my part for using it (for the record, I'm fine with the latter, because I am an intellectual snob; know thyself). I think it's a great phrase, really, for it's perfectly apt for conveying an idea: a great number of things of various sorts or natures, yet all still related in some way or another.

"Myriad" comes from the Greek word myrias, meaning ten thousand. That specific meaning was carried forward in many other languages that borrowed from the Greek. Some early metric systems used "myria" as a prefix in the same way as "cent" (one hundred) and "kilo" (one thousand). Thus, a myriameter would be equivalent to 10,000 meters. Today, myriad means just a very large number, as in "a myriad of butterflies."

"Manifold," in contrast, is a far more recent word (by at least one thousand years). It comes from the Old English word manigfeald, meaning simply "many folds." Similar words existed in both German and Dutch. But the sense of the word is not just many, but also of diversity. For instance, one might refer to "a manifold of causes for the Great Depression." It's not the quantity that is central, but rather the very different natures of such causes.

Thus again, the expression "myriad and manifold" means a great number of things, all of which are different from each other, yet are related to the subject of the phrase. Using my example, the phrase means that Detroit is in its current situation for many, many reasons, reasons that are very dissimilar, yet are all still contributory to the whole.

But given the very different etymologies of the two words, I still wondered how and when the phrase was coined. As is the case with many great turns of phrase, I assumed the source was probably the Bible. "Myriad," however, doesn't make any appearances in the KJV. In the KJV--which was itself based on older translations going back to the Tyndale Bible, translated from the original Greek and Hebrew--the Greek myrias is translated literally into "ten thousand." For instance, this is Deuteronomy 32:30 in the KJV:
How should one chase a thousand, and two put ten thousand to flight, except their Rock had sold them, and the LORD had shut them up?
And this is the same verse in the Tyndale Bible:
Howe it cometh that one shall chace a thousande, and two putt ten thousande off them to flyghte? excepte theire rocke had solde them, and because the Lorde had delyuered them.
However, in the YLT--Young's Literal Translation of the Bible, published in 1862--"myriad" reappears, some twelve times. The same verse in the YLT:
How doth one pursue a thousand, And two cause a myriad to flee! If not -- that their rock hath sold them, And Jehovah hath shut them up?
Moreover, professors of religion--and other scholars--along with learned priests and ministers might very well slip into the original Greek, when speaking about the Bible or the period in general. Because again, "myriad" is a literary word at the very least.

Thursday, August 8, 2013

Rescuing Detroit with a socialist fantasyland

John Nichols--author, long-time political blogger, and noted far-left ideologue--has gotten it into his head that what is happening in Detroit represents some sort of slap in the face to the idea of democracy. He's been harping on this point for months. Back in March of this year, he wrote:
Detroit is up against plenty of threats. But the most pressing one today is political.

If Michigan Governor Rick Snyder gets his way, Detroit runs the risk of losing democracy.

Snyder, a Republican who led the charge for Michigan’s enactment of an anti-labor “right-to-work” law last year, is targeting Detroit for a state takeover that will disempower the elected mayor and city council and give authority over the city’s finances, service delivery and direction to an appointed “emergency manager.”
And to his credit--from the standpoint of consistency--he has continued to beat the same drum, even as Detroit came under the control of emergency manager Kevyn Orr. His latest column on the same topic signals the supposed end of democracy in Detroit:
Some truths are self-evident across time.

That is surely the case with Lincoln’s observation that “we cannot have free government without elections.”

It was absolutely true 150 years ago that the postponing of an election would have been a defeat for the forces of democracy.

It is absolutely true today that the sapping of meaning from elections—by denying successful candidates the authority to govern—represents another form of the same defeat.
Now, it is absolutely true that elected officials in Detroit will have severely (to put it mildly) curtailed authority and duties in the days and months ahead because of Orr's State-mandated powers. And it's also true that this situation was rammed down the throats of the elected officials and citizens of the city. But let's be clear about this: the people doing the ramming were the elected representatives of the entire state of Michigan. And the reasons for the ramming are myriad and manifold, whose totality had led to a dire situation in the once-great city wherein the prospect of bankruptcy became almost a welcome relief for those who understood the challenges ahead.

From the standpoint of politics, Detroit represents a major stronghold of the Democratic Party, owing to the history of organized labor and the role played therein by the automotive industry and its many related industries. In his earlier piece, Nichols actually makes the point of just how Democratic (party-wise, to be clear) Detroit still is:
Snyder is preparing to do by fiat what he and his political allies could not do at the polls: take charge of local government in Michigan’s largest city. In the 2012 election, Democrat Barack Obama received 98 percent of the almost 300,000 votes cast in Detroit, while Republican Mitt Romney took just over 2 percent. No Republican contender for federal, state or local office won more than 6 percent of the vote in the city.
Nichols cites these stats as evidence for the undermining of democracy in Detroit, supposing that the ideological certitude of Detroit voters somehow represents a right to be governed by elected officials drawn from the same political party responsible for creating the mess Detroit is currently in. And Nichols imagines that such officials should have the right to address this mess, to somehow solve it, even as he simultaneously whines about a lack of State and Federal monies for the city, monies that would--as a matter of definition--come largely from taxpayers who are not citizens of the city.

Monday, August 5, 2013

Hillary: the Movie, parts II and III...or how the 4th Estate deludes itself into thinking it's still special

At the end of last month, both CNN and NBC had "big news": they both announced plans to air treatments of Hillary Clinton's life story (or at least some portion thereof). NBC's news was probably a little bigger, given that the network is going with a full mini-series (CNN is just doing a movie) and has already cast Diane Lane in the role of the former Secretary of State/Senator/First Lady. Personally, I don't see it. But that's neither here nor there.

This would be just ho-hum news if it weren't for the fact that everyone and his or her brother--particularly on the Left--is taking it as a given that Clinton will be the Democratic Party's 2016 nominee for President of the United States. True, she hasn't formally announced, but only a fool would question her intent at this point in time. So the question is, should these networks be running such movies/miniseries ahead of the actual election, during what will be--to some degree--campaign season?

The RNC certainly has a very clear opinion on the matter:
RNC Chairman Reince Priebus wrote a letter to the heads of both networks to “express his deep disappointment” in their decision to either air a miniseries in NBC’s case or a movie in CNN’s, writing that the networks are “promoting former Secretary Hillary Clinton ahead of her likely candidacy for the Democratic nomination for president in 2016.” 
“As an American company, you have every right to air programming of your choice,” the letter reads. “But as American citizens, certainly you recognize why many are astounded at your actions, which appear to be a major network’s thinly veiled attempt at putting a thumb on the scales of the 2016 presidential election.” 
The RNC says that the programs could not only hurt the 2016 Republican contenders but the 2016 Democratic candidates too.
There's no question that Clinton's life--even a small portion of it--has the makings of an interesting movie (though it was already turned into an incredibly boring book). And there's no question that the lives of other political leaders have received such treatments, sometimes even before a future election. Hell, CNN aired a documentary on Mitt Romney, Romney Revealed: Family, Faith & The Road To Power, just before the 2012 Elections. Of course, it was a documentary, was only an hour long, and few thought it benefited Romney in the least. The assumption here on the part of the RNC and others--with regard to the Clinton films--is that both will be panegyrics, both will portray Clinton in a wholly positive, if not openly heroic, light.

So again, allowing that this is exactly the case, is there something wrong with that?

My answer is simple: no. I mean, the offering up of both panegyrics and hit pieces by major media organizations is, to my mind, distasteful and low-rent, but I quite obviously have different (okay, superior) moral and ethical standards than the people running CNN, NBC, or any other network. And of course, my little blog is not drawing in millions upon millions of viewers and/or readers; I'm certainly not making money hand over fist from advertisers (but feel free to click on a Google ad and make me a penny, if you so desire).

Having said this, I see no inherent "wrongness" in the airing of a Hillary Clinton panegyric (let's face it, we all know this is what the NBC miniseries will be) just prior to a huge national election in which she will be a candidate. Yet I can't help but think of another movie about Ms. Clinton, one produced in 2007 and scheduled to air in 2008, just prior to the beginning of the primary season for the 2008 Election. The name of the film? Hillary: The Movie. The group responsible for producing it? The conservative NPO Citizens United. We all remember that name, don't we?

That raises the question, did the Citizens United decision pave the way for the upcoming Clinton movies on CNN and NBC? Despite how tempting it might be to jump to such a conclusion, the fact of the matter is that Federal election laws prior to the decision would not have prevented the making of these movies because of the special status enjoyed by media companies. Under the twin umbrellas of "news" and "commercial activity" they were--and still are--allowed to produce and broadcast pretty much anything they desire, irrespective of obvious partisan leanings. This was the basis of the FEC's refusal to prevent the release of Michael Moore's Fahrenheit 9/11 in 2004:
The complainant alleged that the release and distribution of FAHRENHEIT 9/11 constituted an independent expenditure because the film expressly advocated the defeat of President Bush and that by being fully or partially responsible for the film’s release, Michael Moore and other entities associated with the film made excessive and/or prohibited contributions to unidentified candidates. The Commission found no reason to believe the respondents violated the Act because the film, associated trailers and website represented bona fide commercial activity, not “contributions” or “expenditures” as defined by the Federal Election Campaign Act.
This is a rarely remembered footnote in the Citizens United decision, but it is quite critical. For there was no question that the film Fahrenheit 9/11 was advocating against a particular candidate for office (George W. Bush), just as there was no question that the film Hillary: The Movie was doing the same. Yet somehow the first was legitimate--according to the government--while the second was not. The only significant difference between the two: the first had better funding (and was better made).

Yet following the Citizens United decision, commentator after commentator (along with four clueless members of the Court) told us how the Court's action was opening the floodgates, unleashing a torrent of money into the election process. But the big money was already there, as evidenced by films like Moore's and the two upcoming ones on Hillary Clinton. Meanwhile, a piddly little film by an NPO was theoretically over the line, even though we all know it changed nothing about the 2008 election.

And remember, those commentators telling us why the Court was wrong, why the decision was dangerous, they're all drawing paychecks from big media companies.

Cheers, all.

Saturday, August 3, 2013

Larry Summers in charge of the Fed? Are you kidding me?

Larry Summers hard at work, as usual
With the likelihood of there being a new Chairman of the Federal Reserve in the very near future, everyone is offering opinions on who it should be. But the White House seems inclined to go with either Larry Summers or Janet Yellen, as Ezra Klein reports. Most Democrats in Congress are apparently in favor of the latter, though most commentators seem to feel Larry Summers has the edge, given that he was the President's initial selection to head the National Economic Council, has been Secretary of the Treasury (under Clinton) and chief economist at the World Bank, and has also served as President of Harvard University. Plus, I think there is little question that Summers very badly wants the job; he's likely been campaigning hard for it--behind closed doors--for years.

And he's a smart guy, says everyone in the room, everyone that knows him. Or as Paul Krugman once quipped, "Larry’s extremely smart—ask him and he’ll tell you." And those smarts have obviously earned Summers an impressive resume. So why shouldn't he take over for Bernanke at the Fed? Indeed, the indomitable Steven Rattner has penned an extensive essay trumpeting Summers, his smarts and accomplishments. Let's look at some of the praise Rattner is heaping on Summers:
No one is perfect, but I score Larry’s batting average and qualifications at the top of the heap. There’s that extraordinary intelligence: the most brilliant, most analytical and most surgical brain of anyone I've ever encountered.
Batting average? What's that supposed to mean? His success rate with respect to himself or with respect to the people affected by his decisions? Because those are two very different things. Summers has certainly done well for himself, there's no question about that. But when he's been put in a position to make big decisions, I don't know that his history there is something he really wants highlighted.

From back in 2008--when Summers was in line for the Secretary of the Treasury job and was on the Obama transition team--Robert Scheer, who describes himself as a liberal, summed up some of Larry's past decisions quite nicely in an editorial for SFGate:
But it was Summers who most vehemently pushed for congressional passage of that drastic deregulation measure, the Financial Services Modernization Act, which eliminated the New Deal barriers against mergers of commercial and investment banks as well as insurance companies and stock brokers. Standing at his side as President Bill Clinton signed the legislation, Summers heralded it as "a major step forward to the 21st century" - and what a wonderful century it's proving to be.  
It was also Summers who worked in cahoots with Enron and banking lobbyists, and who backed Republican Sen. Phil Gramm's Commodity Futures Modernization Act, which banned any effective government regulation of the newly unleashed derivatives market. The result was not only a temporary boon to Enron, which soon collapsed under its unbridled greed, but also to the entire Wall Street financial community.
Aside from Chris Dodd, Barney Frank, and Charles Schumer, if there is one person who deserves to shoulder the most blame for the financial crisis of 2007-2008, that person is Larry Summers (Joseph Stiglitz, of course, likes to pretend he saw the warning signs, but he basically fiddled while Summers helped burn things down).

Friday, August 2, 2013

Single motherhood: imagining a war and forsaking the battle

Writers at the Daily Beast, Salon, and elsewhere are up in arms--figuratively, I think--over recent comments made by people like Bill O'Reilly, Don Lemon of CNN, and George Will supposedly "blaming" single mothers for society's ills.

Amanda Marcotte at the Daily Beast:
It appears that it’s open season on single mothers again. Granted, open season is called on single mothers a few times a year and can be spurred by anything from a politician trying to punt a question about gun control to polling data showing women are frequently breadwinners for their families, so this isn’t unusual. But this round is particularly aggressive, with George Will actually blaming single mothers for Detroit’s bankruptcy, Bill O’Reilly using the specter of single motherhood to distract from the Trayvon Martin case (even though George Zimmerman did not check Martin’s parentage before choosing to gun down the unarmed young man), and even CNN’s Don Lemon going on a moralizing scold that assumes that women become single mothers for no other reason than to thumb their nose at propriety.
Stacia Brown at Salon:
We saw it come up in the last presidential election debates, when Mitt Romney insinuated that single mothers were to blame for gun violence. President Obama often hints at it in his own “be a father to your child” speeches, which tend to assume that unmarried and low-income fathers are entirely uninvolved parents. And just this weekend, conservative columnist George Will pointed to single motherhood as a large part of what he called “Detroit’s cultural collapse.” Despite being the laziest explanation for the majority of society’s ills, single motherhood is not the bogeyman without which we’d all live morally spotless, fiscally responsible, racism-free, crime-reduced lives. And marital status is not an automatic or accurate indicator of father involvement.
Both writers, after mischaracterizing the remarks of others, engage in attempts to change the parameters of the discussion, to pull the focus away from out-of-wedlock births and the consequences of increases in their numbers. Marcotte laments the supposed lack of attention given to divorced single women with children, while Brown argues that there is more involvement from fathers (of children born out of wedlock) than most tend to assume. But both defenses respond to an attack that is not actually taking place.

Even within the framework of admonishments from people like Lemon and Obama to the black community on the rising number of single mothers, no one is actually arguing that single motherhood is the principal cause of various problems, that it is to blame. George Will, for instance, is pointing it out as a symptom of problems afflicting cities like Detroit. And indeed, this is hardly earth-shattering stuff, when it comes to the "black community" or any other community. It is a trend--single motherhood--that has accelerated in recent decades, particularly in the lower tiers of the socio-economic ladder.

Last year, I broached the subject a number of times, mostly in reference to Charles Murray's recent book Coming Apart. Pundits are quick to associate this trend with the black community alone, but as Murray demonstrated, the issue of race can be wholly subtracted from the equation without changing the story one iota. In general, marriage rates are down across the board in the country for the last fifty years, but they have disproportionately decreased in middle to low income communities. And a drop in industriousness follows that same pattern, as does an increase in crime and a decrease in religiosity (the last is particularly telling: people assume the poor are still more religious than the upper middle class and the wealthy, but this is no longer true).

Thursday, August 1, 2013

An Obamacare defense by people who don't understand the concept of "evidence"

Marilyn Tavenner
Today at a meeting of the House Energy and Commerce Committee, the head of the Centers for Medicare & Medicaid Services--one Marilyn Tavenner--informed the committee members that businesses are not cutting hours of employees in order to avoid requirements of the Patient Protection and Affordable Care Act (Obamacare). She allowed that there could be "isolated incidents" of this happening (hour-cutting), but that it was nowhere close to being a common thing.

For those unfamiliar with this issue, the situation is simple: the legislation mandates that businesses employing fifty or more full-time workers must offer an employer-sponsored health insurance plan. It also dictates that employees working thirty or more hours per week are considered to be full-time employees. So, a reasonable choice for some businesses--in response to being forced into the added expense of a health insurance plan (or having to pay a per-employee penalty)--is to cut back the hours of enough employees in order to avoid the requirement. As I said, simple stuff. And make no mistake, this is happening and it will continue to happen. A number of companies have already come forward to admit to such behavior.

It's an easy thing to process, to understand. Such behavior has always characterized decisions of management, particularly for retail businesses. Ages ago, when I was young and working for a grocery chain, the full-time/part-time ratio was always carefully monitored because company policy provided benefits for full-time employees and such benefits cost money. Because there was a high turnover rate in employees, the company could not afford--financially--to maintain profitability if the ratio was too high (more full-time). So, as a manager, one of my duties was to keep part-time employees part-time, by not exceeding an average of thirty hours per week per quarter. This worked for both employer and employee in such industries, because those employees who wanted full-time work and the benefits that went with it had to earn it, they had to prove themselves. Those employees who were half-assed about their work had no hope in this regard (at least when I was in charge). And that was fair (it still is), in my opinion.

But despite the logical, practical, and empirical certainty of this happening (hour-cutting in response to Obamacare), there are still those who want to argue that it won't. This piece at ThinkProgress cites a study by CEPR (a progressive and of course biased think-tank) that purports to show very few companies making such choices. I don't need to read it all to know it's nonsense. Here's the money quote from the study's conclusion that proves it's nonsense:
While there may certainly be instances of individual employers carrying through with threats to reduce their employees’ hours to below 30 to avoid the sanctions in the ACA, the numbers are too small to show up in the data.
Of course the numbers are going to be "too small" to show up in a survey! Why? Because the hour-cutting won't be of employees across the board; it will be targeted to keep the number of full-time employees below the magic number of fifty. Thus, at a given company, the number of employees who have their hours cut could be only a handful. Hell, it could be just one person, who maybe was averaging thirty-two hours a week and thus only needs to shed a bit more than twenty-six hours over a thirteen-week quarter to fall below the thirty-hour average. And such cuts need not show up every single week. The employee could theoretically work those same thirty-two hours for the first ten weeks of a quarter, then work only eighteen hours for the final three weeks: a quarterly average of under twenty-nine hours. Problem solved, consequences averted. These are the kinds of calculations being made. To not understand this reality is to be ignorant of the way businesses operate.
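The arithmetic above is easy to verify. Here's a quick sketch (the employee and the schedule are the illustrative ones from my example, not real payroll data):

```python
# Illustrative check of the hour-cutting arithmetic described above.
# Under the ACA, an employee averaging 30+ hours/week counts as full-time.
QUARTER_WEEKS = 13
FULL_TIME_AVG = 30  # hours/week; at or above this, the employee is full-time

# Hypothetical employee: 32 hours/week for ten weeks, then a targeted cut
# to 18 hours/week for the final three weeks of the quarter.
schedule = [32] * 10 + [18] * 3

total_hours = sum(schedule)            # 320 + 54 = 374 hours
average = total_hours / QUARTER_WEEKS  # roughly 28.8 hours/week

print(f"total={total_hours}, average={average:.1f}")
print("full-time?", average >= FULL_TIME_AVG)  # False: stays part-time
```

Note that nothing changes for ten of the thirteen weeks; the cut is concentrated at the end of the quarter, which is exactly why a week-by-week survey would struggle to see it.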

But beyond that, Tavenner's comments also demonstrate how what constitutes "evidence" for this administration and its defenders changes, based on the politics of the moment. Again, Tavenner claims there are only "isolated incidents" of businesses cutting hours. And again, the CEPR study claims the changes are "too small" to be significant. Let's time-travel back to late February and early March of this year, when the political world was in the grip of Sequester-talk.