Federalist #78 and the Importance of Judicial Precedent


Excerpts from the Federalist Papers #78 (Alexander Hamilton)

The Federalist Papers were a series of 85 essays written by John Jay (5), James Madison (29), and Alexander Hamilton (51) to explain and defend the new Constitution in hopes of securing its ratification (particularly in New York). While not part of the document, they are generally considered one of the most reliable sources of the Framers’ intentions. Hamilton was the original “Federalist” in terms of his commitment to a strong central government and an expansive reading of the Constitution and the powers it grants to the various branches. Unlike Thomas Jefferson, who was primarily concerned with protecting the liberties of individuals, Hamilton’s focus was on strengthening the powers of the federal government sufficiently to ensure its long-term success. And yet, here in Essay #78, he argues that lifetime appointments are essential in the judicial branch in order to ensure attention to precedent and consistent protection of individual liberties from legislative abuse.

WE PROCEED now to an examination of the judiciary department of the proposed government…

{T}he judiciary is beyond comparison the weakest of the three departments of power… {T}hough individual oppression may now and then proceed from the courts of justice, the general liberty of the people can never be endangered from that quarter; I mean so long as the judiciary remains truly distinct from both the legislature and the Executive. For I agree, that “there is no liberty, if the power of judging be not separated from the legislative and executive powers.” … {Since} liberty can have nothing to fear from the judiciary alone, but would have everything to fear from its union with either of the other departments… {and since the judicial branch is the weakest of the three,} nothing can contribute so much to its firmness and independence as permanency in office{. T}his quality may therefore be justly regarded as an indispensable ingredient in its constitution, and, in a great measure, as the citadel of the public justice and the public security.

We might debate whether or not Hamilton was correct to consider the judicial branch the “weakest” of the three, but what’s important here is the idea that the lifetime tenure of justices was intended to provide consistency in the nation’s highest court. Notice also his assumption that one of the primary purposes of the Court is to protect the “general liberty of the people” and act as the “citadel of the public justice and the public security.” While Hamilton was speaking primarily of national government (it would be almost a century before constitutional protections were automatically assumed to apply at the state and local level via the Fourteenth Amendment), this understanding of the judicial branch is antithetical to the idea that “faithfulness” to the Constitution requires stripping away established protections in order to better facilitate state-level abuse of personal liberties.

The complete independence of the courts of justice is peculiarly essential in a limited Constitution. By a limited Constitution, I understand one which contains certain specified exceptions to the legislative authority… Limitations of this kind can be preserved in practice no other way than through the medium of courts of justice, whose duty it must be to declare all acts contrary to the manifest tenor of the Constitution void. Without this, all the reservations of particular rights or privileges would amount to nothing…

There is no position which depends on clearer principles, than that every act of a delegated authority, contrary to the tenor of the commission under which it is exercised, is void. No legislative act, therefore, contrary to the Constitution, can be valid. To deny this, would be to affirm, that the deputy is greater than his principal; that the servant is above his master; that the representatives of the people are superior to the people themselves; that men acting by virtue of powers, may do not only what their powers do not authorize, but what they forbid…

The power of “judicial review” was formally claimed by the Supreme Court in its landmark decision in Marbury v. Madison (1803). The concept, however, was established long before then. One of the primary reasons Jefferson and Madison had so much trouble garnering support for their Virginia and Kentucky Resolutions (1798-1799), which promoted state “nullification” of the Alien and Sedition Acts, was that even state legislatures that didn’t love these statutes deferred to the appropriate branch of government for dealing with such things. In this essay, Hamilton is not suggesting “judicial review” as a potential power of the Supreme Court; he’s explaining and justifying it as something clearly granted under the new Constitution… even if it wasn’t spelled out in exactly those words.

If it be said that the legislative body are themselves the constitutional judges of their own powers, and that the construction they put upon them is conclusive upon the other departments, it may be answered, that this cannot be the natural presumption, where it is not to be collected from any particular provisions in the Constitution. It is not otherwise to be supposed, that the Constitution could intend to enable the representatives of the people to substitute their WILL to that of their constituents. It is far more rational to suppose, that the courts were designed to be an intermediate body between the people and the legislature, in order, among other things, to keep the latter within the limits assigned to their authority.

Until Justice Clarence Thomas and his ilk manage to effectively neuter the Fourteenth Amendment, it’s reasonable to apply this philosophy to state governments as well as the national Congress. The original purpose of the Fourteenth Amendment, after all, was to override “states’ rights” when they violated more fundamental (and more important) natural rights.

The interpretation of the laws is the proper and peculiar province of the courts. A constitution is, in fact, and must be regarded by the judges, as a fundamental law. It therefore belongs to them to ascertain its meaning, as well as the meaning of any particular act proceeding from the legislative body. If there should happen to be an irreconcilable variance between the two, that which has the superior obligation and validity ought, of course, to be preferred; or, in other words, the Constitution ought to be preferred to the statute, the intention of the people to the intention of their agents.

Nor does this conclusion by any means suppose a superiority of the judicial to the legislative power. It only supposes that the power of the people is superior to both; and that where the will of the legislature, declared in its statutes, stands in opposition to that of the people, declared in the Constitution, the judges ought to be governed by the latter rather than the former. They ought to regulate their decisions by the fundamental laws, rather than by those which are not fundamental.

President Andrew Jackson saw himself as defending the “common man” from the corrupted powers of elected legislators. According to Hamilton, however, the primary defense of the people from legislative bodies is the courts. That’s not “judicial activism,” according to one of the strongest proponents of powerful central government in our history – it’s one of the judicial system’s primary functions.

This exercise of judicial discretion, in determining between two contradictory laws, is exemplified in a familiar instance. It not uncommonly happens, that there are two statutes existing at one time, clashing in whole or in part with each other, and neither of them containing any repealing clause or expression. In such a case, it is the province of the courts to liquidate and fix their meaning and operation. So far as they can, by any fair construction, be reconciled to each other, reason and law conspire to dictate that this should be done; where this is impracticable, it becomes a matter of necessity to give effect to one, in exclusion of the other. The rule which has obtained in the courts for determining their relative validity is, that the last in order of time shall be preferred to the first. But this is a mere rule of construction, not derived from any positive law, but from the nature and reason of the thing. It is a rule not enjoined upon the courts by legislative provision, but adopted by themselves, as consonant to truth and propriety, for the direction of their conduct as interpreters of the law. They thought it reasonable, that between the interfering acts of an EQUAL authority, that which was the last indication of its will should have the preference…

What Hamilton is essentially talking about here is stare decisis – the importance of maintaining judicial precedents. When laws (or, say… clauses in the First Amendment) clash or pull against one another, it’s the job of the Supreme Court to figure out the best understanding of those laws and establish this as the correct meaning.

This independence of the judges is equally requisite to guard the Constitution and the rights of individuals from the effects of those ill humors, which the arts of designing men, or the influence of particular conjunctures, sometimes disseminate among the people themselves, and which, though they speedily give place to better information, and more deliberate reflection, have a tendency, in the meantime, to occasion dangerous innovations in the government, and serious oppressions of the minor party in the community.

Hamilton may not have been quite the progressive crusader suggested by his musical, but he’s at least pro-Warren Court here.

It’s worth repeating – a primary duty of the courts is to protect individual liberties (in this case, minority rights specifically) from legislative abuses. That’s not “exceeding” their constitutional role, at least according to the guys who wrote the damn thing.

Surely you can’t get much more “originalist” than that.

Though I trust the friends of the proposed Constitution will never concur with its enemies, in questioning that fundamental principle of republican government, which admits the right of the people to alter or abolish the established Constitution, whenever they find it inconsistent with their happiness, yet it is not to be inferred from this principle, that the representatives of the people, whenever a momentary inclination happens to lay hold of a majority of their constituents, incompatible with the provisions in the existing Constitution, would, on that account, be justifiable in a violation of those provisions; or that the courts would be under a greater obligation to connive at infractions in this shape, than when they had proceeded wholly from the cabals of the representative body. Until the people have, by some solemn and authoritative act, annulled or changed the established form, it is binding upon themselves collectively, as well as individually; and no presumption, or even knowledge, of their sentiments, can warrant their representatives in a departure from it, prior to such an act. But it is easy to see, that it would require an uncommon portion of fortitude in the judges to do their duty as faithful guardians of the Constitution, where legislative invasions of it had been instigated by the major voice of the community.

It’s nice of him to go ahead and validate the January 6th hearings while he’s at it. Alexander “Nostradamus” Hamilton, at your service.

Hamilton continues making his point that lifetime tenure is essential for the judiciary to effectively protect individual liberty against potential abuses by the other two branches (but mostly the legislative). Apparently he doesn’t consider elected representatives to always be the best judges of what the Constitution does and doesn’t protect. Huh.

It turns out there’s an even more important reason for those lifetime appointments – they help protect stare decisis by making justices less likely to overturn established precedents in service of their own ideological whims. At least, that was the idea.

There is yet a further and a weightier reason for the permanency of the judicial offices, which is deducible from the nature of the qualifications they require. It has been frequently remarked, with great propriety, that a voluminous code of laws is one of the inconveniences necessarily connected with the advantages of a free government. To avoid an arbitrary discretion in the courts, it is indispensable that they should be bound down by strict rules and precedents, which serve to define and point out their duty in every particular case that comes before them…

Precedent matters. It’s not inviolable, but it should carry greater weight than “yeah, but we don’t like how the last fifty years or so have gone.” It should certainly trump “you don’t know how long the Federalist Society and rich white evangelicals have been working to reverse course on this stuff!”

Hamilton was concerned that excessive turnover on the bench would produce justices insufficiently schooled in established jurisprudence. He did not account for the possibility that they’d know damn well what’s been said and done before but simply pick and choose selected bits to justify their predetermined outcomes while ignoring context and inevitable impact.

{T}here can be but few men in the society who will have sufficient skill in the laws to qualify them for the stations of judges. And… the number must be still smaller of those who unite the requisite integrity with the requisite knowledge.

You said it, Alexander.

The 1950s (Part Two)

NOTE: Part One of this post can be found here. Both segments are from the rough draft of a book I’m hoping will be called something like “Have To” History: Stuff You Don’t Really Want To Know (But For Some Reason Have To) About The Most Boring Events, People, and Issues in American History.

It’s Moving Day (Rust Belt to Sun Belt)

For more than a century, manufacturing was central to the American economy. While the image of the north as universally industrialized and the south as endless agriculture is far too simplistic, a definable “Manufacturing Belt” was easily traceable from New York through Pennsylvania, Ohio, Indiana, Michigan, and eastern Illinois. Some sources would add St. Louis or other noncontiguous pockets, using the description less as a geographical marker than as an economic indicator – which it was.

Thousands of families throughout the “Manufacturing Belt” relied for generations on the solid blue-collar incomes available there. Workers produced steel, weapons, and automobiles, buoyed by a strong economy and periodic government contracts. Until, one day, they didn’t.

The term “Rust Belt” didn’t take hold until the late 1970s, by which time many factories were closed (or closing) and their structures left to decay. As with the more positive moniker, the term was less about specific location and more about economic changes – changes which took place unevenly and over an extended period. The decline of the “Manufacturing Belt” had been delayed by World War II, during which government defense needs brought a massive infusion of cash and energy to the region. Once peace ruined everything, however, the writing was on the factory wall. The party wasn’t entirely over, but the DJ had switched to slow dances and the host was out of punch.

History teachers like to talk about “push-pull” factors whenever people migrate. There’s usually at least one good reason to leave a place and a different good reason for one’s chosen destination. In the mid-twentieth century, changes in the economy and dramatic technological improvements began chipping away at blue-collar jobs across the “Manufacturing Belt” (aka “Rust Belt”). At the same time, high-tech industries and defense plants were beginning to flourish in parts of the South and along the west coast. The “push” was the loss of opportunity up north; the “pull” was the need for skilled and semi-skilled labor in the south and west.

The migration didn’t happen overnight, and it wasn’t monolithic. A “Second Great Migration” occurred at much the same time, as Black workers left the south in search of greater economic opportunity and less racial oppression. Some headed north, but many headed west in search of the same jobs drawing white laborers from the north. (Side Note: “white” by this time had begun expanding to include descendants of all those different immigrant groups that used to be the primary targets of Anglo violence in the preceding century.) Skilled or semi-skilled workers could find reliable employment and good wages in Los Angeles, Portland, Phoenix, and the like, as well as in select cities scattered across the south – locations not previously known for their manufacturing prowess.

Remember the Missouri Compromise way back in 1820? Imagine roughly that same line reaching both directions to each coast. Once we get to the 1950s, everything below that line (minus Oklahoma, because… Oklahoma) becomes collectively known as the “Sun Belt.” “Sun” because it’s hot down there, but also “Sun” like “Here Comes the _____.” The Sun Belt was the new land of opportunity for workers in the fifties and thereafter.

When speaking of major migration patterns after World War II, especially during the 1950s, the general trend was from the “Rust Belt” to the “Sun Belt” or to the west coast. You’ll live a fuller, happier life if you take a moment right now to lock in mental images of the “Rust Belt” and the “Sun Belt” (plus California/Oregon), then add a few mental arrows indicating the general direction of the major migrations of the decade. Don’t forget those “Second Great Migration” arrows coming out of the south!

The 1950s were still a pretty good time to be a blue-collar worker, but changes were already beginning in that world as well. Republicans had begun taking steps to limit worker protections and weaken labor unions. President Truman had vetoed the Taft-Hartley Act in 1947, but Congress passed it anyway. Depending on your point of view, this act and others like it either reined in union abuses and suppressed communist influences in the workplace or began rolling back worker protections and working conditions to something more akin to the Gilded Age.

Politicians still like to bust out the guarantee that, if elected, they’ll restore the great age of manufacturing and bring back all those textile mills, coal mining jobs, and other 1950s era factory gigs. They’ll eliminate all manufacturing technology developed over the past half-century and ensure a glorious new age of sweaty uneducated labor for outrageously high wages. Oddly, this seems to work far more often than it should.

On The Road Again…

All this moving about was made much easier by the interstate highway system. The Eisenhower Administration championed the passage of the Federal Aid Highway Act (1956), which dramatically increased the number and quality of freeways across the U.S. (Henry Clay and the Whigs would have been thrilled.) States often contributed funding to the segments within their borders, but federal money and planning were key – and that’s what was new and borderline exciting about the whole thing.

Much of this new or improved infrastructure was paid for through taxes on vehicles and gasoline and justified as essential for national defense. (If the Commies landed on our shores, we’d need to be able to get our soldiers, tanks, and boom-sticks to wherever they needed to be quickly and efficiently.) It was tolerated because most people were feeling pretty prosperous and didn’t want those “reds” coming for their nifty new black-and-white television and hi-tech frozen dinners. The trucking industry loved it, as did white families shifting to the suburbs and pretty much anyone moving from the “Rust Belt” to the “Sun Belt” – at least during the move itself.

Not everyone was thrilled, of course. New construction often meant moving or eliminating older neighborhoods and relocating residents. Railroads lost business. Urban residents who relied on public transportation soon found their lives becoming more difficult. The environmentalists wouldn’t have loved it either, but that really wasn’t a thing yet. They’d make up for lost time come 1970, however.

Whatever their downsides, interstate highways have become an essential element of state and federal cooperation and are considered critical infrastructure still today. They make excellent metaphors for freedom and opportunity and adventure (“If you’re going my way, I wanna drive it all night long…”). They’re also powerful symbols of environmental destruction, the loss of humanity and individuality, and a future rushing madly forward with unstoppable force (“I didn’t hear nobody pray, dear brother… I heard the crash on the highway, but I didn’t hear nobody pray…”).

Highways aren’t particularly helpful without automobiles, of course. Once World War II ended, Americans who’d saved up money during the war (partly because there were so few big-ticket items available) were ready to spend. Industries which had been fully committed to wartime production shifted back into making consumer goods, including automobiles. It was a perfect match of supply and demand.

No wonder the communists were so jealous. They didn’t even have toaster ovens.

The other major technological evolution smoothing this massive migration was air conditioning. The underlying technology had been around for several decades, but it was in the post-war years that air conditioning was first considered indispensable. If you want people to be productive during the day and tolerably comfortable and well-rested at night anywhere south of Nebraska, you need affordable, effective, artificial air-cooling. Now it was possible – even practical. When combined with neat stuff like refrigerators, washers and dryers, vacuum cleaners, and the like, Americans in the 1950s had arguably the highest quality of living in the known universe.

Even without Sea Monkeys (which were coming soon).

The Writing (and Painting) On The Wall

Not everything was as idyllic as it may have seemed in the 1950s – at least, not for everyone. Poverty still existed and racial disparities were glaring in many parts of the nation. Even among mainstream white folks, there were hints of discontent.

Some of the art, for example, was getting a bit challenging. Abstract expressionism was just coming into its own, while guys like Edward Hopper and George Tooker were using new forms of realism (Hopper) and surrealism (Tooker) to explore the universality of human isolation. Jack Kerouac violated sexual taboos and experimented with drugs while writing it all down in no particular order. Allen Ginsberg broke poetry to better howl about broken people and a broken society, echoing the chaos around and within by writing in new and provocative forms. J.D. Salinger’s Catcher in the Rye explored teenage disillusionment through the eyes of a young man who failed classes and was diagnosed with mental disorders for how he felt about the world around him.

Also, he cussed. A lot.

Abstract art and the Beatniks may seem tame compared to what came next, but at the time… well, nothing had come next yet.

Making The Grade: What You’re Most Likely To Be Asked

Expect at least one generic multiple choice question about Levittown and the Baby Boom – sometimes together, sometimes considered separately. You should recognize Levittown as a response to the Baby Boom and/or an increased need for affordable housing after World War II, and remember that it was facilitated by improved infrastructure and a rise in automobile ownership. (The racial component probably won’t come up unless you bring it up as part of an essay response.)

The shift from the Rust Belt to the Sun Belt will usually get at least one fairly general question as well – either identifying the movement itself or specifying the underlying causes. From time to time you’ll even see a map included!

APUSH and other advanced classes are likely to ask about ways in which “postwar economic and demographic changes had far-reaching consequences for American society, politics, and culture.” There are all sorts of ways this one can be narrowed down, but be prepared to tie the development of suburbs to things like the Reagan Revolution or to connect resistance to Brown v. Board with busing efforts in the 1970s and the explosion of private schools and voucher programs still being debated today. It’s especially impressive if you have the opportunity to identify technological improvements (automobiles, air conditioning, etc.) as driving forces behind major migration patterns.

None of this means you can ignore all the expected stuff – the Truman Doctrine, the Fair Deal, the Taft-Hartley Act, Brown v. Board, Rosa Parks, the bus boycott, MLK, McCarthyism, NATO, the Marshall Plan, and curriculum writers’ bizarre fascination with John Foster Dulles. If it seems like a lot to keep up with, just wait until you get into the 1960s.

The 1950s (Part One)

NOTE: This post and its sequel are from the rough draft of a book I’m hoping will be called something like “Have To” History: Stuff You Don’t Really Want To Know (But For Some Reason Have To) About The Most Boring Events, People, and Issues in American History.

As is generally the case with the drafts I post here, the final version will presumably be tightened up substantially and better edited. Your comments along the way are very much welcomed. 

The 1950s – Because The Sixties Had To Come From Somewhere (Part One) 

Three Big Things:

1. The 1950s are largely remembered as a time of prosperity and “cultural homogeneity.” Nevertheless, the major issues of the 1960s were poking through everywhere.

2. The explosion of new “suburbs” (like Levittown) was facilitated by more highways and more automobiles. White families fled big cities for protected pockets of all-white schools, churches, shopping, and front lawns that all looked the same.

3. On a larger scale, workers and their families moved from the “Rust Belt” of the northeast to the “Sun Belt” of the south and west in pursuit of better employment opportunities. This move was facilitated by highways and cars as well, along with advancements in the modern miracle of air conditioning.

Introduction

The 1950s are an easily brushed-over decade, whether you’re rushing to get through someone else’s curriculum before “the test” or a lover of history browsing titles at your local bookstore or online.

As part of a formal curriculum, the 50s have the unenviable task of following World War II – which is kind of like booking Led Zeppelin as your opening act but hoping the audience stays for your one-man avant-garde banjo extravaganza. Even teachers who manage to get past “the last good war” before state testing or the AP Exam are anxious to get to the 1960s, where most of the important stuff is naturally engaging all on its own – sex, drugs, rock’n’roll, civil rights, hippies, war protests (and a war to go with them), MLK, JFK, LBJ, Malcolm X, Woodstock, “the pill,” Brown Power, the American Indian Movement, women’s rights – even men on the moon (yes, really).

Sure, we’d like to get to the Reagan Revolution and 9/11, but the Sixties managed to make even stage musicals naughty and blasphemous. And there were Sea Monkeys. Why would we ever move on?

For adults interested in history, it’s almost as bad. Browsing the shelves at your local bookstore or scrolling through Amazon search results, how often do you stop and exclaim, “Hey… post-war suburban development!” There are too many far more tantalizing topics to grab the eye, and no one wants to be the guy on the subway reading The Rise of the Sunbelt: How the Interstate Highway System and Modern Air Conditioning Impacted Twentieth Century Migration Patterns – as if your social life didn’t have enough problems already.

(Thankfully, the book you’re currently reading is a proven status magnet. Currently, everyone in the room either wants you or wants to be you, so play it cool and just keep reading… like you’re too deep in learning to care.)

The 1950s, however, have plenty to add to the conversation – and not just the parts about the Cold War, the G.I. Bill, and the birth of modern rock’n’roll. Let’s see if we can unborify a few of the most neglected or easily overlooked features of the decade before you blindly rush into all the violence, nudity, and social transformation of its successor.

The “Exciting” Parts of the 1950s

Despite its reputation (or lack thereof), there were numerous important history-ish things going on in the 1950s which you probably already know about, even if you don’t realize it.

The Cold War was easily the biggest. This half-century staring contest between the U.S. and U.S.S.R. was going strong by the time all those post-WWII babies started to boom. With it came anticommunist hysteria topping even the “red scare” of the previous generation. All those Congressional committees investigating authors and filmmakers, and McCarthy with his supposed list of “known Communists” working for the State Department? That was all the 1950s.

The Rosenbergs were executed in 1953 for (apparently) passing along U.S. atomic know-how to the Russians. Those same Russians launched Sputnik in 1957, prompting the creation of NASA in the U.S. and all sorts of panic that American children didn’t know enough math or science. (Sometimes it really does take a rocket scientist.)

There were many less-dramatic-but-still-pretty-important results of the Cold War, such as the National Defense Education Act (1958). This provided financial aid for college students and boosted funding for math and science in high schools. It was the first meaningful foray of the federal government into public education, and the basic approach proved so successful that it never went away: if the federal government offers states enough money to do X, Y, or Z, it essentially inserts itself as controlling partner in what were previously state functions (at least according to the Constitution). If states want the money, they have to follow the federal rules and adopt federal priorities.

Who’s a good state? Does someone want federal funding? Hmmm? Heel, state – heel!

Speaking of “sharing” as a means of control, don’t forget the Truman Doctrine (1947), under which the U.S. spends zillions of dollars every year propping up foreign “democracies” with American troops, money, and motivational posters. (The name is periodically updated to reflect whoever’s in office, but its substance hasn’t changed much in 75 years.) In 1954, President Eisenhower popularized the “domino theory” – the idea that if communism were allowed to take hold anywhere in the world, the surrounding nations would soon fall to it as well. Capitalism and democracy, on the other hand, often required overwhelming military force to implement, as if they were for some reason less attractive to the rest of the world.

Weird, right?

American foreign policy was thus dramatically and forever altered. Rather than wait until U.S. interests were actually threatened, the military could now be sent anywhere in the world – locked, loaded, and overflowing with cash and lifestyle advice – to intervene wherever Uncle Sam thought it might be fun or profitable. It turned out to be surprisingly easy to justify just about anything in the name of someone else’s “freedom” or “democracy” or “unrestricted oil supply.” Besides, you wouldn’t want the godless communists to win, would you?!

This “domino theory,” which would be one of the primary justifications for U.S. involvement in Vietnam a decade later, was already being used to justify the millions spent in the 1950s financing the war against communism in Indochina. In the meantime, there was a Korean “conflict” to tide everyone over – like a prequel or an appetizer. At least we got M*A*S*H out of the deal. (Rest in peace, Captain Tuttle.)

The modern Civil Rights Movement commonly associated with the 1960s began in the 1950s as well. The Supreme Court decided Brown v. Board of Education in 1954 and began the long, messy push towards school desegregation. (It’s possible we’ll still get there someday.) Rosa Parks refused to change seats on the bus in 1955, which in turn sparked the Montgomery Bus Boycott of 1955–56. A young reverend by the name of Martin Luther King, Jr., who just happened to pastor a church in the area, added his voice to the protests and soon became the most recognizable face, name, and voice of the entire movement – all before New Year’s Day, 1960.

There are a few other things we usually remember easily enough. The G.I. Bill, which helped returning soldiers go to school or start small businesses. The general economic prosperity of the postwar years. The explosion of modernity for normal people – kitchen appliances, automobiles, television, McDonald’s, and Barbie. Finally, of course, there’s that legendary “cultural homogeneity” of the 1950s – a collective sense of shared purpose lingering from WWII, now redirected into the brave struggle against alternative economic systems and political structures. There’s great comfort in sameness, particularly when accompanied by common enemies and a newfound prosperity for those enemies to threaten.

In reality, the 1950s weren’t quite as universally unified or prosperous as they appeared. Still, it was close enough to give the 1960s something to challenge – a lifestyle and presumed set of values for the youth of the era to reject. (It’s difficult to rebel against the mainstream if there’s no mainstream.) If nothing else, the 1950s made the 1960s possible. The decade became the “ordinary world” for a whole new hero’s journey.

So… what were the boring parts we should make sure we don’t overlook?

Levittown and the Growth of the Suburbs

All those folks coming back from the war needed somewhere to live. Plus, there was that “Baby Boom” thing which somehow started increasing the population – dramatically. The name you should most remember in connection with all of this is William J. Levitt.

Levitt built entire neighborhoods of affordable but decent family homes. The most notable was his pilot project in Long Island, New York – Levittown. Disposable income was up, and while the 30-year mortgage so familiar today wasn’t yet standard, it was becoming increasingly popular. The federal government experimented with ways to keep interest rates low and gave homeowners a big ol’ tax deduction as well. (Remember the part above about using money to promote government-approved lifestyles?) It worked. Levitt sold nearly 17,000 homes in Long Island alone before moving into other markets. Needless to say, other developers quickly followed suit.

The ready availability of automobiles and the growth of highways made travel to and from work more convenient, even at a distance – and just look at all those freshly-mowed lawns… looking exactly the same! These mass-produced suburban homes weren’t always easy to tell apart. It became easy comedy to portray a husband coming home from work and entering the wrong home without ever noticing the difference. But this was the 50s – being the same as everyone else wasn’t exactly a downside.

On the other hand, that homogeneity didn’t end with the shingle choices on your Cape Cod. Levitt’s suburbs, like many others, only sold to white families. This wasn’t something subtle or implied based on a close reading of the historical data; it was established policy. Part of the appeal of the suburbs was getting away from crowded cities and into affordable convenience, but “white flight” was quite intentional as well. White neighborhoods meant your kids could go to all-white schools and you could attend all-white churches and shop at all-white stores, etc. It may seem biased or hurtful to portray racism as planned, systematic, and intentional across the board and by everyone involved; it’s just that it was planned, systematic, and intentional across the board by everyone involved.

Other than that, though, the suburbs were (and are) swell.

Prosperity Doctrines

The federal government had poured massive stimulus into the economy during the war, and it was in no hurry to dial things back just because the bad guys had finally surrendered. Tax dollars both collected and anticipated were funneled into education, social programs, highways and other infrastructure, the aforementioned G.I. Bill, mortgage protection for all those new suburban homeowners, and anything else Congress could think of. While federal spending in the 1950s may have been humble by the standards of subsequent decades, the idea that it was a time of pure self-sufficiency or any version of laissez-faire economics is just silly. That would be like suggesting that homesteaders and railroads after the Civil War forged west without constant, massive government support and encouragement.

Nothing against the “invisible hand,” but it’s terrible at land grants, killing Indians, or promoting interstate travel.

In the 1950s, at least, all that government stimulation turned out to be quite effective. Americans were able to whip themselves into a consumerist frenzy, purchasing homes, cars, appliances, entertainment, and anything else they could think of. All that buying and wanting meant higher demand for pretty much everything, which meant good wages and low unemployment while somehow keeping inflation low. It was truly a marvelous time to be alive.

And white.

NEXT TIME: The 1950s (Part Two) – “It’s Moving Day!”